.NET 7 Design Patterns In-Depth 4. Behavioral Design Patterns – Part I

Chapter 4
Behavioral Design Patterns – Part I
Introduction
Behavioral design patterns deal with the behavior of objects and classes. Their main focus is on the different methods and algorithms that different objects can use to do the same work. This category of design patterns covers not only objects and classes but also the relationships between them.

Structure
In this chapter, we will cover the following topics:

Chain of responsibility
Command
Mediator
State
Strategy
Template method
Objectives
By the end of this chapter, you are expected to be familiar with the six most famous behavioral design patterns and to understand their differences. Using the points presented in this chapter, you should also be able to design the correct infrastructure for classes to fulfill their expected behaviors.

Behavioral design patterns
In this category of design patterns, there are a total of eleven different design patterns, and in this chapter, the six most famous design patterns of this category are introduced, which are:

Chain of responsibility: It allows sending requests along a chain of objects.
Command: Encapsulate a request or task in the form of an object.
Mediator: Facilitate the communication between several objects by directing toward a central object.
State: Change the behavior of the object by changing its internal state.
Strategy: Separate different methods and algorithms for the execution of a task by forming an algorithmic family.
Template method: While defining the main body of the execution of a task, the steps of its execution are assigned to the child classes.
Behavioral design patterns are divided into two types: class and object. The class category distributes behavior between classes using inheritance; Template method and Interpreter (which will be discussed in Chapter 5, Behavioral Design Patterns – Part II) are included in this category. On the other hand, the object category aims to show how objects can work together to do a task by using composition. The remaining behavioral patterns belong to this category.

Apart from the eleven GoF behavioral design patterns, there are other design patterns that deal with the behavior of objects and classes, including:

Blackboard pattern: Provides the possibility of integrating specialized and diverse modules for which there is no definite strategy for their development.
Null object pattern: Provide the default object value.
Protocol stack: Provide communication by different layers, like the different layers of the network.
Scheduled task pattern: Provide the possibility of executing a task in certain time frames.
Single-serving visitor pattern: It is an improved version of the Visitor design pattern.
Specification pattern: By using Boolean logic, it tries to combine different business logic.
Chain of responsibility
Name:

Chain of responsibility

Classification:

Behavioral design patterns

Also known as:

---

Intent:

This design pattern tries to make the incoming request move along a chain of processors; at each step, each member of the chain checks the request and decides whether to deliver it to the next member of the chain or to terminate it.

Motivation, Structure, Implementation, and Sample code:

Suppose that a requirement is raised and it is requested to design and implement an infrastructure through which users can call a web service. In general, the mechanism is that a request is created by the user and sent to the server, where the corresponding web service is found and executed.

After some time, it is announced that requests must be authenticated before running the web service. After a while, it is announced that access must be controlled for the requests after authentication. These continuous changes in the requirements gradually cause the quality of the code to drop.

To avoid this, the whole process can be seen as a chain, where each member of this chain performs a task. For example, the first node performs authentication, and the second node performs access control. If we want to add the request log mechanism later, we can easily define a node in the chain for it and then put it in place. Refer to Figure 4.1:

Figure 4.1: Calling web service request processing chain

With these explanations, it can be concluded that we are ultimately facing a series of handlers, each doing its own work. Each handler has several tasks, including executing its logic and pointing to the next node. This leads us to the following class diagram:

Figure 4.2: Chain of Responsibility design pattern UML diagram

In Figure 4.2, the Handle method is responsible for implementing the logic of each node. This method should call the Next method to maintain the execution sequence. The ContinueWith method specifies the next node in the chain and stores its value in _successor, and the Next method executes the next nodes in the chain. For the diagram in Figure 4.2, we have the following code:

public class Request

{

public string IP { get; set; }

public string Url { get; set; }

public string Username { get; set; }

}

public abstract class RequestHandler

{

RequestHandler _successor;

public abstract void Handle(Request request);

public void ContinueWith(RequestHandler handler) { _successor = handler; }

protected void Next(Request request)

{

if (_successor != null)

_successor.Handle(request);

}

}

public class AuthenticationHandler : RequestHandler

{

public override void Handle(Request request)

{

if (!string.IsNullOrWhiteSpace(request.Username) &&

UserHasAccess(request.Username, request.Url))

Next(request);

else

throw new Exception("User not authenticated");

}

private bool UserHasAccess(string username, string url) => true; // hypothetical stub: assume the real access check is implemented here

}

public class AuthorizationHandler : RequestHandler

{

public override void Handle(Request request)

{

if (request.IP.StartsWith("10."))

Next(request);

else

throw new Exception("Access Denied");

}

}

public class LoggingHandler : RequestHandler

{

public override void Handle(Request request)

{

//log the request here

Next(request);

}

}

To use the preceding structure, you can use the following code:

RequestHandler authenticationHandler = new AuthenticationHandler();

RequestHandler authorizationHandler = new AuthorizationHandler();

RequestHandler loggingHandler = new LoggingHandler();

authenticationHandler.ContinueWith(authorizationHandler); //Line 4

authorizationHandler.ContinueWith(loggingHandler); //Line 5

authenticationHandler.Handle(new Request //Line 6

{

IP = "192.168.1.1",

Username = "vahid",

Url = "http://abc.com/get"

});

As is clear in lines 4 and 5 of the preceding code, each handler uses the ContinueWith method to specify which handler should be the next node of the chain; then, in line 6, the execution of the chain is started, and with the help of the Next method, this execution is carried from node to node.

Participants:

Handler: In the preceding scenario, the RequestHandler is responsible for defining the format for processing requests.
Concrete handler: In the preceding scenario, it is the same as AuthenticationHandler, AuthorizationHandler, and LoggingHandler and is responsible for processing the relevant request. The behavior in this section is that if the handler is responsible for processing the request, it performs the processing; otherwise, it transfers the request to the next node.
Client: The same user is responsible for starting the request processing in the chain through the concrete handler.
Notes:

One of the tasks of the concrete handler is to keep a reference to the next node (the successor) in the chain.
Care should be taken not to form a loop in the design of the chain.
Another example of implementing this design pattern is Middleware in ASP.NET Core (a minimal sketch follows this list).
Handlers can be implemented using the Composite pattern.
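
As a rough illustration of the middleware note above, the following minimal ASP.NET Core sketch wires two in-line middleware components into a chain. It is only a sketch: the X-Username header and the /get route are hypothetical, and it simply shows how each node decides whether to call the next one.

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// First node: an authentication-like check that can terminate the chain.
app.Use(async (context, next) =>
{
    if (!context.Request.Headers.ContainsKey("X-Username")) // hypothetical header
    {
        context.Response.StatusCode = 401;
        return; // the request is terminated; the next node is never called
    }
    await next(context); // pass the request to the next node in the chain
});

// Second node: logging, which always passes the request along.
app.Use(async (context, next) =>
{
    Console.WriteLine($"Request received for {context.Request.Path}");
    await next(context);
});

app.MapGet("/get", () => "Handled"); // hypothetical endpoint at the end of the chain
app.Run();
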
Consequences: Advantages

The request processing order can be controlled. For example, check IP access first, then authenticate the user, then control the access.
The Single Responsibility Principle (SRP) is upheld because the classes that are tasked with carrying out the operation are kept apart from the classes that are tasked with implementing it.
The open/closed principle is met. It is possible to add new handlers to the chain or remove existing handlers without changing the user code. This increases flexibility.
Dependency is reduced. By using this design pattern, both the sender and receiver of the request do not have detailed information about each other and are not dependent on each other. Also, the input request has no information about the chain structure.
Consequences: Disadvantages

Some requests may not be processed due to not finding a suitable handler, and therefore there is no guarantee that the request will be processed.
Applicability:

When it is necessary to perform different processes on a request in order.
When you are faced with a variable set of handlers while processing a request, and the order of the handlers is not predetermined.
Related patterns:

Some of the following design patterns are not related to the chain of responsibility design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Composite
Decorator
Observer
Command
Command
Name:

Command

Classification:

Behavioral design patterns

Also known as:

Action/Transaction

Intent:

This design pattern tries to encapsulate a request or work in the form of an object. Further, this object can be queued and executed to process and perform assigned work. This feature makes it possible to undo a request before it is processed or interrupt the execution of the task.

Motivation, Structure, Implementation, and Sample code:

Suppose we are designing a system for the human resources unit of an organization. One of the stated requirements is that after hiring a new employee, it should be possible to create an email account, an identification card, and a business card for them. This requirement has been introduced in such a way that the human resources expert must register a single request in the system to perform all three tasks. We also need to design the implementation of these three tasks in the form of a transaction so that if any one of the tasks is not done, the rest of the tasks are not done either.

With these explanations, it is clear that we are dealing with a series of commands, such as the command to create an email account, the command to create an ID card, and the command to create a business card. According to the requirement, each of these commands should be able to do two different things. For example, the email account creation command must be able both to create the email account and to undo its creation. We are also faced with a set that holds these commands; each command is a member of this set and is executed through it.

With these words, Figure 4.3 class diagram can be imagined:

Figure 4.3: Command design pattern UML diagram

As shown in the Figure 4.3 diagram, the expert of the human resources unit (Client) submits the request to the Recruitment class, and this class executes the registered commands; whenever a problem occurs, it undoes the operations that have been performed. In this design, it is clear that the Client is not directly connected to the EmployeeManager, because the Client may not know the logic of executing or canceling tasks. The Client only presents the request, and this request is queued and executed in the form of a series of commands. For the Figure 4.3 design, the following code can be imagined:

public interface IRecruitmentCommand

{

void Execute();

void Undo();

}

public class CreateEmailCommand : IRecruitmentCommand

{

EmployeeManager _manager;

public CreateEmailCommand(EmployeeManager manager) => _manager = manager;

public void Execute() => _manager.CreateEmailAccount();

public void Undo() => _manager.UndoCreateEmailAccount();

}

public class DesignIdentityCardCommand : IRecruitmentCommand

{

EmployeeManager _manager;

public DesignIdentityCardCommand(EmployeeManager manager)

=> _manager = manager;

public void Execute() => _manager.DesignIdentityCard();

public void Undo() => _manager.UndoDesignIdentityCard();

}

public class DesignVisitingCardCommand : IRecruitmentCommand

{

EmployeeManager _manager;

public DesignVisitingCardCommand(EmployeeManager manager)

=> _manager = manager;

public void Execute() => _manager.DesignVisitingCard();

public void Undo() => _manager.UndoDesignVisitingCard();

}

/*

For simplicity, the implementation for the methods of this class is not provided

*/

public class EmployeeManager

{

public EmployeeManager(int employeeId) { }

public void CreateEmailAccount() { }

public void UndoCreateEmailAccount() { }

public void DesignIdentityCard() { }

public void UndoDesignIdentityCard() { }

public void DesignVisitingCard() { }

public void UndoDesignVisitingCard() { }

}

public class Recruitment

{

public List<IRecruitmentCommand> Commands { get; set; } = new();

public void Invoke()

{

try

{

foreach (var command in Commands)

command.Execute();

}

catch //If an error occurs, all tasks must be canceled from the beginning

{

foreach (var command in Commands)

command.Undo();

}

}

}

To use this structure, you can do the following:

EmployeeManager employeeManager = new(1); // New employee with ID 1 is being recruited

Recruitment recruitment = new();

recruitment.Commands.Add(new CreateEmailCommand(employeeManager));

recruitment.Commands.Add(new DesignIdentityCardCommand(employeeManager));

recruitment.Commands.Add(new DesignVisitingCardCommand(employeeManager));

recruitment.Invoke();

As it is clear in the preceding code, the Client first creates an object of the type EmployeeManager. Then, instead of using the methods inside this class, it sets commands through Recruitment and then assigns the task of executing the request to Recruitment.

Participants:

Command: In the preceding scenario, it is the IRecruitmentCommand, which is responsible for defining the template for executing requests. The command usually declares methods such as Execute or Undo.
ConcreteCommand: In the preceding scenario, it is the same as CreateEmailCommand, DesignIdentityCardCommand, and DesignVisitingCardCommand, which is responsible for implementing the structure provided by the command. This implementation happens in the form of calling the desired work (Action) through the receiver object.
Invoker: In the preceding scenario, it is Recruitment that has the task of asking a command to execute the request.
Receiver: In the preceding scenario, it is the same as EmployeeManager and is responsible for implementing the logic of each task.
Client: It is the same user who is responsible for creating ConcreteCommand objects, setting their receiver, and handing them to the invoker.
Notes:

By using serialization, commands can be converted into strings and stored in the data source. They can even be sent over the network to a remote user.
To better implement the undo process, a class can be used as a history: every executed operation is stored in a stack data structure, and the undo operation is performed by popping commands off the stack (see the sketch below). Implementing the undo operation with this design pattern is not always simple because, during an operation, some private variables may have changed. To solve this problem, you can use the Memento design pattern.
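
A rough sketch of such a history class is shown below. It assumes the IRecruitmentCommand interface from the sample code above; CommandHistory itself is a hypothetical helper and not part of the book's sample.

public class CommandHistory
{
    // Executed commands are pushed onto a stack so they can be undone in reverse order.
    private readonly Stack<IRecruitmentCommand> _history = new();

    public void ExecuteAndTrack(IRecruitmentCommand command)
    {
        command.Execute();
        _history.Push(command);
    }

    public void UndoLast()
    {
        if (_history.Count > 0)
            _history.Pop().Undo();
    }
}
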
Consequences: Advantages

The single responsibility principle is upheld because the classes that implement the operation are distinct from the classes that invoke the operation.
New commands can be defined and used without changing the existing code. For this reason, the Open/Closed principle is observed.
Undo and redo processes can be implemented.
It is possible to implement the operation with a delay.
Consequences: Disadvantages

Since a new layer is added between the sender and receiver, the complexity of the design increases.
Applicability:

When we need to take a break between tasks, or we have tasks that need to be executed according to a schedule.
When we need to have an operation with undo capability.
The handlers in the chain of responsibility design pattern can be implemented with the help of this design pattern.
To save a copy of the command, you can use the prototype design pattern.
When it is necessary to implement a callback mechanism; commands are an object-oriented alternative to callbacks. This is achieved by sending actions as parameters to objects (see the sketch after this list).
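
As a small, hypothetical illustration of the callback point above (not part of the book's sample), a command object can be handed to another object in the same place where a callback delegate would otherwise be passed:

public interface ICommand
{
    void Execute();
}

public class ShowMessageCommand : ICommand
{
    public void Execute() => Console.WriteLine("Button clicked");
}

public class Button
{
    // The command object plays the role that a callback delegate would otherwise play.
    private readonly ICommand _onClick;

    public Button(ICommand onClick) => _onClick = onClick;

    public void Click() => _onClick.Execute();
}

Calling new Button(new ShowMessageCommand()).Click() then executes the encapsulated action.
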
Related patterns:

Some of the following design patterns are not related to the command design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Prototype
Composite
Memento
Observer
Chain of responsibility
Mediator
Name:

Mediator

Classification:

Behavioral design patterns

Also known as:

---

Intent:

This design pattern tries to establish communication between different objects through a coordination center. The coordination center has the task of maintaining the communication, and in this way, the applicant and the respondent can communicate without needing information about each other.

Motivation, Structure, Implementation, and Sample code:

Suppose a requirement has been raised in which it is requested to provide the possibility of conversation between users. Users will need to chat, specify the text of the message and the recipient of the message, send the message, and the other party will receive the message. For the design of this requirement, it is clear that if the users are in direct communication with each other, the complexity will be very high.

Instead, users can send and receive messages through an intermediary. In this case, users will not need to communicate with each other directly, and all their interactions will go through this intermediary. From the scenario so far, it can be concluded that on one side of the design we have the users who need to talk to each other, called colleagues, and on the other side we have the communication intermediary, called the mediator. With the preceding explanations, the following class diagram can be imagined:

Figure 4.4: Mediator design Pattern UML diagram

As is clear from the Figure 4.4 class diagram, different users can talk to each other. In this scenario, all of them are defined through the Participant class, which implements the IParticipant interface and thereby provides its implementation for the Send and Receive methods. The responsibility of these two methods is to send and receive messages. On the other hand, a communication intermediary called Chatroom has been defined, which implements the IChatroom interface and provides its implementation of the Login, Send, and Logout methods. The user sends messages through these methods. For the Figure 4.4 class diagram, you can have the following code:

public interface IChatroom

{

void Login(IParticipant participant);

void Send(string from, string to, string message);

void Logout(IParticipant participant);

IParticipant GetParticipant(string name);

}

public class Chatroom : IChatroom

{

Dictionary<string, IParticipant> participants = new();

public void Login(IParticipant participant)

{

if (!participants.ContainsKey(participant.Name))

participants.Add(participant.Name, participant);

}

public void Logout(IParticipant participant)

{

if (participants.ContainsKey(participant.Name))

participants.Remove(participant.Name);

}

public void Send(string from, string to, string message)

{

if (participants.TryGetValue(to, out IParticipant receiver))

receiver.Receive(from, message);

else

throw new Exception("Invalid participant");

}

public IParticipant GetParticipant(string name)

=> participants.ContainsKey(name) ? participants[name] : null;

}

public interface IParticipant

{

public string Name { get; set; }

void Send(string to, string message);

void Receive(string from, string message);

}

public class Participant : IParticipant

{

private readonly IChatroom room;

public string Name { get; set; }

public Participant(string name, IChatroom room)

{

this.Name = name;

this.room = room;

}

public void Receive(string from, string message)

=> Console.WriteLine($"Sender: {from}, To: {Name}, Message: {message}");

public void Send(string to, string message)

=> room.Send(Name, to, message);

}

As can be seen in the preceding code, when the Send method is called in Participant, instead of directly delivering the request to the target, this method delivers the request to the room object. This object plays the role of mediator. Then, upon receiving the Send request, the room object calls the Receive method and delivers the message to the recipient. Also, the IChatroom interface in this example has an auxiliary method called GetParticipant, which is not related to the design and is only used to find other users.
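
To tie these pieces together, the following minimal usage sketch is based on the preceding classes; the participant names are only illustrative:

IChatroom room = new Chatroom();

IParticipant vahid = new Participant("Vahid", room);
IParticipant sara = new Participant("Sara", room); // hypothetical second user

// Both colleagues register themselves with the mediator.
room.Login(vahid);
room.Login(sara);

// The sender only talks to the mediator; the mediator delivers the message.
vahid.Send("Sara", "Hello!");
// Output: Sender: Vahid, To: Sara, Message: Hello!
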

Participants:

Mediator: In the preceding scenario, it is IChatroom and is responsible for defining the format for communication with colleagues.
ConcreteMediator: In the preceding scenario, it is the Chatroom and is responsible for implementing the format provided by the Mediator, and in this way, it communicates between colleagues. This class needs to know the colleagues in order to communicate.
Colleague: In the preceding scenario, these are IParticipant and Participant, which are always in contact with the Mediator; whenever one of the colleagues needs to communicate with another colleague, it does so through the mediator.
Client: It is the same user who executes the code through the mediator and ConcreteMediator.
Notes:

Mediator must implement collaborative communication between colleagues.
The purpose of this design pattern is to eliminate interdependencies.
When there is only one mediator, there is no need to define mediator and ConcreteMediator, and colleagues can directly communicate with ConcreteMediator. This principle also applies to colleagues.
Consequences: Advantages

By using this design pattern, communication between objects is transferred to a single place, and in this way, while complying with the single responsibility principle, maintaining the code is also easier. You can also focus on the communication logic of objects through Mediator and on the business of objects through colleagues.
A new mediator can be defined without the need to change existing code, and in this way, the Open/Closed principle is observed.
Coupling between classes decreases, because colleagues communicate only through the mediator.
Reusability is improved; colleagues can be reused by defining a new mediator.
By using this design pattern, many-to-many relationships become one-to-many relationships. One-to-many relationships are much simpler and easier to maintain and understand.
Consequences: Disadvantages

Over time, the mediator itself can become a very complex object, a so-called God object, which can cause problems.
Applicability:

When it is not possible to change classes due to dependence on other classes, in this case, all communication is taken out of the class and established through a mediator.
When we need to use an object in another program, but this is not possible due to its many dependencies on other objects. In this case, with the presence of a mediator, the communication is moved into this class, and therefore the class can be used in other programs as well. A sufficient condition to use the class in another application is to create a new mediator.
Related patterns:

Some of the following design patterns are not related to the mediator design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Facade
Observer
State
Name:

State

Classification:

Behavioral design patterns

Also known as:

Objects for states

Intent:

This design pattern tries to change the behavior of the object by changing its internal state. In this case, different situations are presented in the form of different objects.

Motivation, Structure, Implementation, and Sample code:

Suppose there is a requirement in which a workflow engine is requested for the support process of a company. The support process in this company is that the manager of the support unit sends a ticket to one of the experts from the list of registered tickets; the expert checks the ticket, takes the necessary action, and then waits for the confirmation of the user who registered the ticket. After the end user's approval, the support manager makes a final check, the ticket is closed, and the next ticket is sent to the expert. Based on this description, the state diagram of this process can be considered as follows:

Figure 4.5: Company's support process

To implement this entire structure, you could use only one class and control the different states and the movement between them through if and switch blocks. With this kind of implementation, adding a new state to the process or changing any of the existing states changes the class code, which forces the testing process to start over from the beginning. Therefore, this type of design makes code maintenance and development difficult.

However, to design the preceding process, each state can be defined as a class, and the business related to each state can be encapsulated in that class. In this case, the problem raised about adding a new state, or changing the business of an existing state, will not affect the code of the other states. After each state is converted into a class, an intermediate class is needed to process tickets. This class maintains the status of each ticket and allows each ticket to be processed.

According to the preceding explanations, the following class diagram can be imagined:

Figure 4.6: State design pattern UML diagram

As can be seen in the preceding class diagram, each of the states has become a series of classes that implement the ITicketState interface, and for this reason, they must provide their implementation for the Process method. Through this method, the work that needs to be done in any situation is carried out. Also, the TicketContext class contains the State variable through which it stores the state of each ticket. The Process method in this class is also responsible for using State to call the appropriate Process method in one of the AssignState, DoingState, ApprovalState, or ClosingState classes. According to these explanations, the following code can be imagined for Figure 4.6:

public interface ITicketState

{

void Process(TicketContext context);

}

public class AssignState : ITicketState

{

public void Process(TicketContext context)

{

Console.WriteLine("Ticket Assigned");

context.State = new DoingState();

}

}

public class DoingState : ITicketState

{

public void Process(TicketContext context)

{

Console.WriteLine("Ticket Done");

context.State = new ApprovalState();

}

}

public class ApprovalState : ITicketState

{

public void Process(TicketContext context)

{

Console.WriteLine("Ticket Approved");

context.State = new ClosingState();

}

}

public class ClosingState : ITicketState

{

public void Process(TicketContext context)

{

Console.WriteLine("Ticket Closed");

context.State = new AssignState();

}

}

public class TicketContext

{

public ITicketState State { get; set; }

public TicketContext(ITicketState state) => this.State = state;

public void Process() => State.Process(this);

}

As can be seen in the preceding code, every time the Process method is called in the TicketContext class, the state of the ticket will change and go to the next state. To use the preceding code, you can do the following:

TicketContext context = new(new AssignState());

context.Process();

In the first line, the initial state is AssignState, so by calling the Process method in the second line, the Process method in AssignState will be executed. This method will also change the state of the ticket to DoingState after the work is done. Therefore, by calling the Process method in the TicketContext class again, this time, the Process method in the DoingState class will be executed, and this cycle will proceed in the same way.
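
To make the cycle concrete, the following short sketch (based on the preceding classes) calls Process repeatedly and walks a ticket through all four states:

TicketContext context = new(new AssignState());

context.Process(); // prints "Ticket Assigned"; state becomes DoingState
context.Process(); // prints "Ticket Done"; state becomes ApprovalState
context.Process(); // prints "Ticket Approved"; state becomes ClosingState
context.Process(); // prints "Ticket Closed"; state becomes AssignState again
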

Participants:

Context: In the preceding scenario, it is the TicketContext and is responsible for maintaining the current status. Also, this class is responsible for defining the appropriate framework for use by the client.
State: In the preceding scenario, ITicketState is responsible for defining the format for processing states.
ConcreteState: In the preceding scenario, it is AssignState, DoingState, ApprovalState, and ClosingState, which is responsible for implementing the business related to each state according to the format provided by State.
Client: It is the same user who executes the codes through context.
Notes:

Both the context class and the ConcreteState class can change the state of the object.
If the classes related to different states need context, according to the preceding scenario, you can send a context object to ConcreteState.
Both context and ConcreteState can determine the next state related to the current state based on the conditions.
The client sets the initial state through context. After the initial setting, the client no longer needs to interact directly with the state objects.
There are two modes regarding the creation and destruction of state objects. In the first mode, the state object is created whenever necessary and destroyed after the work is done; in the second mode, the state object is never destroyed after it is created. The choice between these two modes depends on the conditions. For example, if state changes do not occur continuously or the states are not known at the time of execution, the first mode is a suitable option, because it prevents the creation of objects that are never used. But if the state changes continuously, you can use the second mode.
State objects can often be defined using the Singleton pattern (see the sketch below).
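
As a rough sketch of the singleton note above (not part of the book's sample), a state that holds no per-ticket data can expose one shared instance instead of being re-created on every transition; SharedAssignState is a hypothetical variant of AssignState:

public class SharedAssignState : ITicketState
{
    // A single shared instance; safe because this state keeps no per-ticket data.
    public static readonly SharedAssignState Instance = new();

    private SharedAssignState() { }

    public void Process(TicketContext context)
    {
        Console.WriteLine("Ticket Assigned");
        context.State = new DoingState(); // the other states could expose an Instance as well
    }
}
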
Consequences: Advantages

Considering that the code related to each state is moved to its corresponding class, the single responsibility principle is observed.
Since creating a new state does not change the existing state classes or the context class, the open/closed principle is observed.
By removing if blocks and similar blocks from the context class, existing codes become simpler, and maintenance and development are easier.
Consequences: Disadvantages

If the number of states is small, using this design pattern causes complexity.
Applicability:

It can be useful when an object should have different behaviors based on different states.
When an object has a large number of states and the codes for each state are constantly changing.
When there is a class in which there are a large number of if blocks or similar blocks that change the behavior provided by the methods of the class.
When there are duplicate codes for each situation.
Related patterns:

Some of the following design patterns are not related to the state design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Singleton
Flyweight
Strategy
Strategy
Name:

Strategy

Classification:

Behavioral design patterns

Also known as:

Policy

Intent:

This design pattern tries to separate different methods and algorithms of executing a task by forming an algorithm family. The formation of this family will make it possible to replace the algorithms with the least changes when using them.

Motivation, Structure, Implementation, and Sample code:

Suppose a requirement is raised during which it is requested to provide the possibility of exporting the data of a table. Table data can be exported to CSV, TXT, and XML formats. To implement this requirement, the different export algorithms could all be placed in one class. With this method, just as with the problem discussed in the state design pattern, changing an algorithm or adding a new one leads to changes in the class code. This makes the code more difficult to maintain.

But, similar to the approach used in the state design pattern, it is possible to implement each algorithm in a separate class. Each algorithm then joins a family by implementing and following a common interface. The user can also communicate with this family by using the context class. In this way, changing an algorithm or adding a new algorithm will affect only that algorithm, and this will increase the maintainability and extensibility of the code.

According to the preceding explanations, the following class diagram can be imagined:

Figure 4.7: Strategy design pattern UML diagram

As can be seen in the Figure 4.7 class diagram, different algorithms include XMLExporter, CSVExporter, and TXTExporter classes, which all implement the IExporter interface and, in this way, provide their implementation for the Export method. The ExportContext class also takes the type of algorithm required by the user and calls the corresponding Export method using the Process method. This type of algorithm or work execution method is called strategy. For Figure 4.7 class diagram, the following code can be considered:

public interface IExporter

{

void Export(object data);

}

public class XMLExporter : IExporter

{

public void Export(object data)

=> Console.WriteLine("data exported to xml");

}

public class CSVExporter : IExporter

{

public void Export(object data)

=> Console.WriteLine("data exported to csv");

}

public class TXTExporter : IExporter

{

public void Export(object data)

=> Console.WriteLine("data exported to txt");

}

public class ExportContext

{

private readonly IExporter _strategy;

public ExportContext(IExporter strategy) => this._strategy = strategy;

public void Process(object data) => _strategy.Export(data);

}

To use this code, you can do the following:

ExportContext export = new ExportContext(new XMLExporter());

export.Process(new { Name = "Vahid", LastName = "Farahmandian" });

Participants:

Strategy: In the preceding scenario, it is the same as IExporter and is responsible for defining the format for defining algorithms.
ConcreteStrategy: In the preceding scenario, it is XMLExporter, CSVExporter, and TXTExporter, and it is responsible for implementing the format provided by the strategy.
Context: Allows the client to use different algorithms. ConcreteStrategies can access the data of this class.
Client: The same user executes the code by presenting the strategy to the Context.
Notes:

Using inheritance, code that is common between algorithms can be moved to a shared parent class.
Generics can also be used in the context implementation. When the strategies are known at compile time and there is no need to change them at run time, this method can be useful (a sketch of this approach appears after the delegate example below).
Sending strategy to context can be optional. In this case, if the user does not introduce the desired strategy, the context class will use the default strategy to advance the work.
There are different ways to implement this design pattern. One of these methods is using delegates. For this type of implementation, we have the following code:
public delegate void Export(object data);

public class Exporter

{

public static void XMLExport(object data)

{ Console.WriteLine("data exported to xml"); }

public static void CSVExport(object data)

{ Console.WriteLine("data exported to csv"); }

public static void TXTExport(object data)

{ Console.WriteLine("data exported to txt"); }

}

public class ExportContext

{

private Export _strategy;

public ExportContext(Export strategy) => this._strategy = strategy;

public void Process(object data) => _strategy(data);

}

which can be used as follows:

ExportContext export = new(new Export(Exporter.XMLExport));

export.Process(new { Name = "Vahid", LastName = "Farahmandian" });
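
Regarding the earlier note about generics, the following is a rough sketch (not part of the book's sample) of a generic context that fixes the strategy type at compile time; it assumes the IExporter interface shown earlier:

public class ExportContext<TExporter> where TExporter : IExporter, new()
{
    // The strategy type is chosen at compile time through the type parameter.
    private readonly TExporter _strategy = new();

    public void Process(object data) => _strategy.Export(data);
}

Usage would then look like new ExportContext<XMLExporter>().Process(data); there is no way to swap the strategy at run time with this variant.
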

At first glance, this design pattern and the state design pattern have many similarities, but they have differences, including:
The state design pattern has to do with the internal state of a class, while the strategy pattern is just an algorithm that is used to do something and has nothing to do with the internal state of the class.
In the state design pattern, the state object usually needs a context to access information from the context. But in the strategy design pattern, each algorithm is an independent class and does not need client information.
In general, the state design pattern tries to do different things based on different situations, but the strategy design pattern tries to do the same thing with different existing methods.
Consequences: Advantages

It is possible to change the used algorithm during the execution.
The implementation details of an algorithm can be separated from the user of the algorithm.
It is possible to define a new algorithm without changing the existing code, or to change an existing algorithm without changing the other algorithms. For this reason, the Open/Closed principle is observed.
Consequences: Disadvantages

If we are dealing with a small number of algorithms that rarely change, using this pattern can increase the complexity.
The client needs to know the differences between the existing algorithms to be able to use them.
Applicability:

When we are faced with different algorithms to do the same task.
When the algorithms use data that the client should not have access to, by using this design pattern, the related and specific data of each algorithm are encapsulated in the class of that algorithm.
When several classes are similar and differ only in the way of implementing behaviors.
When we need to separate the business domain of a class from the algorithm implementation details; these implementation details are usually not important in the business domain of the class. For example, in the business related to a class, a sorting operation needs to happen somewhere. From the business point of view, it does not matter which sorting algorithm is used, so, in this case, the strategy design pattern can be useful.
When we are faced with a class that has many if blocks and similar blocks, and during these blocks, it tries to choose and execute one of the different algorithms.
Related Patterns:

Some of the following design patterns are not related to the strategy design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Decorator
Flyweight
State
Template method
Template method
Name:

Template method

Classification:

Behavioral design patterns

Also known as:

---

Intent:

This design pattern tries to define the main body of a task's execution while assigning the implementation of its steps to the child classes.

Motivation, Structure, Implementation, and Sample code:

Suppose you are implementing the ability to enter data from different data sources into your system's database. Data can be imported from different sources and in different formats. For example, you may want to enter data from various sources in txt, XML, and CSV formats into your database.

In the preceding scenario, the overall structure of the work is fixed and will probably be similar to the following:

Figure 4.8: Data transformation process

The steps and the sequence are fixed for all source and destination formats, and the way of implementing these steps is different for each source type. For example, if the data source has CSV format, probably in the source data validation step, commas and columns should be checked. Or if the data is in XML format, then it is necessary to pay attention to the correctness of the XML format and the compatibility of some XSD structures.

With the preceding explanations, the following class diagram can be considered:

Figure 4.9: Template Method design pattern UML diagram

As it is clear in Figure 4.9 class diagram, the main body of the work is defined through the DataReader class. In this class, all methods except the Import method are abstract methods. The Import method is not abstract because child classes are not allowed to change it, and this method defines the sequence of work execution. This method is called the template method. The child classes inherit from the DataReader class and provide their implementation for each step. For the preceding class diagram, the following codes can be considered:

public abstract class DataReader

{

public void Import()

{

Connect();

Validate();

Read();

Convert();

}

public abstract void Connect();

public abstract void Validate();

public abstract void Read();

public abstract void Convert();

}

public class TXTReader : DataReader

{

public override void Connect()

=> Console.WriteLine("Connected to txt file");

public override void Validate()

=> Console.WriteLine("Data has valid txt format");

public override void Read()

=> Console.WriteLine("Data from txt file read done");

public override void Convert()

=> Console.WriteLine("Data converted from txt to destination type");

}

public class XMLReader: DataReader

{

public override void Connect()

=> Console.WriteLine("Connected to xml file");

public override void Validate()

=> Console.WriteLine("Data has valid xml format");

public override void Read()

=> Console.WriteLine("Data from xml file read done");

public override void Convert()

=> Console.WriteLine("Data converted from xml to destination type");

}

public class CSVReader : DataReader

{

public override void Connect()

=> Console.WriteLine("Connected to csv file");

public override void Validate()

=> Console.WriteLine("Data has valid csv format");

public override void Read()

=> Console.WriteLine("Data from csv file read done");

public override void Convert()

=> Console.WriteLine("Data converted from csv to destination type");

}

As can be seen in the preceding code, all the methods, except the Import method, are abstract. To use this structure, you can do the following:

DataReader reader = new CSVReader();

reader.Import();

Participants:

AbstractClass: In the preceding scenario, it is the same DataReader that is responsible for abstractly defining the steps of work execution. Also, this class is responsible for defining and implementing the template method. The template method is responsible for calling the steps based on the expected structure.
ConcreteClass: In the preceding scenario, it is TXTReader, XMLReader, and CSVReader. It is responsible for implementing the abstract steps defined by AbstractClass.
Client: The same user executes the code in the desired way.
Notes:

This design pattern creates an inverted control structure, which is called the Hollywood principle. This principle shows how the parent class can call the methods of the child class.
It is very important to distinguish hook methods from abstract methods in the template method implementation (see the sketch after this list). Hook methods are methods that have a default implementation in the parent class, which child classes can optionally override, while abstract methods have no implementation and child classes must override them.
One should always try to keep the number of abstract methods in the template method small.
The use of naming conventions can help the user identify the methods that should be overridden. For example, it can be a good idea to use the Do prefix before the names of abstract methods (DoRead, DoConvert, and so on). Of course, if you use Visual Studio to develop C# code, the IDE can generate the overrides of the parent's abstract methods in the child classes for you.
The factory method is often called by the template method.
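
The following rough sketch (with a hypothetical class name, not taken from the book's sample) shows the difference between a hook method and an abstract method inside a template method:

public abstract class DataReaderWithHook
{
    // The template method: the sequence of steps is fixed here.
    public void Import()
    {
        Connect();
        Validate();
        Read();
    }

    // Abstract steps: child classes must override them.
    public abstract void Connect();
    public abstract void Read();

    // Hook method: has a default implementation that child classes may optionally override.
    protected virtual void Validate()
        => Console.WriteLine("No extra validation required");
}
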
Consequences: Advantages

The user can define and implement certain parts of the algorithm based on their needs.
The size of the code is reduced by moving the common parts of the algorithm from the child classes to the parent class.
Consequences: Disadvantages

Limiting the user to the general structure provided.
As the number of abstraction steps increases, the code becomes more difficult to maintain.
Applicability:

When implementing an algorithm, we need to leave the implementation of some steps of the algorithm to the child classes to implement them based on their needs.
When several classes almost implement the same algorithm and differ only in a series of small parts. In this case, common codes can be removed from the child classes and placed in the parent class to prevent code rewriting.
Related patterns:

Some of the following design patterns are not related to the template method design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Factory method
Strategy
Observer
Conclusion
In this chapter, you got acquainted with the most famous behavioral design patterns and learned how to send requests along a chain of objects. We learned to encapsulate a request or task in the form of an object and how to facilitate communication between several objects by directing them toward a central object. We also learned to change the behavior of the object by changing its internal state and to separate different methods and algorithms for the execution of a task by forming an algorithmic family. We learned to assign the execution steps of a task to child classes while defining the main body of the execution of a task.

In the next chapter, you will learn about the rest of the behavioral design patterns.


.NET 7 Design Patterns In-Depth 3. Structural Design Patterns

Chapter 3
Structural Design Patterns
Introduction
Structural design patterns deal with the relationships between classes in the system. This category of design patterns determines how different objects can form a more complex structure together.

Structure
In this chapter, we will cover the following topics:

Structural design pattern
Adapter
Bridge
Composite
Decorator
Façade
Flyweight
Proxy
Objectives
By the end of this chapter, you will be familiar with structural design patterns and be able to understand their differences. It is expected that by using the points presented in this chapter, you will be able to design the correct structure for the classes and use each of the structural design patterns in their proper place.

Structural design patterns
In this category of design patterns, there are seven different design patterns, which are:

Adapter: Allows objects with incompatible interfaces to cooperate and communicate. This design pattern can exist in two different ways:
Adapter pipeline: In this case, several adapters are used together.
Retrofit interface pattern: An Adapter is used as a new interface for several classes.
Bridge: Separates abstractions from implementation. This design pattern can also be implemented in another way:
Tombstone: In this type of implementation, an intermediate object plays the seeker role, and through that, the exact location of the destination object is identified.
Composite: A tree structure can be created with a single mechanism.
Decorator: New behaviors and functions can be dynamically added to the object without inheritance.
Facade: It is possible to connect to a complex system by providing a straightforward method.
Flyweight: It provides the ability to answer many requests through the sharing of objects.
Proxy: By providing a substitute for the object, you can do things like controlling access to the object.
In addition to these seven GoF patterns, other patterns are about the structure of objects:

Aggregate pattern: It is a Composite with methods for aggregating children.
Extensibility pattern: Hides complex code behind a simple interface.
Marker pattern: An empty interface assigns metadata to the class.
Pipes and filters: A chain of processes in which each process's output is the next input.
Opaque pointer: Pointer to an undeclared or private type, to hide implementation details.
Adapter
In this section, the adapter design pattern is introduced and analyzed, according to the structure presented in the GoF Design Patterns section in Chapter 1, Introduction to Design Patterns.

Name:

Adapter

Classification:

Structural Design Patterns

Also known as:

Wrapper

Intent:

This design pattern tries to link two classes that are not structurally compatible with each other so that they can work together.

Motivation, Structure, Implementation, and Sample code:

Suppose a company has designed an infrastructure to connect and call web services, and through this infrastructure, it connects to various web services. This infrastructure is as follows:

Figure 3.1: Client-External Service initial relation

In the current structure, the Client calls the services through an ExternalService object, according to the type of web service (let us assume only Get and Post). Currently, the code on the client side is as follows:

IExternalService service = new ExternalService();

service.Url = "http://something.com/api/user";

service.Get("pageIndex=1");

This mechanism of calling a web service is used in most places in the software. After some time, the company concluded that it should use an external server to connect to external services, and for this purpose, it bought a product called API Gateway. The problem now is how to connect to the services through the purchased API Gateway, whose class structure is as follows:

Figure 3.2: API Gateway Proxy class structure

Now, to oblige the client to use APIGatewayProxy, we would need to change many parts of the program, which would be a costly task that naturally involves risks. Instead, it is possible to act another way and make the two classes, APIGatewayProxy and ExternalService, compatible with each other and work with them.

To establish this compatibility, we will need to define another class according to the provided IExternalService format and communicate with APIGatewayProxy through this class. In this case, while there will be far fewer changes on the client side (only to the extent of changing the class name), we will be able to work with another class that is completely different from ExternalService in terms of structure, instead of working with ExternalService. According to the preceding explanations, the following class diagram can be considered:

Figure 3.3: Adapter design pattern UML diagram

As seen in the preceding class diagram, the ServiceAdapter class is defined, which implements the IExternalService interface; in this way, the ServiceAdapter class is made compatible with the ExternalService class. Then, inside the ServiceAdapter class, the connection to APIGatewayProxy is made. For the preceding class diagram, the following code can be considered:

public interface IExternalService

{

public string Url { get; set; }

public Dictionary<string, string> Headers { get; set; }

void Get(string queryString);

void Post(object body);

}

public class ExternalService : IExternalService

{

public string Url { get; set; }

public Dictionary<string, string> Headers { get; set; }

public void Get(string queryString)

=> Console.WriteLine($"Getting data from: {Url}?{queryString}");

public void Post(object body)

=> Console.WriteLine($"Posting data to: {Url}");

}

public class APIGatewayProxy

{

public string BaseUrl { get; set; }

public void Invoke(

string action, object parameters, object body,

string verb, Dictionary<string, string> headers

)

=> Console.WriteLine($"Invoking {verb} {action} from {BaseUrl}");

}

public class ServiceAdapter : IExternalService

{

public string Url { get; set; }

public Dictionary<string, string> Headers { get; set; }

public void Get(string queryString)

{

var proxy = new APIGatewayProxy() { BaseUrl = Url[..Url.LastIndexOf("/")] };

proxy.Invoke(

Url[(Url.LastIndexOf("/") + 1)..],

queryString, null, "GET", Headers);

}

public void Post(object body)

{

var proxy = new APIGatewayProxy() { BaseUrl = Url[..Url.LastIndexOf("/")] };

proxy.Invoke(

Url[(Url.LastIndexOf("/") + 1)..],

null, body, "POST", Headers);

}

}

As it is clear in the preceding code, ServiceAdapter communicates with APIGatewayProxy while implementing the IExternalService interface. In the preceding implementation, instead of creating an APIGatewayProxy object every time inside the methods, this can be done in the ServiceAdapter constructor, and that object can be used in the methods. Therefore, according to the preceding codes, the client can use these codes as follows:

IExternalService service = new ServiceAdapter();

service.Url = "http://something.com/api/user";

service.Get("pageIndex=1");

As you can see, the preceding code is the same as the one presented at the beginning of the discussion, with the difference that there the object was created from the ExternalService class, while here it is created from the ServiceAdapter class.

Participants:

Target: In the preceding scenario, it is the same as IExternalService, which is responsible for defining the client's interface. This interface defines the communication protocol between the Adaptee and the Client.
Adaptee: In the preceding scenario, it is the same as APIGatewayProxy, meaning the existing class whose interface must be adapted to the structure the client expects. Usually, the client cannot communicate with this class directly because it has a different structure.
Adapter: In the preceding scenario, it is the same as ServiceAdapter and is responsible for adapting the Adaptee to the Target.
Client: The same user uses objects compatible with Target.
Notes:

The vital point in this design pattern is that the client calls the desired operation in the Adapter, and this Adapter is responsible for calling the corresponding operation in the Adaptee.
The amount of work the adapter must do in this design pattern depends on the similarity or difference between the Target and the Adaptee.
To implement an Adapter, you can also use inheritance, which is called a Class Adapter. Using this method, the Adapter inherits from the Adaptee class while implementing the interface provided by the Target. If the Target were a class instead of an interface, it would be impossible to implement this method in languages like C#, where multiple inheritance is not allowed.
The implementation presented in the previous section, in which the Adapter has an Adaptee object, is called Object Adapter.
The bridge design pattern can be very similar to the adapter design pattern, but these two have different purposes. The bridge design pattern tries to separate the structure from the implementation to enable these two parts to be developed independently. But the adapter tries to change the current interface structure.
The decorator design pattern tries to add a series of new features to the class without changing the interface structure. The proxy design pattern also tries to define a proxy for an object. Therefore, these two design patterns are different from the adapter design pattern.
In the preceding implementation, the intention was only to match the Adaptee with the Target. If there is a need both for the Adaptee to match the Target and for the Target to match the Adaptee, then a Two-Way Adapter will need to be used. In this case, while implementing the Target, the adapter also inherits from the Adaptee. The prerequisite for implementing this method in languages like C#, which do not support multiple inheritance, is that the Target or the Adaptee must be an interface. This method is briefly shown in the following code:
public interface Target {

void A();

}

public class Adaptee {

public void B() { }

}

public class Adapter : Adaptee, Target {

public void A() { }

}

An object can contain methods and variables. Methods are executable, and variables can be initialized and updated. With this description, if we store a method inside a variable, we can give more flexibility to the adapter design pattern, because the method to be executed can then be changed at run time. In other words, in the previous implementations of the adapter design pattern, the client needed to know the method's name to use it. If it is possible to change the name of the method in question at run time, then the name of the method that the client calls and the name of the method that exists in the Target can be different. In this case, the Adapter should be able to manage this name change. To implement this mechanism, you can use delegates in the C# language. This way of implementing the adapter design pattern is also called a Pluggable Adapter.
public class Adaptee

{

public void Print(string message) => Console.WriteLine(message);

}

public class Target

{

public void Show(string input) => Console.WriteLine(input);

}

public class Adapter : Adaptee

{

public Action Request;

public Adapter(Adaptee adaptee) => Request = adaptee.Print;

public Adapter(Target target) => Request = target.Show;

}

The preceding code is a Two-Way Adapter implemented in a Pluggable way. As you can see, Request is defined as an Action<string>, and a different method is assigned to it in the constructor according to the argument passed in. To use the preceding code, you can proceed as follows:

Adapter adapter1 = new(new Adaptee());
adapter1.Request("Hello Vahid");

Adapter adapter2 = new(new Target());
adapter2.Request("Hi my name is Vahid");

In the first line, an Adaptee object is passed to the Adapter (adapter1), and then Request is executed, which causes the Print method stored in Request to run. For the next instance (adapter2), a Target object is passed to the Adapter, so executing Request causes the Show method to run. Using a two-way adapter to implement a pluggable adapter is not necessary. Apart from this method, there are other ways to implement a pluggable adapter, including using abstract operations or a parameterized adapter.
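As a brief illustration of the parameterized form (a sketch based on the Adaptee and Target classes above, not code from the original text; the ParameterizedAdapter class name and the operation keys are hypothetical), the adapter below keeps a dictionary of named delegates and lets the client choose the operation by key at runtime:

public class ParameterizedAdapter
{
    // Maps an operation name chosen by the client to an existing method.
    private readonly Dictionary<string, Action<string>> _operations = new();

    public ParameterizedAdapter(Adaptee adaptee, Target target)
    {
        _operations["print"] = adaptee.Print; // hypothetical keys
        _operations["show"] = target.Show;
    }

    public void Request(string operation, string message)
        => _operations[operation](message);
}

In this form, new operations can be registered without the client knowing the underlying method names; for example, new ParameterizedAdapter(new Adaptee(), new Target()).Request("print", "Hello Vahid"); executes Adaptee.Print.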

Consequences: Advantages

Since the structure is separated from the business logic, SRP is followed.
You can add an adapter without changing the client codes. In this case, OCP has been observed.
Consequences: Disadvantages

It is necessary to define and add new classes and interfaces. It can sometimes increase the complexity of the code.
Applicability:

When we need to use an existing class, but the structure of this class does not match the existing codes.
When we have several classes inherited from a parent class and all lack a certain behavior. Assuming that this behavior cannot be placed in the parent class, Adapter can be used to cover this need.
Related Patterns:

Some of the following design patterns are not related to Adapter design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Bridge
Proxy
Decorator
Bridge
In this section, the bridge design pattern is introduced and analyzed, according to the structure presented in GoF design patterns section in Chapter 1, Introduction to Design Pattern.

Name:

Bridge

Classification:

Structural design patterns

Also known as:

Handle/Body

Intent:

This design pattern tries to separate the abstraction from the implementation and develop abstraction and implementation independently and separately. In other words, this design pattern tries to divide larger classes into two independent structures called abstraction and implementation. With this, each of these two structures can be developed independently. Consider two independent islands. Each of these islands can develop and change independently and a bridge has been created to connect these two islands.

Motivation, Structure, Implementation, and Sample code:

Suppose a requirement has been raised, and it is requested to implement the necessary infrastructure for a service bus. This service bus should be able to call the desired service and log the received result. The team concludes that they will first develop a service bus with minimum features and complete it in the following versions. Therefore, for the first version, the team only wants the service bus to save the log of the received response in a text file while calling the service. For the second version, the team will add access control to the service bus and log to an Elasticsearch database.

Considering the preceding scenario, one design option is to create a class for the first version, define its features in this class, and do the same for the second version through a separate class. To integrate the different versions, these classes can inherit from a parent class. This design is based on inheritance, and the problem with it is that whenever we want to extend either the service-calling side or the logging side, we will probably need to define a new class. Suppose in the third version we want to add logging to a SQL Server database; we will need to define yet another class to support this requirement.

A better design in this scenario is to separate the work of invoking the service from the work of logging, so that logging and service invocation can be developed independently. For this relationship to be formed, the type of relationship changes from inheritance to aggregation, and this aggregation relationship plays the role of a bridge between these two separate islands. For each island, an interface is defined so that different implementations can be provided against these interfaces.

The following class diagram can be considered with the explanations provided in the preceding sections. It shows that connecting to the service is separated from logging. The ILogger interface is defined to develop the logging side, and the IServiceBus interface to develop the connection to the services. The BasicBus and AdvanceBus classes implement the IServiceBus interface, and the TextLogger and ElasticLogger classes implement the ILogger interface. Also, IServiceBus is connected to ILogger through the Logger property. Therefore, the client can combine the appropriate logger and bus. Also, if we want to add the ability to log to SQL Server, we can easily define that class, and there will be no need to change the bus section:

Figure3.4.png
Figure 3.4: Bridge design pattern UML diagram

For the preceding class diagram, the following codes can be considered:

public class Log
{
    public DateTime LogDate { get; set; }
    public string Result { get; set; }
    public string Message { get; set; }
}

public interface ILogger
{
    void Log(Log data);
}

public class TextLogger : ILogger
{
    public void Log(Log data)
        => Console.WriteLine(
            $"Log to text: {data.LogDate}-{data.Result}-{data.Message}");
}

public class ElasticLogger : ILogger
{
    public void Log(Log data)
        => Console.WriteLine(
            $"Log to elastic: {data.LogDate}-{data.Result}-{data.Message}");
}

public interface IServiceBus
{
    ILogger Logger { get; set; }
    void Call(string url);
}

public class BasicBus : IServiceBus
{
    public ILogger Logger { get; set; }

    public void Call(string url)
    {
        Console.WriteLine($"Request sent to {url}");
        Logger.Log(new Log
        {
            LogDate = DateTime.Now,
            Result = "OK",
            Message = "Response received."
        });
    }
}

public class AdvanceBus : IServiceBus
{
    public ILogger Logger { get; set; }

    public void Call(string url)
    {
        if (new Uri(url).Scheme == "http")
        {
            Logger.Log(new Log
            {
                LogDate = DateTime.Now,
                Result = "Failed",
                Message = "HTTP not supported!"
            });
        }
        else
        {
            Console.WriteLine($"Request sent to {url}");
            Logger.Log(new Log
            {
                LogDate = DateTime.Now,
                Result = "OK",
                Message = "Response received."
            });
        }
    }
}

As is clear in the preceding code, the IServiceBus interface has a property called Logger of type ILogger, which connects the bus to the logging mechanism. This connection is visible inside the Call method. Also, the two classes BasicBus and AdvanceBus implement the IServiceBus interface; the BasicBus class corresponds to the first version and AdvanceBus to the second version. To use this structure, you can do the following:

IServiceBus bus = new BasicBus()
{
    Logger = new TextLogger()
};
bus.Call("https://google.com");

According to the preceding code, if we want to save the log in Elasticsearch instead of a text file, we can create an object from ElasticLogger instead of TextLogger. Also, if we want to use the second version (AdvanceBus) instead of the first version (BasicBus), we only need to create an object from AdvanceBus instead of BasicBus.
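For instance, based on the classes above, combining the second version of the bus with Elasticsearch logging would look like the following brief sketch:

IServiceBus bus = new AdvanceBus()
{
    Logger = new ElasticLogger()
};
bus.Call("https://google.com");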

Participants:

Abstraction: which is the same as IServiceBus in the preceding scenario. It is responsible for defining the format for abstractions. This interface should also provide a reference to the implementor.
RefinedAbstraction: the same as BasicBus and AdvanceBus in the preceding scenario. It is responsible for implementing the format provided by abstraction.
Implementor: which in the preceding scenario is ILogger. It is responsible for defining the format for implementations. This interface does not need to have a reference to abstraction.
ConcreteImplementor: which in the preceding scenario is the same as TextLogger and ElasticLogger, is responsible for implementing the template provided by implementor.
Client: It is the user and executes the code through abstraction.
Notes:

In this design pattern, abstraction is responsible for sending the request to the implementor.
Abstraction usually defines and implements a series of complex behaviors dependent on a series of basic behaviors defined through the implementor.
If there is only one implementation, defining a separate Implementor abstraction is not strictly necessary.
The important point in this design pattern is when, and from which ConcreteImplementor, the implementor object should be created. There are different ways to answer this question. One method is to prepare the appropriate implementor object using the Abstract Factory design pattern, based on other characteristics of the request. In this case, the client will not be involved in the complexity of creating the implementor object (a brief sketch follows these notes).
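As a minimal illustration of that idea (an assumption, not code from the original text; the LoggerFactory class and the configuration key are hypothetical), a simple factory can hide which ConcreteImplementor is instantiated:

public static class LoggerFactory
{
    // Hypothetical factory: chooses the concrete ILogger based on a configuration value.
    public static ILogger Create(string target)
        => target switch
        {
            "elastic" => new ElasticLogger(),
            _ => new TextLogger()
        };
}

The client could then write IServiceBus bus = new BasicBus { Logger = LoggerFactory.Create("elastic") }; without knowing which logger class is actually used.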
Consequences: Advantages

It is possible to define a new RefinedAbstraction without changing the implementor. It is also possible to add new ConcreteImplementors without changing the abstraction. Hence, the OCP has been met.
Due to the separation of abstraction and implementor from each other, SRP has been observed.
Consequences: Disadvantages

Applying this design pattern to a code structure with many internal dependencies will increase complexity.
Applicability:

This design pattern can be used when faced with a class in which several different kinds of behavior have been implemented. Suppose there is a class that contains both the text-file logging and the Elasticsearch logging operations. Using this design pattern, we can split this monolithic structure. The smaller the classes are, the easier it will be to debug and develop them.
When developing a class in different and independent dimensions, this design pattern can be used. For this purpose, each dimension can be defined and implemented in a separate structure.
If we need to switch between different implementations at runtime, we can benefit from this pattern.
Related patterns:

Some of the following design patterns are not related to bridge design pattern, but to implement this design pattern, the following design patterns will be useful:

Abstract Factory
Builder
Adapter
Composite
In this section, the composite design pattern is introduced and analyzed, according to the structure presented in GoF design patterns section in Chapter 1, Introduction to design pattern.

Name:

Composite

Classification:

Structural design patterns

Also known as:

---

Intent:

This design pattern tries to compose objects into tree structures and lets clients treat individual objects and compositions of objects uniformly.

Motivation, Structure, Implementation, and Sample code:

Suppose there is a requirement in which we need to give the user the possibility to define the program’s menus. Each menu can have sub-menus and menu members. Each member of the menu has text and address of the desired page, which should be directed to the desired page by clicking on it. In this scenario, we are facing a hierarchical or tree structure. For example, the following figure shows an example of this menu:

Figure3.5.png
Figure 3.5: Sample menu structure

In the preceding structure, the main menu has a member called Overview. This member has four other members: Intro, Architecture Components, Class Libraries and Tutorials. The Tutorials member has two other members: Template Changes and Use Visual Studio, of which Use Visual Studio again has three members: New Project, Debug and Publish. In other words, each member in the menu, according to the Figure 3.5, has three features: text, address and subcategory members. The leaves of the tree, in the preceding structure, do not have subgroup members and only have text and address.

With these explanations, we are faced with two entities. One entity for tree leaves and another for tree nodes. With these explanations, the following figure of class diagram can be imagined:

Figure3.6.png
Figure 3.6: Composite design pattern UML diagram

As shown in the preceding class diagram, there are two types of menu members: tree leaves and tree nodes. Both types, regardless of whether they are leaves or nodes, have a text and an address and can be printed. The Menu class plays the role of a tree node and allows child nodes or leaves to be added to it. For this reason, the Menu class, in addition to implementing the IMenuComponent interface, also has an aggregation relationship with this interface. The MenuItem class plays the role of a leaf in the preceding design. With the preceding explanations, the following code can be considered for the Figure 3.6:

public interface IMenuComponent
{
    string Text { get; set; }
    string Url { get; set; }
    string Print();
}

public class Menu : IMenuComponent
{
    public string Text { get; set; }
    public string Url { get; set; }
    public List<IMenuComponent> Children { get; set; }

    public string Print()
    {
        StringBuilder sb = new();
        sb.Append($"Root: {Text}");
        sb.Append(Environment.NewLine);
        foreach (var child in Children)
        {
            sb.Append($"Parent: {Text}, Child: {child.Print()}");
            sb.Append(Environment.NewLine);
        }
        return sb.ToString();
    }
}

public class MenuItem : IMenuComponent
{
    public string Text { get; set; }
    public string Url { get; set; }

    public string Print() => $"{Text}";
}

To create a menu, with the structure presented at the beginning of the topic, you can proceed as follows:

IMenuComponent menu = new Menu()
{
    Text = "Overview",
    Url = "/overview.html",
    Children = new List<IMenuComponent>()
    {
        new MenuItem { Text = "Intro", Url = "/intro.html" },
        new MenuItem { Text = "Architecture Component", Url = "/arch.html" },
        new MenuItem { Text = "Class Libraries", Url = "/class.html" },
        new Menu
        {
            Text = "Tutorials",
            Url = "/tutorials.html",
            Children = new List<IMenuComponent>
            {
                new MenuItem { Text = "Template Changes", Url = "/tpl.html" },
                new Menu
                {
                    Text = "Use Visual Studio",
                    Url = "/vs.html",
                    Children = new List<IMenuComponent>
                    {
                        new MenuItem { Text = "New Project", Url = "/new-project.html" },
                        new MenuItem { Text = "Debug", Url = "/debug.html" },
                        new MenuItem { Text = "Publish", Url = "/publish.html" }
                    }
                }
            }
        }
    }
};

Console.WriteLine(menu.Print());

When the Print method is called in the last line, according to the nature of the menu object, which is of the Menu type, the Print method in this class is called. Within this method, according to the nature of each member (node or leaf), the corresponding Print method is called in the corresponding class.

Participants:

Component: In the preceding scenario, it is the same as IMenuComponent and is responsible for defining the common format for Leaf and Composite. If needed, Component can be defined as an abstract class and default implementations can be placed in it. Component should also provide a template for navigating members. Providing access to the parent node is another task the Component can take on; however, this feature is optional.
Leaf: In the preceding scenario, it is the same as MenuItem and is responsible for implementing the leaf.
Composite: In the preceding scenario, it is the same as Menu and plays the role of the node. Each node can have children, and Composite should provide the appropriate capabilities for managing them.
Client: It is the user and creates a tree structure through component.
Notes:

Using the composite design pattern, we have a primitive type (leaf) and a composite type (node). Both of these types implement a common interface. According to these points, a tree structure can be implemented using this design pattern.
Typically, navigation works as follows: if the request reaches a leaf, it is executed; if it reaches a node, the request is forwarded to the child nodes or leaves. This process continues until a leaf is reached. A node can also perform a series of tasks before or after forwarding the request to its children.
By keeping a reference to the parent node in each member, it is possible to navigate both downward to the children and upward to the parent from any node. This feature also helps when implementing the Chain of Responsibility design pattern; a brief sketch follows these notes.
Using the Flyweight design pattern, the parent can be shared between nodes or leaves.
To get the most out of the design, common tasks should be placed in the Component class or interface. But sometimes we face behaviors that are meaningful for Composite and meaningless for Leaf. For example, in the preceding scenario, Children is only placed in Composite, whereas we could have placed it in Component and then provided a default implementation. What to decide in these circumstances is a cost-benefit analysis. If we put the Children property in the Component, the client would deal with Leaf and Composite in exactly the same way, which creates transparency in the design. On the other hand, it endangers the safety of the code, because the client could do meaningless things that cannot be detected at compile time, and it also wastes a small amount of space.
Conversely, by not placing Children in the Component, we keep the code safe, and meaningless operations are caught at compile time, but we lose transparency. Therefore, the choice of implementation depends on a cost-benefit analysis.

When an operation such as search is repeated many times, you can increase the overall efficiency by caching its results.
The structure can be navigated using the Iterator design pattern.
By using the Visitor template, you can perform a task on all members.
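As a brief illustration of the parent-reference note above (a standalone sketch, not part of the original listing; the MenuNode class is hypothetical), the Composite can set the Parent property whenever a child is added, which enables upward as well as downward navigation:

public class MenuNode
{
    public string Text { get; set; }
    public MenuNode Parent { get; private set; }
    public List<MenuNode> Children { get; } = new();

    // Setting Parent while adding a child makes upward navigation possible.
    public MenuNode Add(MenuNode child)
    {
        child.Parent = this;
        Children.Add(child);
        return this;
    }
}

For example, after adding an Intro node to an Overview node with Add, the Intro node can reach Overview through its Parent property.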
Consequences: Advantages

The way of dealing with primitive and composite types is uniform from the client's point of view, which makes things easier for the client.
You can add new member types without changing the existing code. The client is not involved in these changes, and in this way the OCP is observed.
You can easily work with complex tree structures.
Consequences: Disadvantages

Although the possibility of adding new members can be considered an advantage, it can also be a disadvantage: sometimes we want to restrict which kinds of members can be added, and with this design pattern such restrictions can only be enforced with runtime checks.
It may not be easy to define the Component and place common behaviors in it; the Component may have to be defined in overly general terms.
Applicability:

When we are faced with a tree structure of objects.
When we need the way of dealing with leaves and nodes to look uniform from the client's point of view. This is achieved by having nodes and leaves implement a single, common interface.
Related patterns:

Some of the following design patterns are not related to Composite design pattern, but to implement this design pattern, checking the following design patterns will be useful:

Chain of Responsibility
Flyweight
Iterator
Visitor
Command
Builder
Decorator
Interpreter
Decorator
In this section, the decorator design pattern is introduced and analyzed, according to the structure presented in GoF design patterns section in Chapter 1, Introduction to Design patterns.

Name:

Decorator

Classification:

Structural design patterns

Also known as:

Wrapper

Intent:

This design pattern tries to add some new functionality to the class at runtime or dynamically.

Motivation, Structure, Implementation, and Sample code:

Suppose we are designing a photo editing application and there is a requirement to convert an image to black and white or to add text on the image. There are different ways to satisfy this requirement. One way is to go into the main class and add the necessary methods for black-and-white conversion and adding text. But this will make the main class harder to maintain as more requirements are added.

Another way is to define a series of classes for black-and-white conversion and adding text that inherit from the main class. In such scenarios, where the capabilities of a class need to be extended, inheritance is one of the first ideas that comes to mind. But inheritance has a series of points that must be considered before applying it to the design. For example, in many programming languages, a class can inherit from only one class. Also, inheritance is static by nature: the behavior added by a subclass is fixed at compile time and cannot be attached to or removed from an object at runtime. The main class may even be defined as sealed, in which case inheriting from it is not possible at all.

The third solution is to use aggregation or composition to solve this problem. With this type of relationship, an object holds a reference to the main object and can therefore perform a series of additional tasks on top of it. Using this solution, we also no longer face the limitations of inheritance. The only important point is that, from the user's point of view, the main and supplementary objects should look the same. To comply with this point, both the main class and the supplementary class implement a single interface. Finally, to add a new feature to the main object, it is enough to pass the main object to the supplementary class.

Therefore, with these explanations, the following class diagram can be considered:

Figure3.7.png
Figure 3.7: Decorator design pattern UML diagram

As shown in Figure 3.7, the Photo class is the main class and implements the IPhoto interface. If the client wants the original photo, it uses this class and receives the original image. On the other side is the PhotoDecoratorBase class, which is abstract in the preceding diagram. This class implements the IPhoto interface like the Photo class and also has an aggregation relationship with IPhoto. This means that, by receiving a Photo object, it can add a series of new features to it. Next, WatermarkDecorator and BlackWhiteDecorator are defined, which inherit from PhotoDecoratorBase. The important point here is that the GetPhoto method in the PhotoDecoratorBase class is defined as a virtual method so that each child class can change its behavior. With all these explanations, the client can hand the original image to the decorators and, in this way, have the desired work done on the photo, or it can bypass the decorators and work directly with the Photo class.

According to the preceding diagram, the following codes can be considered:

public interface IPhoto
{
    Bitmap GetPhoto();
}

public class Photo : IPhoto
{
    private readonly string filename;
    public Photo(string filename) => this.filename = filename;
    public Bitmap GetPhoto() => new(filename);
}

public abstract class PhotoDecoratorBase : IPhoto
{
    private readonly IPhoto _photo;
    public PhotoDecoratorBase(IPhoto photo) => _photo = photo;
    public virtual Bitmap GetPhoto() => _photo.GetPhoto();
}

public class WatermarkDecorator : PhotoDecoratorBase
{
    private readonly string text;

    public WatermarkDecorator(IPhoto photo, string text)
        : base(photo) => this.text = text;

    public override Bitmap GetPhoto()
    {
        var photo = base.GetPhoto();
        Graphics g = Graphics.FromImage(photo);
        g.DrawString(
            text,
            new Font("B Nazanin", 18),
            Brushes.Black,
            photo.Width / 2,
            photo.Height / 2);
        g.Save();
        return photo;
    }
}

public class BlackWhiteDecorator : PhotoDecoratorBase
{
    public BlackWhiteDecorator(IPhoto photo) : base(photo) { }

    public override Bitmap GetPhoto()
    {
        var photo = base.GetPhoto();
        //Convert photo to black and white here
        return photo;
    }
}

As can be seen in the preceding code, the Photo class implements the IPhoto interface. The PhotoDecoratorBase class also receives an IPhoto object while implementing this interface. The decorators then inherit from this class, and each one provides its own functionality. Each decorator, while overriding GetPhoto, first calls GetPhoto on the wrapped IPhoto object and then adds its new feature to the result. In order to use this code, you can do the following:

IPhoto photo = new Photo("C:\\sample.png");
photo.GetPhoto(); // returns the original photo

WatermarkDecorator watermarkDecorator = new WatermarkDecorator(photo, "Sample Watermark");
watermarkDecorator.GetPhoto();

BlackWhiteDecorator blackWhiteDecorator = new BlackWhiteDecorator(photo);
blackWhiteDecorator.GetPhoto();

In the first line, a Photo object is created and the GetPhoto method of the Photo class is called through this object. In the third and fourth lines, we give the object created in the first line to the WatermarkDecorator class and add text to the photo through that. In the fifth and sixth lines, we give the object created in the first line to the BlackWhiteDecorator and convert the image to black and white.
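Because every decorator both accepts and exposes an IPhoto, decorators can also be stacked. The following brief sketch, based on the classes above, first adds the watermark and then converts the result to black and white:

IPhoto decorated = new BlackWhiteDecorator(
    new WatermarkDecorator(new Photo("C:\\sample.png"), "Sample Watermark"));
decorated.GetPhoto(); // watermarked photo, converted to black and white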

Participants:

Component: In the preceding scenario, it is the same as IPhoto and has the task of defining the template for the classes to which we want to add new features.
ConcreteComponent: In the preceding scenario, it is Photo that implements the template provided by component. In fact, this class is the same class to which we want to add a new feature.
Decorator: In the preceding scenario, the PhotoDecoratorBase has a reference to the component and tries to provide a template matching the template provided by the component.
ConcreteDecorator: In the preceding scenario, it is the same as WatermarkDecorator and BlackWhiteDecorator, which adds functionality to the component.
Client: It is the same user who uses the code through the component or decorator.
Notes:

In this design pattern, the decorator sends the incoming request to the component. In this process, it may perform some additional tasks before or after sending the request to the component.
Using this design pattern, there is more flexibility in adding or removing a feature than inheritance.
By using this design pattern and decorator, you can add features based on your requirements, and you don’t need to add all useless features to the class.
The decorator object must have an interface that matches the component interface.
If only one feature needs to be added, then there is no need to define the decorator abstract class.
The component class should be light and simple as much as possible, otherwise the decorator will have to implement features that it does not need, and this will make the decorator heavy and increase its complexity.
Using the Strategy design pattern, the internal structure (the guts) of the object is changed, while using the Decorator, only the outside (the skin) of the object is changed. For example, suppose we are dealing with a heavy and complex component. In this case, instead of defining a decorator for it, you can use Strategy to delegate parts of the component's behavior to other objects, so there is no need to transfer the weight of the component to the decorator.
Because this design pattern only changes the object's skin, the component does not need to know about its decorators. This point is another difference between Strategy and Decorator.
Adapter, Proxy and Decorator design patterns have similarities, but at the same time they have an important difference. The Adapter design pattern basically provides a different interface, the Proxy pattern provides exactly the same interface, and the Decorator pattern reinforces and provides the existing interface.
The Chain of Responsibility design pattern also has similarities with the Decorator design pattern. The difference between these two design patterns is that by using Chain of Responsibility, the chain of execution can be interrupted somewhere, but this is not possible in Decorator. Also, by using Chain of Responsibility, the execution of a task in one node of the chain is independent of its execution in another node. But in Decorator, the execution of a task depends on the main interface.
If there is a need to define different decorators, then these decorators can be independent from each other.
Consequences: Advantages

You can extend the behavior of an object without adding a child class.
It is possible to add or disable some features to the class at runtime.
Several behaviors can be combined with each other using decorator.
A large class can be divided into several small classes, each class has a single task, and in this way, SRP is observed.
Consequences: Disadvantages

Removing a specific decorator from the middle of a stack of wrappers is difficult.
Code debugging process becomes more complicated.
Applicability:

When we need to add a feature to a class dynamically and transparently, without affecting the existing objects.
When it is not possible to add new functionality to the class through inheritance.
Related patterns:

Some of the following design patterns are not related to Decorator design pattern, but in order to implement this design pattern, checking the following design patterns will be useful:

Strategy
Composite
Chain of responsibility
Adapter
Proxy
Facade
In this section, the facade design pattern is introduced and analyzed, according to the structure presented in GoF design patterns section in Chapter 1, Introduction to Design Patterns.

Name:

Facade

Classification:

Structural design patterns

Also known as:

---

Intent

This design pattern tries to facilitate communication with different components of a system by providing a simple interface.

Motivation, Structure, Implementation, and Sample code:

In Figure 3.8, users are directly connected to the different components of the system in order to use them, which increases complexity. The Facade design pattern tries to manage this complexity by providing a simple interface, so that the user does not get involved in the communication complications:

Figure3.8.png
Figure 3.8: Interactions in system without the presence of Facade

Therefore, in the presence of the Facade design pattern, the Figure 3.8 becomes as follows:

Figure3.9.png
Figure 3.9: Interactions in system with presence of Façade

In order to clarify the issue, pay attention to the following scenario:

Suppose there is a need in which the necessary infrastructure is needed to inquire about plane tickets from different airlines. Let us assume that we are dealing with three airlines, Iran Air, Mahan and Ata, and we need to inquire about the price of plane tickets from these airlines by providing the date, origin and destination, and then show the results of the inquiry to the user in order of cheap to expensive.

One view is that the user directly connects to each airline and performs the inquiry process. In this case, as in the Figure 3.8, there will be communication complexity and the user will be involved in how to connect and communicate with the airlines, which will increase the complexity and make the code more difficult to maintain.

According to the Figure 3.9, by providing a simple interface to the user, the way of communicating with the airline can be hidden from the user's view, and the user can only work with the provided interface and make ticket inquiries. With these explanations, the following class diagram can be imagined:

Figure3.10.png
Figure 3.10: Facade design pattern UML diagram

As shown in the preceding diagram, the user submits the inquiry request to the TicketInquiry class, and this class communicates with each of the servers, collects the results, and returns them to the user. For the preceding design, the following code can be imagined:

public class Ticket
{
    public DateTime FlightTime { get; set; }
    public string FlightNumber { get; set; }
    public string From { get; set; }
    public string To { get; set; }
    public int Price { get; set; }

    public override string ToString()
        => $"{From}-{To}, {FlightTime}, Number:{FlightNumber},Price:{Price}";
}

public class TicketInquiry
{
    public List<Ticket> Inquiry(DateTime date, string from, string to)
    {
        var iranAirFlights = new IranAir().SearchFlights(date, from, to);
        var mahanFlights = new Mahan().Search(date, from, to);
        var ataFlights = new ATA().Find(date, from, to);

        List<Ticket> result = new();
        result.AddRange(iranAirFlights);
        result.AddRange(mahanFlights);
        result.AddRange(ataFlights);
        return result.OrderBy(x => x.Price).ToList();
    }
}

public class IranAir
{
    readonly Ticket[] iranairTickets = new[]
    {
        new Ticket() { FlightNumber = "IA1000", FlightTime = new DateTime(2021,01,02,11,20,00), Price = 800000, From="Tehran", To="Urmia" },
        new Ticket() { FlightNumber = "IA2000", FlightTime = new DateTime(2021,01,02,12,45,00), Price = 750000, From="Tehran", To="Rasht" },
        new Ticket() { FlightNumber = "IA3000", FlightTime = new DateTime(2021,01,03,09,10,00), Price = 700000, From="Tehran", To="Urmia" },
        new Ticket() { FlightNumber = "IA4000", FlightTime = new DateTime(2021,01,02,18,45,00), Price = 775000, From="Tehran", To="Tabriz" },
        new Ticket() { FlightNumber = "IA5000", FlightTime = new DateTime(2021,01,02,22,00,00), Price = 780000, From="Tehran", To="Ahvaz" },
    };

    public Ticket[] SearchFlights(DateTime date, string from, string to)
        => iranairTickets.Where(
            x => x.FlightTime.Date == date.Date &&
                 x.From == from && x.To == to).ToArray();
}

public class Mahan
{
    readonly Ticket[] mahanTickets = new[]
    {
        new Ticket() { FlightNumber = "M999", FlightTime = new DateTime(2021,01,03,13,30,00), Price = 1500000, From="Tehran", To="Zahedan" },
        new Ticket() { FlightNumber = "M888", FlightTime = new DateTime(2021,01,04,15,00,00), Price = 810000, From="Tehran", To="Urmia" },
        new Ticket() { FlightNumber = "M777", FlightTime = new DateTime(2021,01,02,06,10,00), Price = 745000, From="Tehran", To="Rasht" }
    };

    public Ticket[] Search(DateTime date, string from, string to)
        => mahanTickets.Where(
            x => x.FlightTime.Date == date.Date &&
                 x.From == from && x.To == to).ToArray();
}

public class ATA
{
    readonly Ticket[] ataTickets = new[]
    {
        new Ticket() { FlightNumber = "A123", FlightTime = new DateTime(2021,01,02,07,10,00), Price = 805000, From="Tehran", To="Urmia" },
        new Ticket() { FlightNumber = "A456", FlightTime = new DateTime(2021,01,03,09,20,00), Price = 750000, From="Tehran", To="Sari" },
        new Ticket() { FlightNumber = "A789", FlightTime = new DateTime(2021,01,02,16,50,00), Price = 700000, From="Tehran", To="Tabriz" },
        new Ticket() { FlightNumber = "A159", FlightTime = new DateTime(2021,01,03,23,10,00), Price = 775000, From="Tehran", To="Sanandaj" },
        new Ticket() { FlightNumber = "A357", FlightTime = new DateTime(2021,01,02,05,00,00), Price = 780000, From="Tehran", To="Urmia" },
    };

    public Ticket[] Find(DateTime date, string from, string to)
        => ataTickets.Where(
            x => x.FlightTime.Date == date.Date &&
                 x.From == from && x.To == to).ToArray();
}

As can be seen in the preceding code, the user submits the request to TicketInquiry, and TicketInquiry is in charge of managing the communication with the airlines and the ticket inquiry, and delivers the result to the user. In order to use this structure, you can proceed as follows:

TicketInquiry ticketInquiry = new TicketInquiry();
foreach (var item in ticketInquiry.Inquiry(new DateTime(2021, 01, 02), "Tehran", "Urmia"))
{
    Console.WriteLine(item.ToString());
}

In the preceding code, the user makes an inquiry through the TicketInquiry class and displays the received answer in the output.

Participants:

Facade: In the preceding scenario, it is TicketInquiry and is responsible for handling the client's request. In this responsibility, facade can connect to subsystems and track the request.
Subsystem classes: In the preceding scenario, these are IranAir, Mahan and ATA, which are responsible for implementing the business logic of each subsystem. These classes should not have any knowledge of the facade.
Client: It is the same user who delivers the request to facade and receives the response from facade.
Notes:

In order to prevent Facade from becoming complicated, it may be necessary to use several facades and each facade will implement the task of managing related requests.
If we want to hide the way of instantiating subsystems from the client's view, then we can use abstract factory instead of this design pattern.
Flyweight design pattern tries to create small and light objects, while facade design pattern tries to cover the whole system through one object.
Facade design pattern can be similar to mediator design pattern. Because both of these design patterns try to manage communication between classes. However, these two design patterns also have some differences:
The facade pattern only introduces a simple interface and does not provide any new functionality. Also, the subsystems themselves are unaware of the facade and can still communicate directly with other subsystems.
The mediator design pattern, on the other hand, depends on communication to a central core, and subsystems are not allowed to communicate directly with each other and can only communicate with each other through this central core.
In most cases, the facade class can be implemented as a singleton (a brief sketch follows these notes).
The facade design pattern has similarities with the proxy pattern, because both of them stand in front of a more complex entity and create or access it when necessary. However, unlike a facade, a proxy implements the same interface as its service object, so the proxy and the real subject can be substituted for each other.
In order to reduce the dependency between client and subsystem, facade can be defined as an abstract class. In this case, different implementations can be provided for each facade and allow the Client to choose and use the most appropriate implementation among these different implementations.
If it is necessary to cut off the client's access to the subsystem, the subsystems can be defined as private classes in the Facade.
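As a brief illustration of the singleton note above (an assumption, not code from the original text; the class name TicketInquiryFacade is hypothetical), the facade could expose a single shared instance like this:

public sealed class TicketInquiryFacade
{
    // Single shared instance of the facade.
    public static TicketInquiryFacade Instance { get; } = new();
    private TicketInquiryFacade() { }

    // Delegates to the facade logic shown earlier.
    public List<Ticket> Inquiry(DateTime date, string from, string to)
        => new TicketInquiry().Inquiry(date, from, to);
}

The client would then call TicketInquiryFacade.Instance.Inquiry(...) instead of creating the facade itself.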
Consequences: Advantages

Due to the fact that the subsystem logic is isolated, it reduces the complexity of the current system. In fact, by using this design pattern, the client deals with fewer objects, making it possible to use subsystems easily.
There is a loose coupling between the client and subsystem.
Consequences: Disadvantages

The facade class can become a God Object over time.

Applicability:

When there is a need to communicate with a specific and complex subsystem.
Related patterns:

Some of the following design patterns are not related to the Facade design pattern, but in order to implement this design pattern, checking the following design patterns will be useful:

Singleton
Mediator
Abstract factory
Flyweight
Proxy
Flyweight
In this section, the flyweight design pattern is introduced and analyzed, according to the structure presented in GoF design patterns section in Chapter 1, Introduction to Design Pattern.

Name:

Flyweight

Classification:

Structural design patterns

Also known as:

---

Intent:

This design pattern tries to make optimal use of resources by sharing objects. Resources can be computing resources, memory or others.

Motivation, Structure, Implementation, and Sample code:

Suppose we are designing a computer game. In one sequence of this game, we need to design a forest and create and display two million trees in it. Suppose there are only pine trees in this forest. A pine tree has characteristics such as color, size (short, medium and tall), name (in this scenario we only have a pine tree) and coordinates.

To implement this scenario, one type of design is to define a class for the tree and then create the required number of objects from it. Suppose the following table is assumed for tree object properties:

Property                          Data Type            Size
Name                              String (20 chars)    40 Bytes
Color                             String (20 chars)    40 Bytes
Size                              String (5 chars)     10 Bytes
Coord_X                           Integer              4 Bytes
Coord_Y                           Integer              4 Bytes
Sum for only 1 tree                                    98 Bytes
Sum for all the 2 million trees                        2,000,000 x 98 = 196,000,000 Bytes

Table 3.1: Detailed required memory

NOTE: For simplicity, Coord_X and Coord_Y are defined as integer. In a real-world example, you might need to use other data types for these attributes

According to the preceding table, to build two million trees, about 196 million bytes of memory will be needed.

But when we look at the preceding properties, we realize that some of these properties are common to all trees. For example, all the trees in the forest have three different sizes and have a fixed color for each size, and only the X and Y coordinates are different for each tree. Now, if there is a way to share this common data between trees, then there will be a significant reduction in memory consumption. For example, we can separate the fixed properties from the variable properties. After receiving the object related to the fixed properties, through the definition of a series of methods, we can set the variable properties. In this case, only one object of fixed properties is always created. So, the amount of memory consumption for fixed properties is as follows: (assume that all trees are made with a fixed size):

Attribute    Data Type            Size
Name         String (20 chars)    40 Bytes
Color        String (20 chars)    40 Bytes
Size         String (5 chars)     10 Bytes
Sum                               90 Bytes

Table 3.2: Required memory for fixed properties

And for variable properties we also have:

Attribute                          Data Type    Size
Coord_X                            Integer      4 Bytes
Coord_Y                            Integer      4 Bytes
Sum for only 1 tree                             8 Bytes
Sum for all the 2 million trees                 2,000,000 x 8 = 16,000,000 Bytes

Table 3.3: Required memory for variable properties

Finally, the total memory required will be around 16,000,090 Bytes. This number means about 180 million bytes of reduction in memory consumption. With the preceding explanations, the following class diagram can be considered:

Figure3.11.png
Figure 3.11: Flyweight design pattern UML diagram

As shown in the Figure 3.11 class diagram, the Tree class is used to create a tree and implements the ITree interface. This class has two sets of properties: fixed properties, such as Name, Color and Size, along with variable properties such as Coord_X and Coord_Y. The TreeFactory class is responsible for providing the tree object along with its fixed properties. The client receives the object from the TreeFactory and is responsible for setting the variable properties through the SetCoord method.

For Figure 3.11 class diagram, the following codes can be imagined:

public interface ITree
{
    void SetCoord(int x, int y);
}

public class Tree : ITree
{
    public string Name { get; private set; }
    public string Color { get; private set; }
    public string Size { get; private set; }
    public int Coord_X { get; private set; }
    public int Coord_Y { get; private set; }

    public Tree(string name, string color, string size)
    {
        Name = name;
        Color = color;
        Size = size;
    }

    public void SetCoord(int x, int y)
    {
        this.Coord_X = x;
        this.Coord_Y = y;
    }
}

public class TreeFactory
{
    readonly Dictionary<string, ITree> _cache = new();

    public ITree this[string name, string color, string size]
    {
        get
        {
            ITree tree;
            string key = $"{name}_{color}_{size}";
            if (_cache.ContainsKey(key))
            {
                tree = _cache[key];
            }
            else
            {
                tree = new Tree(name, color, size);
                _cache.Add(key, tree);
            }
            return tree;
        }
    }
}

As can be seen in the preceding code, Name, Size and Color properties are fixed properties and Coord_X and Coord_Y properties are variable properties. An indexer is defined in the TreeFactory class, the job of this indexer is to provide or create a new object from Tree. In this indexer, if a tree with the given name, size and color has already been created, the same object is returned, otherwise, a new object is created and added to the object repository. When the user needs a Tree object, instead of directly creating the object, it is expected that the user will create the object through the TreeFactory class. To use this structure, you can proceed as follows:

TreeFactory treeFactory = new();
ITree tree = treeFactory["pine", "green", "short"];
tree.SetCoord(1, 1);
ITree tree2 = treeFactory["pine", "green", "short"];
tree2.SetCoord(2, 2);
ITree tree3 = treeFactory["pine", "green", "short"];
tree3.SetCoord(3, 3);

As shown in the preceding code, in the second line, a request for a tree object named pine, in green color and short size, is registered. Since this object does not exist yet, TreeFactory creates a new object. In the third line, the constructed tree is placed at coordinates x=1 and y=1. In the fourth line, a tree with the same properties is requested again; the TreeFactory class, instead of creating a new object, returns the same object as before, and this tree is then placed at coordinates x=2 and y=2.

Participants:

Flyweight: In the preceding scenario, it is the same as ITree, which is responsible for defining the template through which the extrinsic (variable) state is applied.
ConcreteFlyweight: In the preceding scenario, it is the Tree class (the Name, Size and Color properties), and it implements the interface provided by Flyweight. This class is responsible for maintaining the intrinsic (fixed) properties.
UnsharedConcreteFlyweight: In the preceding scenario, it is the Tree class (the Coord_X and Coord_Y properties) and is responsible for implementing the interface provided by Flyweight. In another design of this structure, you can separate the UnsharedConcreteFlyweight and ConcreteFlyweight classes and put a reference to ConcreteFlyweight in UnsharedConcreteFlyweight. This class is also called Context.
FlyweightFactory: In the preceding scenario, it is the same as TreeFactory and is in charge of maintaining, creating and presenting flyweight objects.
Client: It is the same user and adjusts variable properties through flyweight received from FlyweightFactory.
Notes:

There are two categories of properties related to this design pattern: intrinsic properties and extrinsic properties. Intrinsic properties, which we referred to as fixed properties, are shareable properties and are stored in ConcreteFlyweight, while extrinsic properties, which we refer to as variable properties, are non-shareable properties and are stored in UnsharedConcreteFlyweight. These properties can also be calculated on the client side and sent to ConcreteFlyweight.
Client should not directly create object from ConcreteFlyweight and should only obtain this object through FlyweightFactory. In this case, the object sharing mechanism is always observed and applied.
Although the preceding structure implements the flyweight design pattern, it still has some shortcomings. The first problem is that the user has access to the Tree class and can create objects directly from it. To solve this problem, you can define the Tree class as a private class inside the TreeFactory class. Refer to the following code:
public class TreeFactory
{
    private class Tree : ITree
    {
        public string Name { get; private set; }

Now, with this change, the user will not have direct access to create objects from the Tree class.

The next problem with the preceding structure is that the objects provided by TreeFactory for the same key are always the same instance. This means that changing an attribute on one object is reflected in the other objects as well.
TreeFactory treeFactory = new();
ITree tree = treeFactory["pine", "green", "short"];
tree.SetCoord(1, 1);
ITree tree2 = treeFactory["pine", "green", "short"];
tree2.SetCoord(2, 2);
ITree tree3 = treeFactory["pine", "green", "short"];
tree3.SetCoord(3, 3);

In the preceding code, in the third line, the SetCoord method sets Coord_X and Coord_Y to 1. But in the fifth line, these values are changed to 2, and because tree, tree2 and tree3 all reference the same shared object, they all see the same coordinates (after the last line, all of them see the value 3). To solve this problem, it is necessary to separate the storage of the fixed properties from the variable properties. Pay attention to the following code:

public class TreeType
{
    public string Name { get; private set; }
    public string Color { get; private set; }
    public string Size { get; private set; }

    public TreeType(string name, string color, string size)
    {
        Name = name;
        Color = color;
        Size = size;
    }

    public void Draw(ITree tree)
    {
        var obj = (Tree)tree;
        Console.WriteLine(
            $"TreeType:{GetHashCode()},{Name},Tree:{obj.GetHashCode()}({obj.Coord_X},{obj.Coord_Y})");
    }
}

In the preceding code, the fixed properties are completely separated from the variable properties: the fixed properties are moved into the TreeType class, and the variable properties are placed in the Tree class, which now only stores its coordinates and a reference to its shared TreeType. The TreeFactory class creates or returns the shared TreeType object through its indexer. The revised Tree and TreeFactory are sketched below.
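The revised Tree and TreeFactory classes are not shown in the original listing; the following is a minimal sketch of how they might look, assuming the factory now caches and returns shared TreeType instances while each Tree stores only its own coordinates:

public class Tree : ITree
{
    private readonly TreeType _type; // shared, intrinsic state

    public int Coord_X { get; private set; }
    public int Coord_Y { get; private set; }

    public Tree(TreeType type, int x, int y)
    {
        _type = type;
        SetCoord(x, y);
    }

    public void SetCoord(int x, int y)
    {
        Coord_X = x;
        Coord_Y = y;
    }

    // Delegates drawing to the shared TreeType, passing itself as the extrinsic context.
    public void Draw(ITree tree) => _type.Draw(tree);
}

public class TreeFactory
{
    readonly Dictionary<string, TreeType> _cache = new();

    public TreeType this[string name, string color, string size]
    {
        get
        {
            string key = $"{name}_{color}_{size}";
            if (!_cache.ContainsKey(key))
                _cache.Add(key, new TreeType(name, color, size));
            return _cache[key];
        }
    }
}

To use this structure, you can do the following: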

List<Tree> trees = new();
TreeType type = new TreeFactory()["pine", "green", "short"];
Tree tree1 = new(type, 1, 1);
trees.Add(tree1);
Tree tree2 = new(type, 2, 2);
trees.Add(tree2);
Tree tree3 = new(type, 3, 3);
trees.Add(tree3);

foreach (var item in trees)
{
    item.Draw(item);
}

In the second line, a shared TreeType object with the given properties is requested from TreeFactory. This object is then passed to the Tree class in lines 3, 5 and 7 to create the desired trees at the desired coordinates. The type variable, which is of type TreeType, is shared between all three objects, tree1, tree2, and tree3. The output of the preceding code will be as follows:

TreeType:58225482,pine,Tree:54267293(1,1)

TreeType:58225482,pine,Tree:18643596(2,2)

TreeType:58225482,pine,Tree:33574638(3,3)

As it is clear in the output, only one object of TreeType type is created and it is shared between three objects tree1, tree2 and tree3. This sharing will greatly reduce the consumption of resources such as memory.

This design pattern is very similar to singleton design pattern. However, there are some differences between these two design patterns. Including:
In Singleton, there is only one object of the class, but in Flyweight, there is one shared object per distinct combination of intrinsic (fixed) properties.
The object created through singleton is mutable while the object created by flyweight is immutable.
As with the singleton design pattern, concurrency should be considered in this pattern, since the factory's cache may be accessed from several threads (a brief sketch follows).
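As a minimal illustration of that concern (an assumption, not code from the original text; the class name ConcurrentTreeFactory is hypothetical), the factory's cache can be made thread-safe with a ConcurrentDictionary:

public class ConcurrentTreeFactory
{
    // ConcurrentDictionary ensures only one TreeType instance is stored per key,
    // even when several threads request the same key at the same time.
    readonly ConcurrentDictionary<string, TreeType> _cache = new();

    public TreeType this[string name, string color, string size]
        => _cache.GetOrAdd($"{name}_{color}_{size}",
            _ => new TreeType(name, color, size));
}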
Consequences: Advantages

Memory consumption is significantly reduced.
Consequences: Disadvantages

Code complexity increases.
Applicability:

When there is a need to create a large number of objects and memory is not available for this amount of created objects.
Related patterns:

Some of the following design patterns are not related to Flyweight design pattern, but in order to implement this design pattern, checking the following design patterns will be useful:

Singleton
State
Strategy
Interpreter
Composite
Facade
Proxy
In this section, the proxy design pattern is introduced and analyzed, according to the structure presented in GoF design patterns section in Chapter 1, Introduction to Design Pattern.

Name:

Proxy

Classification:

Structural design patterns

Also known as:

Surrogate

Intent:

This design pattern tries to control the use of an object by providing a substitute for it.

Motivation, Structure, Implementation, and Sample code:

Suppose we are designing an infrastructure to communicate with a GIS service. For the sake of simplicity, let us assume that the requirement is such that we need to provide the name of a geographic region (country, city, street, and so on) and receive the latitude and longitude of that region.

In order to implement this mechanism, one option is to connect the user directly to the GIS service. But by doing this, we notice the slow response, because for every request sent to the GIS service, this service tries to find the latitude and longitude from the beginning.

Another way to implement this requirement is to send the requests to an intermediary instead of sending them directly from the user to the GIS service, and then have that intermediary forward the request to the GIS service. This intermediary is called a proxy. By doing this, we gain several advantages: the proxy can cache the requests and responses for popular queries, so repeated requests no longer need to be submitted to the GIS service at all. Also, in the presence of the proxy, the request and response logs can be recorded, and requests can be controlled.

According to the preceding explanations, the following class diagram can be imagined:

Figure3.12.png
Figure 3.12: Proxy design pattern UML diagram

As shown in the Figure 3.12 class diagram, the GISService class has a method called GetLatLng, which takes the name of the geographic region and returns the longitude and latitude of that region. We need to connect the user through a proxy instead of directly connecting to the GISService. For this, in order for the user to avoid being involved in changes and design details, it is necessary to design the GISServiceProxy class in such a way that its methods have a proper mapping with GISService. For this, the IGISService interface is defined and both GISService and GISServiceProxy classes have implemented this interface. In this case, the user can easily connect to the proxy, and perform the necessary control and monitoring tasks in the proxy, and finally, the request is sent to GISService if needed, and the response is received by the proxy and delivered to the user. According to these explanations, the following codes can be considered:

public interface IGISService
{
    string GetLatLng(string name);
}

public class GISService : IGISService
{
    static readonly Dictionary<string, string> map = new()
    {
        { "Tehran", "35.44°N,51.30°E" },
        { "Urmia", "37.54°N,45.07°E" },
        { "Khorramabad", "33.46°N,48.33°E" },
        { "Shahrekord", "32.32°N,50.87°E" },
        { "Zahedan", "29.45°N,60.88°E" },
        { "Ilam", "33.63°N,46.41°E" },
        { "Yasuj", "30.66°N,51.58°E" },
        { "Ahvaz", "31.31°N,48.67°E" },
        { "Rasht", "37.26°N,49.58°E" },
        { "Sari", "36.56°N,53.05°E" },
        { "Sanandaj", "35.32°N,46.98°E" },
        { "Ardabil", "37.54°N,45.07°E" }
    };

    public string GetLatLng(string name)
    {
        Thread.Sleep(5000); // simulates a slow lookup
        return map.FirstOrDefault(x => x.Key == name).Value;
    }
}

public class GISServiceProxy : IGISService
{
    static readonly Dictionary<string, string> mapCache = new();
    static readonly GISService _gisService = new();

    public string GetLatLng(string name)
    {
        var requestOn = DateTime.Now.TimeOfDay;
        if (!mapCache.ContainsKey(name))
        {
            string latlng = _gisService.GetLatLng(name);
            if (!string.IsNullOrWhiteSpace(latlng))
                mapCache.TryAdd(name, latlng);
            else
                throw new Exception("Given Geo not found!");
        }
        var responseOn = DateTime.Now.TimeOfDay;
        return $"Geo:{name},Sent:{requestOn},Received:{responseOn},Response:{mapCache[name]}";
    }
}

In the preceding code, for the purpose of simulation, within the GetLatLng method in the GISService class, a five-second delay is considered for each incoming request. However, if we pay attention to the implementation of this method in the GISServiceProxy class, we will notice that the responses received from the GISService are somehow cached, and subsequent requests for a fixed geographic region are answered without referring to the GISService, so we face an increase in the response speed. In order to execute the preceding code, you can do as follows:

IGISService gisService = new GISServiceProxy();
Console.WriteLine(gisService.GetLatLng("Urmia"));
Console.WriteLine(gisService.GetLatLng("Tehran"));
Console.WriteLine(gisService.GetLatLng("Urmia"));

In the preceding code, when the Urmia query is sent for the first time, the response to this query is received with a delay of five seconds. This is while Urmia's re-query is no longer delayed and the direct response from the cache is returned by the proxy. Therefore, for the preceding code, we will have the following output:

Geo:Urmia,Sent:16:56:47.5390793,Received:16:56:52.5762895,
Response:37.54°N,45.07°E

Geo:Tehran,Sent:16:56:52.5932198,Received:16:56:57.6020481,
Response:35.44°N,51.30°E

Geo:Urmia,Sent:16:56:57.6052643,Received:16:56:57.6053073,
Response:37.54°N,45.07°E

In the preceding output, it can also be seen that, the Urmia query was sent at 16:56:47 and its response was received at 16:56:52, that is, about five seconds later, while the second query of the same city, without a five second delay, sent at 16:56:57 and received a response within a few milliseconds.

Participants:

Subject: In the preceding scenario, this is IGISService, which is responsible for defining the format based on RealSubject. Using this shared format, wherever RealSubject is needed, it can be replaced with the proxy.
Proxy: In the preceding scenario, this is GISServiceProxy, which is in charge of communicating with RealSubject based on the format provided by Subject. This communication can be done for various purposes such as access control, logging, and so on. The proxy's methods are defined like the RealSubject methods, so that the client does not notice the difference.
RealSubject: In the preceding scenario, this is GISService, which is responsible for defining the real object based on the format provided by Subject. The proxy object is meant to stand in for this object.
Client: It is the user who executes the code through the proxy.
Notes:

Different types of proxy can be defined, including:
Remote proxy: In this type of proxy, the subject and the proxy are located in two different places; in fact, we define a local proxy for a subject that lives somewhere else. This type of proxy is very similar to the concept of consuming a web service. For example, suppose the client intends to use two web services provided at an external address. In this case, the task of communicating with the external service and the related settings can be placed in a remote proxy, and the client communicates with the proxy instead of repeating the web service communication code everywhere. Therefore, the following code can be considered for it:
// IRemoteService is the Subject; it is assumed to declare the two methods implemented below.
public interface IRemoteService
{
    Task<string> GetAsync(int id);
    Task<string> GetAllAsync();
}

public class RemoteProxy : IRemoteService
{
    readonly HttpClient _client;

    public RemoteProxy() => _client = new HttpClient
    {
        BaseAddress = new Uri("https://jinget.com/api")
    };

    public async Task<string> GetAsync(int id)
        => await (await _client.GetAsync($"/users/{id}")).Content.ReadAsStringAsync();

    public async Task<string> GetAllAsync()
        => await (await _client.GetAsync("/users")).Content.ReadAsStringAsync();
}

In the preceding code, the Subject is located at the remote address https://jinget.com.

Virtual proxy: This type of proxy can be used when the process of creating an object from a subject is time-consuming or expensive. In this case, the object is created from the subject only when it is really needed. This type of proxy is also called Lazy initialization proxy.
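A minimal sketch of a virtual proxy for the preceding scenario could look like the following (the LazyGISServiceProxy class is illustrative and not part of the book's code):

public class LazyGISServiceProxy : IGISService
{
    // The expensive GISService object is created only on the first real call.
    private readonly Lazy<GISService> _service = new(() => new GISService());

    public string GetLatLng(string name) => _service.Value.GetLatLng(name);
}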
Protection proxy: This type of proxy can be used when we intend to control access to the requested resource. For example, by using this type of proxy, it is possible to check whether the client has the right to access this resource before connecting to the subject.
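A minimal sketch of a protection proxy for the same scenario might look like the following (the role check is a hypothetical example, not from the book):

public class ProtectedGISServiceProxy : IGISService
{
    private readonly IGISService _realSubject = new GISService();
    private readonly string _role;

    public ProtectedGISServiceProxy(string role) => _role = role;

    public string GetLatLng(string name)
    {
        // Check access before the request reaches the real subject.
        if (_role != "Admin")
            throw new UnauthorizedAccessException("Access to GIS data is denied.");
        return _realSubject.GetLatLng(name);
    }
}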
Smart proxy: This type of proxy is usually used when we need to do some specific tasks before accessing the subject object. For example, we may need to prevent the creation of multiple objects from subject through singleton implementation.
Logging proxy: There are times when we need to log all references to subject.
Caching proxy: There are times when we need to save the results of references to subject. In the scenario mentioned preceding, this type of proxy has been used.
When using this design pattern, not all requests are necessarily forwarded to the subject.
By using the copy-on-write technique, you can prevent the creation of unnecessary objects. With this technique, a new object is created only when there is a need to change the object; otherwise, the existing object is shared by everyone.
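A minimal, generic sketch of the copy-on-write idea (illustrative only, not tied to the GIS scenario): the cheap copy shares the inner data, and a private copy is made only when a change is requested.

public class CowDocument
{
    private List<string> _lines;
    private bool _ownsCopy; // becomes true once this instance holds its own copy

    public CowDocument(List<string> lines) => _lines = lines;

    // Cheap "copy": the new instance shares the same inner list.
    public CowDocument Clone() => new CowDocument(_lines);

    public IReadOnlyList<string> Lines => _lines;

    public void AddLine(string line)
    {
        if (!_ownsCopy)
        {
            _lines = new List<string>(_lines); // copy only on the first write
            _ownsCopy = true;
        }
        _lines.Add(line);
    }
}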
In this design pattern, the proxy does not always need to know the concrete type of the subject. When the proxy has to create the subject object itself, it needs to know its type; otherwise, it does not.
Consequences: Advantages

The object related to RealSubject can be controlled without the client noticing. This control can even include the control of the life span of the object.
A new proxy can be created without touching or changing the RealSubject code; therefore, the Open/Closed Principle (OCP) has been observed.
By using a remote proxy, the fact that the subject is located in another place is hidden from the client's view.
By using a virtual proxy, you can create the subject object if needed or reuse the existing objects. This can improve performance.
Consequences: Disadvantages

Due to the definition of many classes and methods, it can lead to an increase in code complexity.
This pattern can add a delay to receiving the final response.
The client may use RealSubject in some places and the proxy in others, which causes confusion in the structure and the code.
Applicability:

When it is necessary to perform a series of tasks such as access control, caching, logging, and so on before connecting to the subject, this design pattern can be used.
When we need to use a series of services located at an external address (such as a web service, a connection to a Windows service, and so on), we can use this design pattern.

Related patterns:

Some of the following design patterns are not related to Proxy design pattern, but in order to implement this design pattern, checking the following design patterns will be useful:

Adapter
Decorator
Facade
Conclusion
In this chapter, you learned about structural design patterns and learned how to manage the connections between classes and objects for different scenarios. In the next chapter, you will learn about behavioral design patterns and learn how to deal with behavior of objects and classes.

1 The Adapter design pattern is also known as Wrapper.


NET 7 Design Patterns In-Depth 2. Creational Design Patterns

Chapter 2 Creational Design Patterns

Introduction

As the name suggests, creational design patterns deal with object construction and how to create instances. In the C# .NET programming language, the new keyword is used to create an object, and to create an object it is necessary to provide the name of the class. In other words, wherever an object is needed, it can be created using the new keyword and the class name. However, there are situations where it is necessary to hide the way the object is made from the user's view. In such cases, creational design patterns can be useful.

Structure

In this chapter, we will cover the following topics:

  • Creational design pattern
  • Factory method
  • Abstract factory
  • Builder
  • Prototype
  • Singleton

Objectives

By the end of this chapter, you will be familiar with creational design patterns and be able to understand their differences. It is also expected that by using the tips provided in this chapter, you can use each creational design pattern in its proper place.

Creational design pattern

Using creational design patterns, you can determine which object is created by whom, how, and when. In the category of creational design patterns in GoF, five design patterns have been introduced, which are:

  • Factory method: Tries to select a class from several existing classes and create an object.
  • Abstract factory: Tries to create objects from classes of the same family.
  • Builder: Tries to separate the object's construction from its presentation.
  • Prototype: Tries to make a copy of an existing object.
  • Singleton: Tries to have only one object of the class.

These five design patterns can be used interchangeably and can complement each other in some situations. For example, the prototype and abstract factory design patterns may be useful together in some situations, or the singleton pattern may be used in the prototype implementation to make it more complete.

Usually, creational design patterns can be used under the following conditions:

  • When the system needs to be independent of how its objects are created
  • When a set of objects is to be used together
  • When there is a need to hide the class implementation from the user's view
  • When there is a need for different presentations of a complex object
  • When the type of object to instantiate must be specified at runtime
  • When only one object is required to be provided to the user

Apart from the five GoF patterns, other patterns are related to the creation of objects, such as:

  • Dependency injection pattern: Instead of creating an object itself, the class receives the required object through an injector.
  • Object pooling pattern: Prevents objects from being repeatedly destroyed and recreated by reusing existing objects from a pool.
  • Lazy initialization pattern: Delays the creation of the object until it is used.

Factory method

This section introduces and analyzes the factory method design pattern according to the structure presented in the GoF design patterns section in Chapter 1, Introduction to Design Patterns.

Name:
Factory method

Classification:
Creational design patterns

Also known as:
Virtual constructor

Intent:
This design pattern tries to delegate the creation of objects to child classes in a parent-child structure.

Motivation, Structure, Implementation, and Sample code:

Suppose we are building a residential unit and must make a "door". To make a "door", we must refer to a "door manufacturer", but so far, we have no idea what type of door we need (wooden, aluminum, iron, and so on). Secondly, we do not know which manufacturer we should refer to (Wooden or iron door manufacturer, and so on). So, we are facing two abstractions here: "door" and "door manufacturer":

Figure 2.1: Door-Door manufacturer relation

According to Figure 2.1, the following code could be considered:

public abstract class Door
{
    public abstract void Design();
    public abstract void Build();
    public abstract void Coloring();
}

public abstract class DoorManufacturer
{
    public abstract Door CreateDoor();
}

According to the preceding paragraph, we finally decided to choose a "wooden door" for our unit, so we must go to a "carpenter" to make a "wooden door". It is clear that "wooden door" is a type of "door" and "carpenter" is a type of "door maker". So, we have two classes WoodenDoor for "wooden door" and Carpenter for "carpenter".

On the other hand, we know that Design, Build, and Coloring behaviors can be defined for doors. We still do not know exactly how these behaviors happen for different types of doors (wooden, iron, and so on). To solve this problem in the WoodenDoor class, we need to implement these behaviors to determine how each behavior happens for a wooden door. A similar mechanism should be implemented for other types of doors, such as iron doors.

We also know that a door manufacturer must be able to make a door. To make a door, the manufacturer must design, build, and color it, but we do not yet know which specific door manufacturer we are dealing with. To solve this problem, we add a method called GetDoor to the DoorManufacturer class and specify that this method is responsible for creating a "door" type object. Every door manufacturer (such as the wooden door manufacturer) then implements this method according to its own requirements and processes. Therefore, through this implementation in each subclass of DoorManufacturer, we find out which manufacturer intends to make the door:

public abstract class Door
{
    public abstract void Design();
    public abstract void Build();
    public abstract void Coloring();
}

public class WoodenDoor : Door
{
    public override void Design()
    {
        Console.WriteLine("Wooden door design done");
    }

    public override void Build()
    {
        Console.WriteLine("Wooden door build done");
    }

    public override void Coloring()
    {
        Console.WriteLine("Wooden door coloring done");
    }
}

public abstract class DoorManufacturer
{
    public void CreateDoor()
    {
        Door door = this.GetDoor();
        door.Design();
        door.Build();
        door.Coloring();
        Console.WriteLine("Your door is ready!");
    }

    public abstract Door GetDoor();
}

public class Carpenter : DoorManufacturer
{
    public override Door GetDoor()
    {
        return new WoodenDoor();
    }
}

Finally, we go to the carpenter to make the door and give the desired dimensions and features and ask them to make the door for us:

DoorManufacturer manufacturer = new Carpenter();
manufacturer.CreateDoor();

In the preceding design, if we need to add an iron door manufacturer, it is enough to define a class for the iron door (let us call it IronDoor) that inherits from the Door class, and a class for the iron door manufacturer (let us call it IronDoorMaker) that inherits from DoorManufacturer. In this way, new doors and manufacturers can be defined without touching the previous code (a sketch of these two classes appears after Figure 2.2). Finally, we will have the following class diagram for this design pattern:

Figure 2.2: Factory Method UML diagram
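For example, a minimal sketch of this extension could look like the following (the class names IronDoor and IronDoorMaker come from the text; the message strings are illustrative):

public class IronDoor : Door
{
    public override void Design() => Console.WriteLine("Iron door design done");
    public override void Build() => Console.WriteLine("Iron door build done");
    public override void Coloring() => Console.WriteLine("Iron door coloring done");
}

public class IronDoorMaker : DoorManufacturer
{
    public override Door GetDoor() => new IronDoor();
}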

Participants:

  • Product: Based on the preceding section, the Door is responsible for defining a template for the objects made by the factory method.

  • Concrete product: In the preceding scenario, WoodenDoor and IronDoor are responsible for implementing the template defined by the Product.

  • Creator: This is also called the factory. In the preceding scenario, the DoorManufacturer is responsible for defining the factory method, which creates and returns a Product-type object. In the preceding scenario, the GetDoor method is the factory method. It is not necessary to define this method as an abstract method; the Creator class can also provide a default implementation of it.

  • Concrete creator: In the preceding scenario, the Carpenter and IronDoorMaker are responsible for overriding the factory method to make the appropriate object and provide it to the Creator.

Notes:

  • In the given implementation and design (Figure 2.2), we can use an interface instead of an abstract class for the Product.

  • To reuse the created objects, you can use the object pool.

  • This design pattern tries to manage the way objects are made, which is why the factory method design pattern belongs to the category of creational design patterns.

  • You can also use the Parametrized Factory Method to implement the factory method. In this implementation, we pass the type of object we want to create to the factory method through a parameter and create the desired object based on it. Concrete creators can then extend this behavior by overriding the method. To implement this approach, the GetDoor method in the preceding scenario is changed as follows:

    public virtual Door GetDoor(string type)
    {
        switch (type)
        {
            case "Wooden":
                return new WoodenDoor();
            case "Iron":
                return new IronDoor();
        }
        return null;
    }

In this case, if we want to add an aluminum door manufacturer, we can cover the new requirement with the following override:

public class AluminumDoorMaker : DoorManufacturer
{
    public override Door GetDoor(string type)
    {
        if (type == "Aluminum")
            return new AluminumDoor();
        else
            return base.GetDoor(type);
    }
}

The C# language supports generics from version 2 onwards. Thanks to this feature, instead of creating many subclasses, you can create a product simply by providing a generic type parameter. Using this feature, you can rewrite the concrete creator section as follows:

public class StandardDoorManufacturer<T> : DoorManufacturer
    where T : Door, new()
{
    public override Door GetDoor()
    {
        return new T();
    }
}

Pay attention to the where section. With its help, we can constrain the generic type so that only types that derive from Door (where T : Door) and have a default constructor (where T : new()) can be supplied.

Finally, with the applied changes, it can be used as follows:

DoorManufacturer obj = new StandardDoorManufacturer<WoodenDoor>();

obj.CreateDoor();

It is better to use a suitable naming convention when applying design patterns so that the pattern can be recognized from the names used. In the preceding scenario, it would have been better to use a title such as DoorFactory, or something similar, instead of DoorManufacturer.

You can start the design with the factory method, then mature the design with the Abstract factory, prototype, or builder.

Consequences: Advantages

  • The loose coupling between the creator and the product increases the maintainability and extensibility of the code.

  • The Single Responsibility Principle (SRP) has been observed. By moving object creation into the factory method, each piece of code is responsible for doing only one thing.

  • The Open/Closed Principle (OCP) has been observed. As mentioned in the preceding scenario, new products and creators can be added to the program without manipulating the existing code.

Consequences: Disadvantages

  • If generics are not used, each concrete product and creator must be defined as a separate subclass. This structure can become complicated to maintain as the number of classes grows.

Applicability:

  • When a class does not know the exact type of objects it should create and needs to delegate this task to its child classes.

  • When we are developing a software development framework and want to provide the ability to add new features to it.

  • When we need to control the number of created objects and reuse created objects as much as possible.

Related patterns:

Some of the following design patterns are not related to the Factory Method design pattern, but to implement this design pattern, checking the following design patterns will be useful:

  • Object pool
  • Abstract factory
  • Template method
  • Singleton
  • Iterator

Abstract factory

The abstract factory design pattern is introduced and analyzed in this section, according to the structure presented in the GoF design patterns section in Chapter 1, Introduction to Design Patterns.

Name:
Abstract factory
Classification:
Creational design patterns
Also known as:
Kit
Intent:
This design pattern tries to create a group of related objects through an interface without referring to their concrete classes.

Motivation, Structure, Implementation, and Sample code:
We have several car factories in the country that produce cars. The cars produced by these factories are divided into two categories: gasoline and diesel cars. So far, we are facing two types of abstraction in this scenario: the first is related to the car manufacturing plant, and the second is related to the type of car. So far, we are facing the following class diagram:

Figure 2.3: Car-Car factory initial relation

According to Figure 2.3, the following code could be considered:

public abstract class CarFactory
{
    public abstract PetrolCar CreatePetrolCar();
    public abstract DieselCar CreateDieselCar();
}

public abstract class PetrolCar
{
    public abstract void AssembleSeats();
}

public abstract class DieselCar
{
    public abstract void AssembleDieselEngine();
}

According to the class diagram (Figure 2.3) and the preceding code, we must now introduce classes that use these abstractions to complete the design. For example, the CreatePetrolCar method is declared in the CarFactory abstract class, but it is unclear how this method is implemented. Logically, this makes sense, because we still do not know which car manufacturer we are talking about. Once it is clear which car manufacturer we are talking about, we can determine how CreatePetrolCar should be implemented. According to the presented scenario, we assume that we have two automobile factories in the country named IranKhodro and Saipa. These factories should be able to produce their own gasoline and diesel products. With this explanation, the preceding design changes as follows:

Figure 2.4: IranKhodro-Saipa factories relations to CarFactory

According to Figure 2.4, the following code could be considered:

public abstract class CarFactory {
    public abstract PetrolCar CreatePetrolCar();
    public abstract DieselCar CreateDieselCar();
}

public class IranKhodro : CarFactory
{
    public override DieselCar CreateDieselCar() => throw new NotImplementedException();
    public override PetrolCar CreatePetrolCar() => throw new NotImplementedException();
}

public class Saipa : CarFactory
{
    public override DieselCar CreateDieselCar() => throw new NotImplementedException();
    public override PetrolCar CreatePetrolCar() => throw new NotImplementedException();
}

public abstract class PetrolCar
{
    public abstract void AssembleSeats();
}

public abstract class DieselCar
{
    public abstract void AssembleDieselEngine();
}

So far, it has been found that car factories are making gasoline and diesel cars. We accepted that Iran Khodro and Saipa are car factories, and they produce gasoline and diesel cars. Now the question remains unanswered here: which diesel car does the Iran Khodro or Saipa factory produce? To answer this question, we must first answer which car is gasoline and which is diesel. We can determine which car manufacturer produces gasoline and diesel cars when we answer this question. So, with these conditions, the class diagram changes as the following:

Figure 2.5: Factories and cars relations

According to Figure 2.5, the following code could be considered:

public abstract class CarFactory
{
    public abstract PetrolCar CreatePetrolCar();
    public abstract DieselCar CreateDieselCar();
}

public class IranKhodro : CarFactory
{
    public override DieselCar CreateDieselCar() => new Arena();
    public override PetrolCar CreatePetrolCar() => new Peugeot206();
}

public class Saipa : CarFactory
{
    public override DieselCar CreateDieselCar() => new Foton();
    public override PetrolCar CreatePetrolCar() => new Pride();
}

public abstract class PetrolCar
{
    public abstract void AssembleSeats();
}

public class Pride : PetrolCar
{
    public override void AssembleSeats()=>
    Console.WriteLine("Pride seats assembled");
}

public class Peugeot206 : PetrolCar
{
    public override void AssembleSeats() =>
    Console.WriteLine("Peugeot206 seats assembled");
}

public abstract class DieselCar
{
    public abstract void AssembleDieselEngine();
}

public class Foton : DieselCar
{
    public override void AssembleDieselEngine()=>
    Console.WriteLine("Foton engine assembled");
}

public class Arena : DieselCar
{
    public override void AssembleDieselEngine()=>
    Console.WriteLine("Arena engine assembled");
}

Now, it is clear which car manufacturers produce which cars and how each of these is produced. The design is almost finished, and we only need a user to build the desired car using our defined abstractions. So, with these conditions, the class diagram changes as follows:

Figure 2.6: Abstract Factory UML diagram

According to Figure 2.6, the following code could be considered:

public class Client
{
    private readonly CarFactory factory;
    public Client(CarFactory factory) => this.factory = factory;

    public void CreatePetrolCar()
    {
        var petrol = factory.CreatePetrolCar();
        petrol.AssembleSeats();
    }
    public void CreateDieselCar()
    {
        var diesel = factory.CreateDieselCar();
        diesel.AssembleDieselEngine();
    }
}

As it is clear in the preceding code, the Client receives the car factory in the constructor and creates the cars using CreatePetrolCar and CreateDieselCar methods.

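For example, the Client can be used as follows (a minimal usage sketch, not shown in the book): the concrete factory is chosen once, and the Client works only with the CarFactory abstraction.

Client client = new Client(new IranKhodro());
client.CreatePetrolCar();   // prints: Peugeot206 seats assembled
client.CreateDieselCar();   // prints: Arena engine assembled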

Participants:

  • AbstractFactory: In the preceding scenario, the CarFactory has the task of defining a template for the operations whose purpose is to create an AbstractProduct.

  • ConcreteFactory: In the preceding scenario, IranKhodro and Saipa have the task of implementing the operations that create a ConcreteProduct.

  • AbstractProduct: In the preceding scenario, PetrolCar and DieselCar are responsible for defining the template for the various kinds of products.

  • ConcreteProduct: In the preceding scenario, Pride, Peugeot206, Arena, and Foton are responsible for defining the products.

  • Client: In the preceding scenario, this is the Client class, which is responsible for using the abstractions.

Notes:

  • In the preceding design, we can also use interfaces instead of the abstract classes for CarFactory, PetrolCar, and DieselCar.

  • Usually, the classes in this design pattern can be implemented with the factory method or prototype design pattern.

  • If the only purpose is to hide the instantiation mechanism, then an abstract factory can be considered an alternative to a Facade.

  • The difference between the builder design pattern and the abstract factory is that the builder design pattern is used to create complex objects, and additional tasks can be performed along with the construction process, while in the abstract factory design pattern the created object is returned immediately.

  • Abstract factory design patterns can also be implemented as a Singleton.

  • All products of the same family are implemented and integrated into a concrete factory related to that family.

  • In C#, Generics can be used to design the Factory and the Product when implementing this design pattern (a minimal sketch follows these notes).
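As a minimal sketch of the generics note above (the GenericCarFactory name is illustrative and not part of the book's code):

public class GenericCarFactory<TPetrol, TDiesel> : CarFactory
    where TPetrol : PetrolCar, new()
    where TDiesel : DieselCar, new()
{
    public override PetrolCar CreatePetrolCar() => new TPetrol();
    public override DieselCar CreateDieselCar() => new TDiesel();
}

// Usage: behaves like the IranKhodro concrete factory.
CarFactory factory = new GenericCarFactory<Peugeot206, Arena>();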

Consequences: Advantages

  • There is always a guarantee that the products received from the factory are of the same family.

  • There is a loose coupling between the Client and the concrete products.

  • The principle of SRP has been observed.

  • The principle of OCP has been observed. New products can be defined without changing the existing code.

Consequences: Disadvantages

  • This structure can become complicated to maintain over time and with the increase in the number of classes.

  • With the addition of a new product, the abstract factory must be changed, which leads to the change of all concrete factories.

Applicability:

  • When dealing with a family of products and we do not want to make the code dependent on concrete products, we can use this pattern.

  • When faced with a class that consists of a collection of factory methods, the abstract factory pattern can be useful.

Related patterns:
Some of the following design patterns are not related to the abstract factory design pattern, but to implement this design pattern, checking the following design patterns will be useful:

  • Prototype
  • Factory method
  • Singleton
  • Facade
  • Builder
  • Bridge

Builder

The builder design pattern is introduced and analyzed in this section, according to the structure presented in the GoF design patterns section in Chapter 1, Introduction to Design Pattern.

Name:
Builder
Classification:
Creational design patterns
Also known as:
-
Intent:
This design pattern tries to create complex objects according to the requirement.
Motivation, Structure, Implementation, and Sample code:
Suppose we are building a residential unit. Residential units can have different construction processes depending on the use case: a residential unit can be a villa or an apartment, and it can be built from concrete, iron, or prefabricated parts. There are different ways to model the residential unit. For example, we can define a parent class for a residential unit and connect the different types of residential units to each other using parent-child relationships. Although this method solves the design problem, it ultimately leaves us with many parent-child relationships that threaten the readability and maintainability of the code.

We can also use a single class to define the residential unit and pass all the parameters needed to describe it to the class through the constructor, in the form of optional parameters. Although this method solves the design problem too, it leads to unwieldy constructors with long parameter lists, most of which are not given a value in a typical call.

For example, we come across code like the following:

House house1 = new House(p1,p2,null,null,null,null,null,p3,null,null);

House house2 = new House(null,null,null,null,null,p4,null,null,null,null);

We can also define several constructors in the residential unit class, one for each state. Although this method solves the problem, it causes the appearance of telescoping constructors and makes the class difficult to maintain:

public class House
{
    public House(string p1) { }
    public House(string p1, string p2) { }
    public House(string p1, string p2, string p3) { }
    public House(string p1, string p2, string p3, string p4) { }
    // …
}

The builder design pattern tries to solve these problems. In the preceding example, this design pattern helps to define the stages and steps of building a residential unit so that it is possible to build a residential unit using these stages and steps. For example, separate the wall construction process from the door construction process and present each one separately.

To make this clearer, let us consider another scenario:

A requirement is raised to provide the ability to build a mobile phone. Based on this requirement, we notice that mobile phones come in different models; for example, consider Samsung and Apple mobile phones. Regardless of the manufacturer, every mobile phone needs a screen, a body, a camera, and so on, and an appropriate operating system must be installed on it. It is also clear that Samsung and Apple manufacture each of these parts in completely different ways.

With these explanations, we are faced with two different elements in the design:

  • The first element: The product we want to produce

  • The second element: The method of making the product

For the first element, we can define two classes for Samsung and Apple, and for the second element, we can use an interface that formats the steps of making a mobile phone. With these explanations, we can consider the following class diagram:

Figure 2.7: Cellphone construction initial relations

According to Figure 2.7, the following code could be considered:

public class CellPhone
{
    public string CameraResolution { get; set; }
    public string MonitorSize { get; set; }
    public string BodyMaterial { get; set; }
    public string OSName { get; set; }
}

public interface ICellPhoneBuilder
{
    public CellPhone Phone { get; }
    void BuildMonitor();
    void BuildBody();
    void BuildCamera();
    void PrepareOS();
}

public class Samsung : ICellPhoneBuilder
{
    public CellPhone Phone { get; private set; }
    public Samsung() => Phone = new CellPhone();
    public void BuildBody() => Phone.BodyMaterial = "Titanium";
    public void BuildCamera() => Phone.CameraResolution = "10 MP";
    public void BuildMonitor() => Phone.MonitorSize = "10 Inch";
    public void PrepareOS() => Phone.OSName = "iOS";
}

public class Apple : ICellPhoneBuilder
{
    public CellPhone Phone { get; private set; }
    public Apple() => Phone = new CellPhone();
    public void BuildBody() => Phone.BodyMaterial = "Aluminum";
    public void BuildCamera() => Phone.CameraResolution = "12 MP";
    public void BuildMonitor() => Phone.MonitorSize = "9.8 Inch";
    public void PrepareOS() => Phone.OSName = "Android 10";
}

Up to this design point, we have been able to define and implement the necessary steps to make a mobile phone. Now, what is the process of making a mobile phone? How should the defined steps be combined with each other to finally make a mobile phone? To answer these questions, we use the CellPhoneDirector class and define the work process in its format. With these explanations, the class diagram changes as follows:

Figure 2.8: Builder design pattern UML diagram

According to Figure 2.8, the following code could be considered:

public class CellPhoneDirector
{
    private ICellPhoneBuilder builder;

    public CellPhoneDirector(ICellPhoneBuilder builder) => this.builder = builder;

    public CellPhone Construct()
    {
        builder.BuildBody();
        builder.BuildMonitor();
        builder.BuildCamera();
        builder.PrepareOS();
        return builder.Phone;
    }
}

According to the preceding code, it is clear that to build a mobile phone (Construct), its body should be built first (BuildBody), then the screen (BuildMonitor), and then the camera should be built (BuildCamera), and finally, the operating system should be prepared (PrepareOS). With these explanations, the design is finished, and to use this structure, you can use the following code:

CellPhoneDirector director = new CellPhoneDirector(new Samsung());

var phone = director.Construct();
Console.WriteLine(
    $"Body: {phone.BodyMaterial}, " +
    $"Camera: {phone.CameraResolution}, " +
    $"Monitor: {phone.MonitorSize}, " +
    $"OS: {phone.OSName}");

Participants:

  • Builder: In the preceding scenario, it is ICellPhoneBuilder, which is responsible for defining the format of the steps needed to build an object. In other words, it determines which stages and steps must be defined to build the object.

  • ConcreteBuilder: In the preceding scenario, Apple and Samsung are responsible for implementing the steps declared by the Builder and specifying how each step should be carried out.

  • Director: In the preceding scenario, it is CellPhoneDirector, which is responsible for implementing the object creation process. During this process, the Director uses the steps declared by the Builder to produce the object.

  • Product: In the preceding scenario, the CellPhone is the complex object that we are trying to build. The product is built by the Director within the ConcreteBuilders through the steps defined in the Builder.

Notes:

  • An abstract class can also be used to define the Builder.

  • Using this design pattern, the details of the object construction are hidden from the user's view. If we need to create an object differently, all we need to do is define a new builder (see the sketch after these notes).

  • Using this design pattern, and the Director in particular, there is more control over the object creation process.

  • The Builder design pattern can also be implemented as a Singleton.

  • The Builder and Bridge design patterns can be combined. In this case, the Director plays the role of the abstraction, and the various builders play the role of the implementation.
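As a minimal sketch of that point, a hypothetical additional builder (the class name and values below are illustrative, not from the book) can be added without touching CellPhoneDirector:

public class BudgetPhoneBuilder : ICellPhoneBuilder
{
    public CellPhone Phone { get; private set; } = new CellPhone();
    public void BuildBody() => Phone.BodyMaterial = "Plastic";
    public void BuildCamera() => Phone.CameraResolution = "8 MP";
    public void BuildMonitor() => Phone.MonitorSize = "6 Inch";
    public void PrepareOS() => Phone.OSName = "Android 13";
}

// The existing director builds it in exactly the same way:
var budgetPhone = new CellPhoneDirector(new BudgetPhoneBuilder()).Construct();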

Consequences: Advantages

  • The step-by-step construction of the object allows for better control over the construction process.

  • Single Responsibility Principle (SRP) is met. The code related to the construction of a complex object is separated from the business logic.

Consequences: Disadvantages

  • As the number of concrete builders grows, the amount of code and its complexity grow with it.

Applicability:

  • When we have telescoping constructors in our classes, this design pattern will likely improve the quality of the code and the design.

  • When the product manufacturing steps are the same but the output can differ, this design pattern will be useful. For example, in the preceding scenario, creating the product followed the same steps, but the output was different (Apple had its own output, and Samsung had its own output).

Related patterns:
Some of the following design patterns are not related to the Builder design pattern, but in order to implement this design pattern, checking the following design patterns will be useful:

  • Singleton
  • Composite
  • Bridge
  • Abstract Factory

Prototype

In this section, the prototype design pattern is introduced and analyzed according to the structure presented in the GoF design patterns section in Chapter 1, Introduction to Design Patterns.

Name:
Prototype
Classification:
Creational design patterns
Also known as:
-
Intent:
This design pattern tries to create objects using a prototype.
Motivation, Structure, Implementation, and Sample code:
Suppose that we have a clothing factory, and we intend to produce different types of pants. The usual method is that the pants are designed first. Then, a pair of pants is produced as a sample based on this design. Finally, the rest of the pants are mass-produced. Pants that are designed and produced at the beginning are called prototypes. According to this scenario, it is very important to define a method to mass-produce the rest of the pants from the prototype.

There are different types of pants, such as jeans, linen, and cloth pants, and the production of each prototype differs from the others. However, having a method for mass-producing (copying) a pair of pants is common to all of them. According to this scenario, we face the following class diagram:

Figure 2.9: Pants initial relation

According to Figure 2.9, the following code could be considered:

public interface IPant
{
    IPant Clone();
}

public class FabricPant : IPant
{
    public IPant Clone() => this.MemberwiseClone() as IPant;
}

public class CottonPant : IPant
{
    public IPant Clone() => this.MemberwiseClone() as IPant;
}

public class JeanPant : IPant
{
    public IPant Clone() => this.MemberwiseClone() as IPant;
}

According to the preceding structure, to copy the pants, the clone method is defined in the IPant interface, and each class copies the object using the MemberwiseClone method.

Now that the design is finished, the following structure can be used:

IPant jean1 = new JeanPant();

IPant jean2 = jean1.Clone();

The point to note in the preceding code is how the object is copied. The objects are copied using the shallow method. In shallow copying, value types are copied bit by bit, but for reference types only the address is copied, not the object itself, so a change made through one object is visible through the other. To prevent this, the deep method should be used to copy objects. Refer to the following figure:

Figure 2.10: Shallow Copy

The preceding figure shows the shallow copy, in which object X1 refers to B1 and object B1 also refers to C1. After copying, a new object named X2 is created, which still refers to B1. Therefore, applying a change in B1 through X1 will cause this change to be felt by object X2 as well:

Figure 2.11: Deep Copy

The preceding figure also shows the copy by deep method, in which after the creation of object X2, new objects B2 and C2 are also created, and X2 refers to new objects B2 and C2. Therefore, applying changes in B1 through X1 will not cause these changes to be felt in object X2.

According to the preceding explanations, and to clarify the differences between the shallow and deep methods, we also define another method called DeepClone. There are different methods to implement deep copy. Here we use the following method. Also, we define a class called Cloth, and we define the characteristics of the fabric through this class. The following code can be considered:

public class Cloth
{
    public string Color { get; set; }
}

public interface IPant
{
    public int Price { get; set; }
    public Cloth ClothInfo { get; set; }

    IPant Clone();
    IPant DeepClone();
}

public class JeanPant : IPant
{
    public int Price { get; set; }
    public Cloth ClothInfo { get; set; }

    public IPant Clone() => this.MemberwiseClone() as IPant;

    public IPant DeepClone()
    {
    JeanPant pant = this.MemberwiseClone() as JeanPant;
    pant.ClothInfo = new Cloth() { Color = this.ClothInfo.Color };
    return pant;
    }

    public override string ToString() =>
    $"Color:{this.ClothInfo.Color}, Price: {this.Price}";
}

Now, to check the output of each method, we will test the copy methods using the following code. For Shallow copying, we have:

IPant jean1 = new JeanPant()
{
    Price = 10000,
    ClothInfo = new Cloth(){ Color = "Red"}
};

IPant jean2 = jean1.Clone() ;//Shallow Copy
jean2.Price = 11000;
jean2.ClothInfo.Color = "Green";
Console.WriteLine($"jean1: {jean1}");

Console.WriteLine($"jean2: {jean2}");

By running the preceding code, we get the following output:

jean1: Color:Green, Price: 10000
jean2: Color:Green, Price: 11000

In this copy method, for reference types, only the reference address is copied; changing the value in ClothInfo through jean2 causes the ClothInfo value in jean1 to change as well.

But if we use DeepClone instead of the clone method in the preceding code, then we will see the following output:

jean1: Color:Red, Price: 10000
jean2: Color:Green, Price: 11000

Now, if we return to the proposed scenario, we can slightly change the design provided for the Prototype design pattern.

In the current design, it is not yet clear how the user communicates with the prototypes to create new objects.

Also, the current design offers no solution for the case where the number of prototypes is not fixed and prototypes can be created and destroyed dynamically.

To answer the first requirement, the user should communicate through IPant to produce objects in the preceding scenario. For the second requirement, another class can be used as a Registry, and the user's interaction with the prototypes can be routed through this registry as follows:

Figure 2.12: Prototype design pattern UML diagram

According to Figure 2.12, the following code could be considered:

public class PantRegistry
{
    public PantRegistry() => Pants = new List<IPant>();
    public List<IPant> Pants { get; private set; }
    public void Add(IPant obj) => Pants.Add(obj);
    public IPant GetByColor(string color) =>
        Pants
        .OrderBy(x => Guid.NewGuid())
        .FirstOrDefault(x => x.ClothInfo.Color == color)
        .DeepClone();

    public IPant GetByType(Type type) =>
        Pants
        .OrderBy(x => Guid.NewGuid())
        .FirstOrDefault(x => x.GetType() == type)
        .DeepClone();

}

In the preceding code, a property called Pants acts as a repository in which objects are stored. The Add method stores a new object in the repository, and the GetByColor and GetByType methods search the repository, find the desired object, and copy it. To use this code, you can do the following:

IPant jean1 = new JeanPant()
{
    Price = 10000,
    ClothInfo = new Cloth() { Color = "Red" }
};

IPant cotton1 = new CottonPant()
{
    Price = 7000,
    ClothInfo = new Cloth() { Color = "Red" }
};

IPant fabric1 = new FabricPant()
{
    Price = 12000,
    ClothInfo = new Cloth() { Color = "Blue" }
};

PantRegistry registry = new PantRegistry();
registry.Add(jean1);
registry.Add(cotton1);
registry.Add(fabric1);
Console.WriteLine($"{jean1}");

// Get a pair of jeans
Console.WriteLine($"{registry.GetByType(typeof(JeanPant))}");

// Get a pair of red pants regardless of the type of pants
Console.WriteLine($"{registry.GetByColor("Red")}");

As you can see in the preceding code, the objects are added to the repository after being created, searched, and copied through different methods.

Participants:

  • Prototype: In the preceding scenario, it is IPant, which is responsible for defining the template for copying the object.

  • Concrete prototype: In the preceding scenario, CottonPant, JeanPant, and FabricPant are responsible for implementing the provided template for copying the object.

  • Prototype registry: This is also called the Prototype Manager. In the preceding scenario, it is PantRegistry, whose task is to facilitate access to objects.

  • Client: It is the user who sends a request to create a new object through the Prototype or the Prototype Registry.

Notes:

  • In .NET, the ICloneable interface can also be used as the prototype (a minimal sketch follows these notes).

  • Singleton can also be used in the implementation of the prototype design pattern.

  • In the command design pattern, if you need to save history, you can use the prototype design pattern.

  • If there is a need to store the state of an object, and this object:

    • is not a complex object, and
    • does not have complex references or can be defined easily,

    then the prototype design pattern can be used instead of the memento design pattern.

  • The prototype registry can also be implemented using generics:

    public interface IPrototypeRegistry
    {
        void Add<T>(T obj) where T : ICloneable;
        T Get<T>() where T : ICloneable;
    }
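A minimal sketch of the ICloneable note above (the JeanPantCloneable class is illustrative and not part of the scenario's code):

public class JeanPantCloneable : ICloneable
{
    public int Price { get; set; }
    public object Clone() => MemberwiseClone(); // shallow copy; the caller must cast the result
}

var original = new JeanPantCloneable { Price = 10000 };
var copy = (JeanPantCloneable)original.Clone();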
Consequences: Advantages

  • Repetitive code for creating objects is removed, and we are faced with a smaller amount of code.

  • Due to the presence of the copying method, making complex objects can be easy.

  • Objects can be copied, and a new object can be created, without the need for a concrete class.

Consequences: Disadvantages

  • The process of copying objects can be complicated.

  • Implementing the copy process for classes that have circular dependencies can be complicated.

Applicability:

  • When the type of object to be created is determined at runtime, for example, dynamic loading.

  • When the objects of a class have almost the same data content.

Related patterns:
Some of the following design patterns are not related to the prototype design pattern, but to implement this design pattern, checking the following design patterns will be useful:

  • Singleton
  • Command
  • Memento
  • Abstract factory

Singleton

In this section, the singleton design pattern is introduced and analyzed according to the structure presented in the GoF design patterns section in Chapter 1, Introduction to Design Pattern.

Name:
Singleton
Classification:
Creational design patterns
Also known as:
-
Intent:
This design pattern tries to provide a structure in which there is always exactly one object of the class. One of the most important reasons for having only one object of a class is to control access to shared resources such as databases and the like.
Motivation, Structure, Implementation, and Sample code:
Suppose we want to design the appropriate infrastructure for communication with a SQL Server database. For this purpose, we have created the DbConnectionManager class. Throughout the program, we need to have only one object of this class. In this scenario, the database is the shared resource. With these explanations, the following design can be imagined:

Figure 2.13: Database connection manager initial relation

According to Figure 2.13, the following code could be considered:

public class DbConnectionManager
{
    private DbConnectionManager() { }
    public static DbConnectionManager GetInstance() => new();
}

As you can see in the Figure 2.13 class diagram and the preceding code, the constructor of the class is defined with a private access modifier. The reason is to prevent access to the constructor from outside the class, because every call to the constructor would return a new object, which would break the design. Therefore, access to the class's constructor is restricted. Within the class, the GetInstance method is defined as public static, so it can be accessed from outside the class without creating an object of the class. In fact, this method is responsible for handing the object to the outside.

The preceding code still does not cover the main requirement, which is to have only one object. Because every time the GetInstance method is called, a new object is always created and returned. The following code shows how to use this class and the existing problem:

前面的代码仍然没有涵盖主要要求,即只有一个对象。因为每次调用 GetInstance 方法时,总是会创建并返回一个新对象。下面的代码演示如何使用此类和现有问题:

DbConnectionManager obj1 = DbConnectionManager.GetInstance();

DbConnectionManager obj2 = DbConnectionManager.GetInstance();

Console.WriteLine($"obj1: {obj1.GetHashCode()}, obj2: {obj2.GetHashCode()}");

Output:

obj1: 58225482, obj2: 54267293

As shown in the preceding code and its output, two different objects have been created. To prevent this, we need to slightly change the GetInstance method:

  • The first step is to store the created object in a variable instead of only returning it directly to the user, and
  • The second step is to check this variable before creating an object; if it already holds an object, no new object is created.
  • Therefore, the class diagram and code change as follows:

Figure 2.14: Singleton design pattern UML diagram

According to Figure 2.14, the following code could be considered:

public class DbConnectionManager
{
    private static DbConnectionManager _instance;
    private DbConnectionManager() { }
    public static DbConnectionManager GetInstance()
    {
        if (_instance == null)
            _instance = new();
        return _instance;
    }
}

Now if we run the earlier code again, we get the following output:

obj1: 58225482, obj2: 58225482

As you can see, only one object of the DbConnectionManager class is created and made available. This way of implementing the singleton design pattern is only suitable for single-threaded environments; for multi-threaded environments, the implementation needs changes so that the design pattern works in a thread-safe manner. If the design pattern is implemented using the early initialization method, the problem raised for multi-threaded environments will not exist, and the desired object will be created once per AppDomain.

Before dealing with the thread-safe implementation, let us first observe the problem by simulating a multi-threaded environment:

Parallel.Invoke(
    () =>
    {
        DbConnectionManager obj1 = DbConnectionManager.GetInstance();
        Console.WriteLine($"obj1: {obj1.GetHashCode()}");
    },
    () =>
    {
        DbConnectionManager obj2 = DbConnectionManager.GetInstance();
        Console.WriteLine($"obj2: {obj2.GetHashCode()}");
    });

The Invoke method of the Parallel class tries to run the provided tasks in parallel. Running the preceding code produces output such as the following:

obj2: 6044116

obj1: 27252167

Note that these outputs may differ on different machines. The preceding output shows that obj2 is prepared first and then obj1. Since the two objects have different hash codes, we can conclude that even though the singleton pattern is used, in a multi-threaded environment we end up with several objects instead of just one. To solve this problem, as mentioned before, we need to implement the singleton in a thread-safe way. There are different ways to do this; here we use a lock block. For this purpose, the DbConnectionManager class is changed as follows:

public class DbConnectionManager
{
    private static DbConnectionManager _instance;
    private static object locker = new();
    private DbConnectionManager() { }
    public static DbConnectionManager GetInstance()
    {
        lock (locker)
        {
            if (_instance == null)
                _instance = new();
            return _instance;
        }
    }
}

In the preceding code, the first thread that calls the method acquires the locker object and enters the lock block. Any other thread that enters the method is blocked, because the locker is not available, until the previous thread releases it. As soon as the resource is released, the next waiting thread acquires it and enters the lock block. The thread-safety problem is solved, but the code can still be improved: the main weakness of this implementation is efficiency.

For example, thread 1 enters the GetInstance method and acquires the locker. Thread 2 enters and stops at the lock block. Thread 1 creates the object and exits the block, and thread 2 enters the lock block. Meanwhile, thread 3 enters the GetInstance method, and since the locker is held by thread 2, it must wait. But this wait is pointless: the object already exists, and thread 3 could simply receive it and return. With this explanation, the preceding code can be optimized as follows:

public static DbConnectionManager GetInstance()
{
    if (_instance != null)
        return _instance;
    lock (locker)
    {
        if (_instance == null)
            _instance = new();
        return _instance;
    }
}

This technique is called double-checked locking and can improve overall performance.

The second way to implement thread safety is to mark the GetInstance method as Synchronized. Using this approach, the GetInstance method is implemented as follows:

// Requires the System.Runtime.CompilerServices namespace.
[MethodImpl(MethodImplOptions.Synchronized)]
public static DbConnectionManager GetInstance()
{
    if (_instance == null)
        _instance = new();
    return _instance;
}

This approach has an important difference compared to the lock approach: with the lock block, only the critical section that creates the object is protected, whereas with the Synchronized option the whole method is protected and only one thread at a time is allowed to enter it.

Apart from the preceding approaches, there are other ways to implement a thread-safe singleton, such as using the Lazy<T> or Monitor class.
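
For instance, a minimal sketch of the Lazy<T> approach for the same DbConnectionManager class could look like this (the _lazy field name is illustrative):

public class DbConnectionManager
{
    // Lazy<T> defers creation until the first access and is thread-safe by default.
    private static readonly Lazy<DbConnectionManager> _lazy =
        new(() => new DbConnectionManager());

    private DbConnectionManager() { }

    public static DbConnectionManager GetInstance() => _lazy.Value;
}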

Participants:

  • Singleton: In the preceding scenario, this is the DbConnectionManager class, which is responsible for defining the object-creation process. This class should always ensure that only one object of the target class is created.

Notes:

  • By using this design pattern, any change needed in the object-creation process can be applied by changing the GetInstance method. For example, with this structure it is possible to arrange for there to always be exactly two objects of a class; to implement that logic, it is enough to change the GetInstance method.

  • When using this design pattern, pay attention to how it differs from a static class. Among the most important differences is that static classes cannot take part in inheritance, which makes them non-extensible.

  • The Abstract factory, Factory method, and Builder design patterns can be implemented using Singleton.

  • The initialization process can be implemented as either lazy or early. The difference between the two is when the desired object is initialized: in the lazy approach, the object is initialized when it is first needed; in the early approach, the object is initialized when the class is loaded. The lazy approach was used in the preceding scenario. To implement the early approach, you can do the following:

    public class DbConnectionManager
    {
        private static readonly DbConnectionManager _instance = new();
        private DbConnectionManager() { }
        public static DbConnectionManager GetInstance() => _instance;
    }

Consequences: Advantages

  • It can be ensured that only one object of the class exists.

  • Access to the object can be controlled. Since a single method is responsible for providing the object, it is possible to have better control over who requested the object and when.

Consequences: Disadvantages

  • Implementing a thread-safe structure can be complicated, and the use of locking mechanisms can affect performance.

  • Writing unit tests can be complicated.

Applicability:

  • When we need only one object of the class to be available to users.
  • When we need more control over an object and its data content, we can use this design pattern instead of global variables.

Related patterns:
Some of the following design patterns are not directly related to the Singleton design pattern, but studying them is useful when implementing it:

  • Facade
  • Abstract Factory
  • Factory Method
  • Builder
  • Prototype
  • Flyweight
  • Observer
  • State

Conclusion

In this chapter, you got acquainted with the various creational design patterns and learned how to manage the initialization of objects under different conditions.

In the next chapter, you will learn about structural design patterns and how to manage relationships between classes and objects.

NET 7 Design Patterns In-Depth 1. Introduction to Design Patterns

Chapter 1 Introduction to Design Patterns

Introduction

One of the obstacles to understanding and using design patterns is the lack of proper insight into software architecture and the reasons for using design patterns. Without this insight, design patterns only add complexity: used in the wrong place, they become wasted work, because a design pattern can only have a good impact on quality when it is applied in the right place.

In this chapter, an attempt has been made to briefly examine software architecture and design patterns. Enterprise application architecture is introduced, and the relationship between software design problems and design patterns is clarified. In the rest of the chapter, a brief look at .NET, some object-oriented principles, and UML is given because, throughout the book, UML is used for modeling, and the .NET framework and the C# language are used for sample code.

Structure

In this chapter, we will cover the following topics:

  • What is software architecture
  • What are design patterns
  • GoF design patterns
  • Enterprise application and its design patterns
    • Different types of enterprise applications
  • Design patterns and software design problems
    • Effective factors in choosing a design pattern
  • .NET
    • Introduction to object orientation in .NET
  • Object orientation SOLID principles
  • UML class diagram
  • Conclusion

Objectives

By the end of this chapter, you will be able to understand the role and place of design patterns in software design, be familiar with software architecture, and evaluate software design problems from different aspects. You are also expected to have a good view of the SOLID design principles by the end of this chapter and to get to know .NET and UML.

What is software architecture

Today, there are various definitions of software architecture. Most of them revolve around the system's basic structure and the design decisions that must be made in the initial steps of software production. The common feature of all these definitions is the importance they attach to architecture. Regardless of our attitude toward software architecture, we must always make sure that a suitable architecture can be developed and maintained. Also, when we look at software from an architectural point of view, we must know which elements and concerns are of great importance and always try to keep them in the best condition.

Consider software that has not been properly designed and whose essential elements have not been identified. During the production and maintenance of this software, we will face various problems, including difficulty implementing changes, which will reduce the speed of providing new features and increase the number of software errors and bugs. For example, pay attention to the following figure:

Figure 1.1: An example of software without proper architecture

In the preceding figure, full cells are the new features provided, and empty cells are the design and architectural problems and defects.

If we consider one row of Figure 1.1, the following figure will be seen:

Figure 1.2: Sample feature delivery in software without proper architecture

We can see how much time it takes to deliver three different features. If the correct design and architecture were adopted, new features would be delivered more quickly. The same row could then be presented as the following figure:

Figure 1.3: Sample feature delivery in software WITH proper architecture

The difference in length between the preceding two figures (Figure 1.2 and Figure 1.3) is significant. This shows the importance of the right design and architecture in software. In the short term, a high-quality infrastructure may appear to decrease production speed, but it will show its effect in the long run.

The following figure shows the relationship between Time and Output:

Figure 1.4: Time-Output Relation in Software Delivery

In Figure 1.4, at the beginning of the work, reaching the output with a low-quality infrastructure is faster than with a high-quality infrastructure. However, as time passes and the capabilities and complexity of the software increase, the ability to maintain and change the software improves with the better-quality infrastructure. This reduces costs, increases user satisfaction, and improves maintenance.

In this regard, Gerald Weinberg, the late American computer scientist, has a quote that says:

"If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization."

Weinberg was trying to express the importance of infrastructure and software architecture. Following Weinberg's point, paying attention to maintainability in the design and implementation of software solutions is important. Today, various principles can be useful in reaching a suitable infrastructure.

Some of these principles are as follows:

  • Separation of concerns: Different parts of the software should be separated from each other according to the work they do.

  • Encapsulation: This is a way to restrict direct access to some components of an object, so users cannot access the state values of all the variables of a particular object. Encapsulation can hide data members, functions, or methods associated with an instantiated class or object. Users have no idea how classes are implemented or stored; they only know that values are passed and initialized (data hiding). Encapsulation also makes it easier to change the implementation and adapt to new requirements (ease of use). A short sketch follows this item.
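
For example, a minimal sketch of encapsulation in C# (the BankAccount class and its members are illustrative, not from the book):

public class BankAccount
{
    // The balance is hidden; it can only be changed through the Deposit method.
    private decimal _balance;

    public decimal Balance => _balance;   // read-only view for users of the class

    public void Deposit(decimal amount)
    {
        if (amount > 0)
            _balance += amount;           // internal state changes only via controlled behavior
    }
}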

  • Dependency inversion: High-level modules should not depend on low-level modules; the dependency between them should only happen through abstractions. To clarify, consider the following example:
    We have two different times in software production: compile time and run time. Suppose that, in the compile-time dependency graph, the following relationships exist between classes A, B, and C:

Figure 1.5: Relationship between A, B, and C at compile time

As you can see, at compile time A is directly connected to B in order to call a method of B, and the same is true of the relationship between B and C. This connection will be established in the same way at runtime, as follows:

Figure 1.6: Relationship between A, B, and C at runtime

The problem with this type of communication is that there is no loose coupling between A-B and B-C; these parts are highly dependent on each other, which causes maintainability problems. To solve this, instead of a direct connection between A and B, we base the compile-time connection on abstractions, as shown in the following figure:

Figure 1.7: Relationship between A, B, and C based on abstractions

In this arrangement, A depends at compile time on an abstraction of B, and B implements that abstraction. The runtime communication ultimately remains the same as before, but it results in loose coupling in the sense that the implementation of B can be changed without changing A.

The communication at runtime in this arrangement is shown in the following figure:

Figure 1.8: Relationship between A, B, and C based on abstractions at runtime
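
A minimal C# sketch of this idea (the IB interface and the DoWork/Execute members are illustrative assumptions that follow the A/B naming in the figures):

// Abstraction that A depends on at compile time.
public interface IB
{
    void DoWork();
}

// Low-level module B implements the abstraction.
public class B : IB
{
    public void DoWork() { /* ... */ }
}

// High-level module A depends only on the abstraction, not on B directly.
public class A
{
    private readonly IB _b;
    public A(IB b) => _b = b;
    public void Execute() => _b.DoWork();
}

At runtime, an object of B (or any other implementation of IB) is passed to A, so B's implementation can change without touching A.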

  • Explicit dependencies: Classes and methods must be honest with their users about what they need. For example, if attribute X must have a correct value for a class to function properly, this requirement can be enforced through the class constructor, which prevents unusable objects from being created (see the short sketch below).
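
As a hypothetical illustration of this principle (the ReportGenerator class and its parameter are not from the book):

using System;

public class ReportGenerator
{
    private readonly string _connectionString;

    // The dependency is explicit: the object cannot be constructed without a valid value.
    public ReportGenerator(string connectionString)
    {
        if (string.IsNullOrWhiteSpace(connectionString))
            throw new ArgumentException("A connection string is required.", nameof(connectionString));
        _connectionString = connectionString;
    }
}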

  • Single responsibility: This principle is proposed as one of the architectural principles in object-oriented design. It is similar to separation of concerns and states that an object must have only one task and only one reason to change.

  • DRY: The behavior related to a specific concept should not be implemented in several places. Violating this principle means that changing the behavior requires touching all the duplicated code, increasing the probability of errors and bugs.

  • Persistence ignorance: Business models should be storable in data sources regardless of the storage type. In .NET these models are often called Plain Old CLR Objects (POCOs). This matters because the storage resource can change over time (for example, from SQL Server to Azure Cosmos DB), and this should not affect the rest of the system. Some signs of violating this principle are the following:

    Binding to a specific parent class

    The requirement to implement a specific interface

    Requiring the class to store itself (as in Active Record)

    The presence of mandatory parametric constructors

    The presence of virtual features in the class

    The presence of attributes specific to the storage technology

    The preceding cases are considered violations of the persistence-ignorance principle because they create a dependency between the models and the storage technology, making it difficult to adapt to a new storage technology in the future.

  • Bounded contexts: A large problem can be divided into smaller conceptual sub-problems. In other words, each sub-problem represents a context that is independent of the other contexts. Communication between different contexts is established through programming interfaces. Sharing communication channels or data sources between contexts should be avoided, as it causes tight coupling between them.

What are design patterns

As the term "design pattern" suggests, it is simply a pattern that can be used to solve a recurring problem. A pattern is not a finished design that can be directly converted into source code or machine code. During the design and production of software, we face various design and implementation problems that are repetitive, so the answers to them often have a fixed shape. For example, software may need a feature for sending messages to the end user, so a suitable infrastructure must be designed and implemented for this requirement. On the other hand, there are different ways to send messages to end users, such as email, SMS, and so on. This problem has fixed generalities in most software, and the answer often has a fixed design and format.

A design pattern is a general, repeatable solution to common problems in software design. Therefore, if we encounter a new problem during software production for which no pattern has been introduced, we need to solve it without the help of existing practices, by designing a correct structure ourselves.

Using design patterns has several advantages:

  • Increased scalability

  • Increased extensibility

  • Increased flexibility

  • Increased development speed

  • Fewer errors and problems

  • Less code to write

The important thing about design patterns is that they are not themselves the architecture of a software system; they only provide a correct, object-oriented way of coding so that you can choose and implement the right solution to a problem.

GoF design patterns

Design patterns were originally introduced by Christopher Alexander, an architect who used patterns to design buildings. Alexander's way of thinking led Erich Gamma to use design patterns for developing and producing software in his doctoral dissertation. A short time later, Richard Helm started working with Erich Gamma, and later John Vlissides and Ralph Johnson joined the group. The initial idea was to publish the design patterns as an article, but due to its length, the full text was published as a book. This four-person group, also called the Gang of Four (GoF), published the book "Design Patterns: Elements of Reusable Object-Oriented Software", in which they classified and presented 23 design patterns in 3 categories (creational, structural, and behavioral), trying to categorize them from the user's perspective. To present them consistently, the GoF developed a general structure for introducing design patterns, which consists of the following sections:

  • Name and Classification: Shows the design pattern's name and specifies its category.

  • Also Known As: If the design pattern is known by other names, they are introduced in this section.

  • Intent: Gives a brief explanation of the design pattern.

  • Motivation, Structure, Implementation, and Sample Code: Describes the problem, the main structure, the implementation steps, and the source code of the design pattern.

  • Participants: Introduces and describes the different participants (the classes and objects involved) in the design pattern.

  • Notes: Gives significant points regarding the design and implementation of the design pattern.

  • Consequences: Gives the advantages and disadvantages of the discussed design pattern.

  • Applicability: Briefly states where the discussed design pattern can be helpful.

  • Related Patterns: Mentions the relationship of the design pattern with other design patterns.

The 23 presented patterns can be divided, in terms of scope (whether the pattern is applied to the class or its objects) and purpose (what the pattern does), as follows:

Purpose: Creational
  Class scope: Factory Method
  Object scope: Abstract Factory, Builder, Prototype, Singleton

Purpose: Structural
  Class scope: Class Adapter
  Object scope: Object Adapter, Bridge, Composite, Decorator, Facade, Flyweight, Proxy

Purpose: Behavioral
  Class scope: Interpreter, Template Method
  Object scope: Chain of Responsibility, Command, Iterator, Mediator, Memento, Observer, State, Strategy, Visitor

Table 1.1: Classification of GoF Design Patterns

Every design pattern has four essential features, as follows:

  • Name: Every pattern must have a name, chosen so that the application, problem, or solution it provides can be inferred from it.

  • Problem: Indicates where and how the design pattern can be applied.

  • Solution: Describes the solution, the elements involved, and their relationships.

  • Consequences: Expresses the results, advantages, disadvantages, and effects of using the design pattern.

The relationship of all these 23 patterns can be seen in the following figure:

Figure 1.9: Relationships of GoF Design Patterns

The design patterns provided by the GoF are not the only design patterns available. Martin Fowler has also introduced a series of design patterns, with a different view of software production problems, called Patterns of Enterprise Application Architecture (PofEAA). He tried to introduce suitable solutions for everyday problems in producing enterprise software. Although there are criteria for when to use these design patterns, even small software may need PofEAA design patterns. Martin Fowler divided the patterns he provided into different categories, which include the following:

  • Domain-logic patterns
  • Data-source architectural patterns
  • Object-relational behavioral patterns
  • Object-relational structural patterns
  • Object-relational metadata-mapping patterns
  • Web presentation patterns
  • Distribution patterns
  • Offline concurrency patterns
  • Session-state patterns
  • Base patterns

In this chapter, an attempt has been made to explain GoF and PofEAA design patterns with a simple approach, along with practical examples.

Enterprise application and its design patterns

People construct different types of applications, and each has its own challenges and complexities. For example, in one piece of software, concurrency issues may be significant and critical, while in another, the complexity of data structures may be the main concern. The term enterprise application, or information system, refers to systems in which we face complexity in data processing and storage. To implement such software, special design patterns are needed to manage business logic and data. It is important to understand that some design patterns are useful for many different types of software, while others are more suitable specifically for enterprise applications.

Among the most famous enterprise applications, we can mention accounting, toll payment, insurance, and customer-service software. On the other hand, software such as text processors, operating systems, compilers, and even computer games are not part of the enterprise application category.

An important characteristic of enterprise applications is the durability of data. Data may be stored in data sources for years and will be needed at different times, in different parts of the program, and at different steps of the process. During the lifetime of the data, we may encounter both small and significant changes in operating systems, hardware, and compilers. The volume of data in an enterprise application is often large, and different databases are often needed to store it.

When we have a lot of data and have to present it to users, graphical interfaces and different pages are needed. The users of these pages differ from each other and have different levels of software and computer knowledge. Therefore, we use different methods and procedures to present data to users in the best way.

Enterprise applications often need to communicate with other software, and each piece of software may have its own technology stack, so we face different methods of interaction, communication, and software integration. Even at the level of business analysis, each piece of software may analyze a specific entity differently, leading to different data structures. From another point of view, business logic can be complex, and it is very important to organize it effectively and to be able to change it over time.

When the term enterprise application is used, people often assume we are dealing with big software. In reality, this is not necessarily the case: small software can create more value for the end user than large software. One way to deal with a big problem is to break it into smaller problems; when these smaller problems are solved, they lead to the solution of the bigger one. The same principle applies to large software.

Different types of enterprise applications

It should always be kept in mind that every enterprise application has its own challenges and complexities, so no single solution can be generalized to all types of enterprise applications. Consider the following two examples:

Example 1: In online selling software, we face many concurrent users. In this case, the proposed solution should, in addition to using resources effectively, have good scalability so that, with the help of hardware enhancement, the number of supported concurrent users can grow with the volume of incoming requests. In this type of software, the end user should be able to work with it easily, so it is necessary to design a web application that can run in most browsers.

Example 2: We may face software in which the volume of concurrent users is low but the complexity of the business is high. For these systems, more complex graphical interfaces are needed in order to manage more complex transactions.

As is evident in the preceding two examples, having a fixed architectural design for every type of enterprise software is not possible. As mentioned before, the choice of architecture depends on a precise understanding of the problem.

One of the important points in dealing with enterprise applications and their architecture is attention to performance, and the approach to it can differ among teams. One team may pay attention to performance issues from the beginning; another may prefer to build the software first and then identify and fix performance issues by monitoring various metrics; a team might also use a combination of these two approaches. Whichever method is used to improve performance, the following factors are usually important to address:

  • Response time: The time it takes to process a request and return the appropriate response to the user.

  • Responsiveness: For example, suppose the user is uploading a file. Responsiveness is better if the user can keep working with the software during the upload operation. If the user has to wait while the upload runs, responsiveness is no better than the response time.

  • Latency: The minimum time it takes to receive any response. For example, suppose we are connected to another system through Remote Desktop; the time it takes for a request and its response to travel through the network and reach us indicates the latency.

  • Throughput: The amount of work that can be done in a certain period. For example, when copying a file, throughput can be measured as the number of bytes copied per second. For enterprise applications, metrics such as the number of transactions per second (TPS) can also be used.

  • Load: The amount of pressure on the system, for example, the number of online users. Load is often an important factor in interpreting the other factors; for example, the response time for ten users may be 1 second, while for 20 users it may be 5 seconds.

    • Load sensitivity: Describes how the response time changes with load. For example, assume that system A has a response time of 1 second for anywhere from 10 to 20 users, while system B has a response time of 0.5 seconds for 10 users that rises to 2 seconds when the number of users reaches 20. In this case, A has less load sensitivity than B.

  • Efficiency: Performance divided by resources. A system that delivers 40 TPS on 2 CPU cores (20 TPS per core) is more efficient than a system that delivers 50 TPS on 6 CPU cores (about 8.3 TPS per core).

  • Capacity of the system: A measure of the maximum operating power or the maximum effective load that can be tolerated.

  • Scalability: A measure of how efficiency is affected by adding resources. Two methods are commonly used for scalability: vertical (scale up) and horizontal (scale out).

The critical point is that design decisions will not necessarily have similar effects on all of these performance factors. Usually, when producing enterprise applications, higher priority is given to scalability, because it can have a more significant effect and is easier to act on. In some situations, a team may prefer to increase throughput by implementing a series of complex optimizations so that it does not have to bear the high cost of purchasing hardware.

The PofEAA patterns presented in this book are inspired by the patterns presented in the book Patterns of Enterprise Application Architecture, written by Martin Fowler. The following structure is used in presenting PofEAA patterns:

  • Name and Classification: Shows the design pattern's name and specifies its category.

  • Also Known As: If the design pattern is known by other names, they are introduced in this section.

  • Intent: Gives a brief explanation of the design pattern.

  • Motivation, Structure, Implementation, and Sample Code: Describes the problem, the main structure, the implementation steps, and the source code of the design pattern.

  • Notes: Gives significant points regarding the design and implementation of the design pattern.

  • Consequences: Gives the advantages and disadvantages of the discussed design pattern.

  • Applicability: Briefly states where the discussed design pattern can be helpful.

  • Related Patterns: Mentions the relationship of the design pattern with other design patterns.

Design patterns and software design problems

When we talk about software design, we are talking about the plan, map, or structural layout on which the software is supposed to be built. During a software production process, various design problems need to be identified and resolved. The same behavior exists in the surrounding world and in real life: when we present a solution, it addresses a specific problem. The same point of view is valid in the software production process. As mentioned earlier, design patterns solve many different problems in a software production process. In order to identify and apply a suitable design pattern and working method for a problem, the first step is to determine the relationship between design patterns and the software problem at hand. To better understand this relationship, pay attention to the following points:

1. Finding the right objects: In the world of object-oriented programming, there are many different objects. Each contains a set of data and performs certain tasks. The things an object can do are called its behavior or its methods, and in order to change the content of the data that the object carries, it is necessary to act through these methods. One of the most important and most difficult parts of designing and implementing an object-oriented program is decomposing a system into a set of objects. This is difficult because the analysis involves the boundaries of encapsulation, granularity, dependence, flexibility, efficiency, and so on.
When a problem arises, there are different ways to transform it into an object-oriented design. One way is to pay attention to the structure of the sentences describing it, convert the nouns into classes, and present the verbs as methods. For example, in the phrase:
"A user can log in to the system by entering the username and password."
"User" plays the role of the noun in the sentence, and "log in" is the verb. Therefore, you can create a class called User with a method called Login, as follows:

public class User
{
    public void Login(/* Inputs */) { }
}

Another way is to pay attention to the connections, tasks, and interactions and thereby identify the classes, methods, and so on. No matter which method is used, at the end of the design we may encounter classes that have no equivalent in the real world or the business domain. Design patterns help with such abstractions, so these classes can be placed in their proper place and used. For example, the class used to implement a sorting algorithm may not be identified in the early stages of analysis and design, but with the help of design patterns it can be designed correctly and connected with the rest of the system.

2. Recognizing the granularity of objects: An object has a structure and can carry various details, and the depth of these details can be high or low. This factor affects the size and the number of objects. Deciding what boundaries and limits the object structure should have is an important decision, and design patterns can help form these boundaries and limits accurately.

3. Knowing the interface of objects: A behavior of an object consists of a name, input parameters, and an output type. These three components together form the signature of the behavior. The set of signatures provided by an object is called the interface of the object. The object's interface specifies under what conditions and in what ways a request can be sent to the object. These interfaces are all that is needed to communicate with an object; having this information does not mean knowing how the object is implemented. Being able to connect a request to the appropriate object and the appropriate behavior at the time of execution is called dynamic binding.

public class Sample
{
    public int GetAge(string name) => 0;                      // signature 1
    public int GetAge(string nationalNo, string name) => 0;   // signature 2
}

Mentioning a request at coding time does not bind the request to an implementation. This binding happens at execution time, which is what makes it dynamic binding. It provides the ability to replace objects with one another at runtime, which is called polymorphism in object orientation. Design patterns also help in shaping such communication and interactions, for example, by placing constraints on the structure of classes (see the short sketch below).
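
A minimal sketch of dynamic binding and polymorphism (the Shape, Circle, and Square names are illustrative, not from the book):

using System;

public abstract class Shape
{
    public abstract double Area();                 // abstract behavior, no implementation here
}

public class Circle : Shape
{
    private readonly double _radius;
    public Circle(double radius) => _radius = radius;
    public override double Area() => Math.PI * _radius * _radius;
}

public class Square : Shape
{
    private readonly double _side;
    public Square(double side) => _side = side;
    public override double Area() => _side * _side;
}

public static class Demo
{
    public static void Run()
    {
        Shape shape = new Circle(2);
        Console.WriteLine(shape.Area());   // bound to Circle.Area at runtime
        shape = new Square(3);
        Console.WriteLine(shape.Area());   // bound to Square.Area at runtime
    }
}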

4. Knowing how to implement objects: Objects are created by instantiating a class, which leads to the allocation of memory for the object's internal data. New classes can also be created as children of another class using inheritance; in that case, the child class contains all the accessible data and behaviors of its parent class. If a class needs to leave the implementation of some behavior to its children (abstract behavior), the class can be defined as an abstract class. Since such a class is only an abstraction, it cannot be instantiated. A class that is not abstract is called a concrete (intrinsic) class.

public abstract class AbstractSample { }        // Abstract class
public class ConcreteSample { }                 // Concrete (intrinsic) class
public abstract class SampleWithAbstractMember
{
    public abstract void Get();                 // Abstract method
}

How objects are instantiated and how classes are formed and implemented are very important points that deserve attention. Several design patterns are useful in these situations; for example, one design pattern may help control how instances of a class are created, and another may help define its static structure.

5. Development based on interfaces: With the help of inheritance, a class can access the accessible behavior and data of its parent class and reuse them. However, being able to reuse an implementation and having a group of objects with a similar structure are two different things; this distinction is very important and shows its value in polymorphism, which usually works through abstract classes or interfaces.
The use of abstract classes and interfaces keeps the user unaware of the exact type of object being used, because the object adheres to the provided abstraction or interface. Users are also unaware of the classes that implement these objects; they only know the abstraction. This makes it possible to write code against interfaces and abstractions.
The main purpose of creational design patterns is to provide different ways to connect interfaces with implementations. This category of design patterns tries to provide this connection in an inconspicuous way at instantiation time.

6. Attention to reuse: Another important concern in software design and implementation is benefiting from reusability and providing appropriate flexibility in the code. For example, you should pay attention to the differences between inheritance and composition and use each one in the right place; these are two of the most widely used ways to achieve code reusability. Using inheritance, one class can be implemented based on another class, and reusability takes the form of a child-class definition. This type of reuse is called white-box reuse:

public class Parent
{
    public void Show_Parent() { }
}

public class Child : Parent   // Inheritance
{
    public void Show_Child() { }
}

On the other hand, composition provides reusability by placing an object inside a class and adding new functionality in that class. This type of reuse is also called black-box reuse:

public class Engine
{
    public void Get() { }
}

public class Car
{
    private Engine _engine;
    public Car(Engine engine) => _engine = engine;   // Composition
}

Both inheritance and composition have advantages and disadvantages that should be considered when using them. Empirically, however, most programmers overuse inheritance to achieve reusability, and this causes problems in code development; using composition can be very helpful in many scenarios, and by using delegation you can make composition even more powerful. Today, there are other features that help produce code with suitable reusability. For example, in a language like C#, generics (also called parameterized types) can be very useful in this direction (see the short sketch below). With all these explanations, a number of design patterns help provide reusability and flexibility in the code.
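
For instance, a minimal sketch of reuse through generics (the Repository<T> name is illustrative, not from the book):

using System.Collections.Generic;

// One generic class is reused for any element type instead of one class per type.
public class Repository<T>
{
    private readonly List<T> _items = new();
    public void Add(T item) => _items.Add(item);
    public IReadOnlyList<T> GetAll() => _items;
}

// Usage: the same code works for any type, e.g.
// var users = new Repository<User>();
// var engines = new Repository<Engine>();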

7. Design for change: A good design can anticipate future changes and is not vulnerable to them. If the design cannot make a good prediction of the future, it should be ready to absorb extensive changes later. One of the advantages of design patterns is that they allow the design to remain flexible in the face of future changes.

Effective factors in choosing a design pattern

When first faced with the list of 23 GoF design patterns, it can be difficult to know which pattern to choose for a particular problem, and this difficulty increases when we add the PofEAA design patterns to the list. This is enough to make the selection process difficult and confusing. In order to make a suitable choice, it is recommended to consider the following points:

  • Understand the problem space and how a design pattern can solve the problem: The first step in choosing a design pattern is to identify the problem correctly. Once the problem becomes clear, think about how the presence of a design pattern can help with it.

  • Examine the generalities of design patterns using their purpose and scope: By doing this review, you can understand how well the problem at hand matches different design patterns.

  • Examine the interconnections of design patterns: For example, if the Abstract Factory design pattern is combined with Singleton, only one instance of the Abstract Factory can be created; to make it more dynamic, a Prototype can be used.

  • Examine the similarities and differences of design patterns: For example, if the problem at hand is a behavioral problem, you can choose the appropriate pattern from among the behavioral patterns.

  • Know the factors that lead to redesign: In this step, the factors that can cause a redesign should be identified.

  • Know the design variables: In this step, you should understand what can be changed in the design.

When the appropriate design pattern is chosen, it should be implemented. In order to use and implement a design pattern, you must first study that pattern completely. In this study, the application cases and consequences of the model should be carefully studied and examined. After understanding the generalities of the pattern, the details should be examined, and these details ensure that we know the elements involved have sufficient information about the interactions between these elements.

当选择了适当的设计模式时,应该实现它。为了使用和实现设计模式,您必须首先完整地研究该模式。在本研究中,应仔细研究和检查该模型的应用案例和后果。在了解了模式的一般性之后,应该检查细节,这些细节确保我们知道所涉及的元素有足够的信息来了解这些元素之间的交互。

In the next step, examine how the design pattern is implemented by studying existing code samples. Then select appropriate names for each of the elements involved, taking the problem and the business at hand into account; each name should reflect the purpose of that element in the business. After choosing the names, implement the various classes, interfaces, and relationships. During the implementation, code in different parts of the system may need to change. Choosing appropriate names for methods, and then implementing them, are the next steps to consider while implementing a design pattern.

Figure 1.10: Choosing Design Pattern Process

.NET

In 2002, Microsoft released .NET Framework, a development platform for creating Windows apps. Today, .NET Framework is at version 4.8 and remains fully supported by Microsoft. In 2014, Microsoft introduced .NET Core as a cross-platform, open-source successor to .NET Framework. This new implementation of .NET kept the name .NET Core through version 3.1; the next version was named .NET 5. New versions continue to be released annually, each with a higher version number, and they include significant new features and often enable new scenarios.

There are multiple variants of .NET, each supporting a different type of app. The reasons for multiple variants are partly historical and partly technical.

.NET implementations (historical order):

  • .NET Framework: It provides access to the broad capabilities of Windows and Windows Server and is also used extensively for Windows-based cloud computing. This is the original .NET.

  • Mono: A cross-platform implementation of .NET Framework. The original community and open-source .NET used for Android, iOS, and Wasm apps.

  • .NET (Core): A cross-platform and open-source implementation of .NET, rethought for the cloud age while remaining significantly compatible with the .NET Framework. Used for Linux, macOS, and Windows apps.

According to the Microsoft .NET website, .NET is a free, cross-platform, open-source developer platform for building many different types of applications. With .NET, you can use multiple languages, editors, and libraries to build for web, mobile, desktop, games, IoT, and more. You can write .NET apps in C#, F#, or Visual Basic. C# is a simple, modern, object-oriented, and type-safe programming language. F# is a programming language that makes it easy to write succinct, robust, and performant code. Visual Basic is an approachable language with a simple syntax for building type-safe, object-oriented apps.

Whether you are working in C#, F#, or Visual Basic, the code will run natively on any compatible operating system. You can build many types of apps with .NET; some are cross-platform, and some target a specific set of operating systems and devices.

.NET provides a standard set of base class libraries and APIs that are common to all .NET applications. Each app model can also expose additional APIs that are specific to the operating systems it runs on and the capabilities it provides. For example, ASP.NET is a cross-platform web framework that provides additional APIs for building web apps that run on Linux or Windows.


.NET helps you develop high-quality applications faster. Modern language constructs like generics, Language Integrated Query (LINQ), and asynchronous programming make developers productive. Combined with the extensive class libraries, common APIs, multi-language support, and the powerful tooling provided by the Visual Studio family, it is the most productive platform for developers.


.NET 7, the successor to .NET 6, is the latest version of Microsoft .NET. It is built for modern cloud-native apps, mobile clients, edge services, and desktop technologies, and with .NET MAUI it lets you create mobile experiences from a single codebase without compromising native performance.

.NET apps and libraries are built from source code and project files using the .NET CLI or an Integrated Development Environment (IDE) like Visual Studio.


The following example is a minimal .NET app:

Project file:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net7.0</TargetFramework>
  </PropertyGroup>
</Project>

Source Code:

Console.WriteLine("Welcome to .NET 7 Design Patterns, in Depth!");

The app can be built and run with the .NET CLI:

% dotnet run

It can also be built and run as two separate steps. The following example is for an app that is named app:

% dotnet build

% ./bin/Debug/net7.0/app

According to the Microsoft .NET website, new versions are released annually in November. .NET versions released in odd-numbered years are Long-Term Support (LTS) releases and are supported for three years; versions released in even-numbered years are Standard-Term Support (STS) and are supported for 18 months. The quality level, breaking-change policies, and all other aspects of the releases are the same. The .NET team at Microsoft works collaboratively with other organizations such as Red Hat (for Red Hat Enterprise Linux) and Samsung (for the Tizen platform) to distribute and support .NET in various ways.

Introduction to object orientation in .NET


An object in the real world is a thing. For example, John's car, Paul's mobile phone, and Sara's table are all objects in the real world. There is a similar view in the programming world, where an object is a representation of something in the real world; for example, Tom's bank account in a piece of financial software represents Tom's bank account in the real world. The details of object orientation and object-oriented programming are beyond the scope of this chapter, but in the following, we will get to know some of its important concepts.

In the C# programming language, the class or struct keywords are used to define the type of an object, which is essentially the outline and blueprint of that object. Object orientation has a series of main and fundamental concepts, briefly discussed in the following:

Encapsulation: Deals directly with the data and methods associated with the object. By using encapsulation, we control access to data and methods and define how the internal state of an object can be changed.

public class DemoEncap
{
  private int studentAge;

  // The field can only be accessed through the following property,
  // so it is encapsulated and access to it is controlled.
  public int Age
  {
    get => studentAge;
    set => studentAge = value;
  }
}

Composition: Describes what an object is made of. For example, a car consists of four wheels.

Aggregation: States what things can be associated with the object without being part of it. For example, a human is not part of a car, but a human can sit inside the car and drive it.
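
As a minimal sketch of these two relationships (the Car, Wheel, and Driver types are hypothetical examples, not from the book):

public class Wheel { }

public class Driver
{
  public string Name { get; set; }
}

public class Car
{
  // Composition: the wheels are created and owned by the Car itself.
  private readonly Wheel[] wheels = { new Wheel(), new Wheel(), new Wheel(), new Wheel() };

  // Aggregation: a Driver exists independently and is only associated with the Car.
  public Driver CurrentDriver { get; set; }
}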

Inheritance: By using inheritance, existing code can be reused. This reuse happens in the form of defining a child class based on a parent class; in that case, all accessible methods and features of the parent class are available to the child class. Inheritance can also be used to extend the capabilities of the parent class. When using inheritance, two types of casting can occur.

Implicit casting: Storing a child class object in a variable of the parent class type.

Explicit casting: In this type of casting, the destination type must be stated explicitly. Because an exception is possible here, it is better to check whether the cast can succeed, for example with the is or as keyword, before performing it.
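
A short sketch of both kinds of casting, using hypothetical Parent and Child classes and the is/as checks mentioned above:

public class Parent { }
public class Child : Parent { }

public class CastingDemo
{
  public void Run()
  {
    Parent p = new Child();   // implicit casting: a child object stored in a parent variable

    if (p is Child)           // check before an explicit cast to avoid an InvalidCastException
    {
      Child c1 = (Child)p;    // explicit casting
    }

    Child c2 = p as Child;    // 'as' returns null instead of throwing if the cast fails
  }
}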

Abstraction: By using abstraction, the main idea of the object is identified and the details are ignored; the child classes then implement the details based on their own problem space. In C#, the abstract keyword is used to define an abstract class or method, and child classes implement the introduced abstractions through inheritance. How abstract a class should be is an important design decision: the more abstract the class, the more widely it can be reused, but the less code it can share.
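
A minimal sketch of abstraction with the abstract keyword (the Shape and Circle types are hypothetical, not from the book):

public abstract class Shape
{
  // The main idea: every shape has an area; how it is computed is a detail left to child classes.
  public abstract double CalculateArea();
}

public class Circle : Shape
{
  public double Radius { get; set; }
  public override double CalculateArea() => Math.PI * Radius * Radius;
}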

Polymorphism: By using polymorphism, the child class has the ability to change the implementation of its parent class. To do so, the child class changes the implementation of a method using the override keyword; for the implementation to be changeable, the parent class must define the method as virtual. Members defined as abstract in the parent class are also implemented with the override keyword in the child class. If a method is defined in the parent class and a method with the same signature is defined in the child class, the parent method is said to be hidden (method hiding); this is called non-polymorphic inheritance. To define this kind of method, the new keyword can be used, although its use is optional.

public class A
{
  public void Print() => Console.WriteLine("I am Parent");
}

public class B : A
{
  public new void Print() => Console.WriteLine("I am Child");
}
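
To contrast the two behaviors, the following sketch (assuming the A and B classes above, plus hypothetical Animal and Dog classes) shows that hiding resolves the call by the compile-time type, while overriding resolves it by the run-time type:

public class Animal
{
  public virtual void Speak() => Console.WriteLine("Some sound");
}

public class Dog : Animal
{
  // Polymorphic inheritance: the override replaces the parent implementation.
  public override void Speak() => Console.WriteLine("Woof");
}

public class PolymorphismDemo
{
  public void Run()
  {
    A a = new B();
    a.Print();                 // prints "I am Parent" (method hiding: compile-time type decides)

    Animal animal = new Dog();
    animal.Speak();            // prints "Woof" (override: run-time type decides)
  }
}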

When we are dealing with a large class, the implementation of the class can be split into several parts, typically across several files. In this case, the class is declared as partial. A class in C# can have different members, including the following:
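
A minimal sketch of a partial class split across two files (the Order class and file names are hypothetical):

// File: Order.Identity.cs
public partial class Order
{
  public int Id { get; set; }
}

// File: Order.Pricing.cs
public partial class Order
{
  public decimal Total { get; set; }
}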

Field: The field is used to store data. Fields have three different categories:
Constant: The data placed in these fields never changes, and the compiler copies the value to every place where the constant is used.

For example, consider the following code:

public class A
{
  public const string SampleConst = ".NET Design Patterns";
}

public class B
{
  public B()
  {
    string test = A.SampleConst;
  }
}

After compiling the code, the compiler effectively generates the following code (the decompiled output was captured with ILSpy):

public class A
{
  public const string SampleConst = ".NET Design Patterns";
}

public class B
{
  public B()
  {
    string test = ".NET Design Patterns";
  }
}

As you can see, the compiler copies the value of SampleConst wherever the constant is used.
Read-only: The data in these fields can only be assigned at declaration or in a constructor and cannot be changed after the object is created.
Event: In these types of fields, the available data is actually a reference to one or more methods that are supposed to be executed when a specific event occurs.
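
A short sketch showing a read-only field and an event field together (the Account class is hypothetical):

public class Account
{
  // Read-only field: can only be assigned here or in a constructor.
  private readonly string accountNumber;

  // Event field: holds references to the methods to run when the balance changes.
  public event EventHandler BalanceChanged;

  public Account(string accountNumber) => this.accountNumber = accountNumber;

  public void Deposit(decimal amount) => BalanceChanged?.Invoke(this, EventArgs.Empty);
}
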
Method: Methods are used to execute statements; a method defines and implements the expected behavior of the object and has a name, input parameters, and a return type. If two methods have the same name but different input parameters, they are said to be overloaded. Methods also come in several special forms:
Constructor: A constructor initializes the newly allocated object. When the new keyword is used in C#, the associated constructor is executed.
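
A minimal constructor sketch (the Person class is hypothetical):

public class Person
{
  public string Name { get; }

  // The constructor runs when 'new Person("Vahid")' is evaluated.
  public Person(string name) => Name = name;
}
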
Finalizer: These methods, also called destructors, are rarely used in the C# language. During execution, when an object is disposing and reclaiming memory, then these types of methods are executed.

class Car
{
  ~Car() // finalizer
  {
    // cleanup statements...
  }
}

In the preceding code, the finalizer implicitly invokes the Finalize method of the Object class, so the finalizer is effectively expanded into the following form:

protected override void Finalize()
{
  try
  {
    // Cleanup statements...
  }
  finally
  {
    base.Finalize();
  }
}

Property: The statements in this kind of member are executed when data is set or read. Behind the scenes, property data is usually stored in a field, although this is not required: the data can also be stored in an external data source or computed during execution. Properties are commonly used to encapsulate fields.

  public string FirstName { get; set; }

Indexer: The statements in this kind of member are executed when data is set or read through the [] indexing syntax.

class StringDataStore
{
  private string[] strArr = new string[10]; // internal data storage
  public string this[int index]
  {
    get => strArr[index];
    set => strArr[index] = value;
  }
}

Operator: The expressions in this type of method are executed when operators like + are used on class operands.

public class Box
{
  public double length, breadth, height;

  // Executed when the + operator is used on two Box operands.
  public static Box operator +(Box b, Box c)
  {
    Box box = new Box();
    box.length = b.length + c.length;
    box.breadth = b.breadth + c.breadth;
    box.height = b.height + c.height;
    return box;
  }
}

In addition to the preceding members, a class can also contain an inner (nested) class:

public class A
{
  public string GetName() => $"Vahid is {new B().GetAge()} years old";

  private class B
  {
    public int GetAge() => 10;
  }
}

Regardless of its members, part of encapsulation is assigning appropriate access levels to a class and its members. In C#, the access levels are as follows (a short sketch demonstrating them appears after the list):

  • Public: Members with this access level are available everywhere.

  • Private: Members with this access level are only available inside the class. This access level is the default for class members.

  • Protected: Members with this access level are only available inside the class and inside classes derived from it.

  • Internal: Members with this access level are only available inside the same assembly.

  • Internal protected (protected internal): Members with this access level are available within the same assembly or within classes derived from this class in any assembly; that is, the access is internal or protected.

  • Private protected: Members with this access level are available within the same class or within derived classes in the same assembly; that is, the access is internal and protected.
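
A minimal sketch of the access levels listed above (the AccessDemo class and its fields are hypothetical):

public class AccessDemo
{
  public int A;               // accessible everywhere
  private int B;              // accessible only inside AccessDemo
  protected int C;            // accessible inside AccessDemo and classes derived from it
  internal int D;             // accessible anywhere inside the same assembly
  protected internal int E;   // same assembly OR derived classes in any assembly
  private protected int F;    // derived classes, but only within the same assembly
}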

In addition to access levels, C# also has a series of modifiers through which you can adjust the definition of a class or its members. For example, using sealed makes it impossible to inherit from a class or to override a method. When a class is sealed, extension methods can still be used to extend its capabilities.

When a class is defined as static, it can no longer be instantiated, and its members are accessed directly through the class name. The abstract modifier, when applied to a class, turns it into an abstract class; when applied to other members, such as methods, it removes the implementation and requires child classes to provide one.
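
A minimal sketch of the sealed modifier combined with an extension method (the Logger type and LogError method are hypothetical, not from the book):

public sealed class Logger
{
  public void Log(string message) => Console.WriteLine(message);
}

// The sealed class cannot be inherited, but an extension method can still add behavior to it.
public static class LoggerExtensions
{
  public static void LogError(this Logger logger, string message) =>
    logger.Log($"[ERROR] {message}");
}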

Along with classes, C# has interfaces, which are very similar to abstract classes; by default, all members of an interface are abstract. Among the similarities between abstract classes and interfaces is that neither can be instantiated. Along with the similarities, they also have differences, including the following:

  • Interfaces can only inherit from interfaces, while abstract classes can inherit from other classes and implement different interfaces.

  • Abstract classes can include constructors and destructors, while this possibility is not available for interfaces

Since C# version 8, interfaces can have default implementations for methods, just like abstract classes.

public interface IPlayable
{
  void Play();
  void Pause();
  void Stop() // default implementation
  {
    Console.WriteLine("Default implementation of Stop.");
  }
}

In fact, an interface is a contract: when a class implements an interface, it guarantees that it provides a certain set of capabilities. Interfaces and abstract classes are used very widely in design patterns.

Object orientation SOLID principles


The C# programming language is an object-oriented language that provides good facilities for object-oriented programming, such as interfaces, inheritance, and polymorphism. The fact that C# provides such facilities does not guarantee that every piece of code follows object-oriented principles or has acceptable quality. Reaching an appropriate and correct object-oriented design in a large system is challenging and requires much scrutiny and precision.

Various principles have been introduced to help build systems according to correct object-oriented guidelines. One of the best known is SOLID, which actually consists of five different principles:

  • Single Responsibility Principle (SRP)

  • Open/Closed Principle (OCP)

  • Liskov Substitution Principle (LSP)

  • Interface Segregation Principle (ISP)

  • Dependency Inversion Principle (DIP)

The name SOLID is formed from the first letters of the preceding five principles. These principles help keep the code at an acceptable level of quality and maintainability. Each of them is explained in the following sections:

Single Responsibility Principle


This principle states that each class should have only one responsibility and, therefore, only one reason to change. When this principle is not followed, a class contains a large amount of code that must be modified whenever a related need arises in the system, and any change to the class means re-running its tests. By observing this principle, a big problem is divided into several smaller problems, each implemented as its own class; a change in the system then affects one of these small classes, and only the tests related to that class need to run again. SRP is very similar to the object-oriented principle known as Separation of Concerns (SoC).

For example, consider the following code:

public class WrongSRP
{
  public string FirstName { get; set; }
  public string LastName { get; set; }
  public string Email { get; set; }
  public static List<WrongSRP> Users { get; set; } = new List<WrongSRP>();

  public void NewUser(WrongSRP user)
  {
    Users.Add(user);
    SendEmail(user.Email, "Account Created", "Your new account created");
  }

  public void SendEmail(string email, string subject, string body)
  {
    // Send email
  }
}

Suppose it is requested to design and implement a mechanism to create a new user, and an email must be sent after the user account is created. The preceding code has two methods: NewUser to create a new user and SendEmail to send an email. These are two different behaviors in the same class that are not directly related to each other. Sending an email is not directly related to the user entity, and the presence of this method in this class violates the SRP: the class is no longer responsible for only one task, since apart from managing user-related requests, it is also responsible for sending emails. With this design, the code must change whenever the email-sending process changes, for example when the email service provider changes. To fix this, the preceding code can be rewritten as follows:

public class SRP
{
  public string FirstName { get; set; }
  public string Email { get; set; }
  public string LastName { get; set; }
  public static List<SRP> Users { get; set; } = new List<SRP>();

  public void NewUser(SRP user)
  {
    Users.Add(user);
    new EmailService()
      .SendEmail(user.Email, "Account Created", "Your new account created");
  }
}

public class EmailService
{
  public void SendEmail(string email, string subject, string body)
  {
    //Send email
  }
}

As can be seen in the preceding code, the task of sending emails has been transferred to the EmailService class, and with this rewrite, the SRP principle has been respected, and it will not have the problems of the previous code.

Open/Closed Principle

This principle states that a class should be open for extension and closed for modification. In other words, once a class has been implemented and other parts of the system have started using it, it should not be changed, because changes to the class can cause problems in those parts of the system. If new capabilities need to be added to the class, they should be added by extending it. In this case, the parts of the system that use the class are not affected by the change, and only the new code needs to be tested.

For example, suppose you are asked to write a class to calculate employee salaries. The initial requirement states that the working hours of every employee are multiplied by 1000 to calculate the salary. With this description, the following code is written:

public class WrongOCP
{
  public string Name { get; set; }
  public decimal CalculateSalary(decimal hours) => hours * 1000;
}

The preceding code has a method called CalculateSalary which calculates the salary of each person by receiving the working hours. After this code has been used for some time, it is said that a new type of employee called a manager has been defined in the system. For them, the working hours should be multiplied by 1500, and for others, it should be multiplied by 1000. Therefore, to cover this need, we change the preceding code as follows:


public class WrongOCP
{
  public string Name { get; set; }
  public string UserType { get; set; }

  public decimal CalculateSalary(decimal hours)
  {
    if (UserType == "Manager")
      return hours * 1500;
    return hours * 1000;
  }
}

To add this new feature, we changed the existing code, which violates the OCP. By making these changes, all parts of the system that use this class are affected. To satisfy the requirement while respecting the OCP, the code can be rewritten as follows:

public abstract class OCP
{
  protected OCP(string name) => Name = name;
  public string Name { get; set; }
  public abstract decimal CalculateSalary(decimal hours);
}

public class Manager : OCP
{
  public Manager(string name) : base(name) { }
  public override decimal CalculateSalary(decimal hours) => hours * 1500;
}

public class Employee : OCP
{
  public Employee(string name) : base(name) { }
  public override decimal CalculateSalary(decimal hours) => hours * 1000;
}

In the preceding code, if we want to add the role of a consultant, for example, it is enough to create a new class for the consultant and define how their salary is calculated, without touching the existing code. In other words, new functionality is added without changing the current code.
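
For instance, a hypothetical Consultant class could be added like this without modifying the existing classes (the 2000 rate is only an assumption for illustration):

public class Consultant : OCP
{
  public Consultant(string name) : base(name) { }
  public override decimal CalculateSalary(decimal hours) => hours * 2000;
}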

Liskov Substitution Principle


This principle states that objects of a child class should be able to replace objects of the parent class without changing the final result. To make this clear, assume we are asked to design an infrastructure through which the contents of various files can be read and written. It is also stated that a message should be displayed to the user before reading from and writing to text files. For this purpose, the following code can be considered:

public class FileManager
{
  public virtual void Read() => Console.WriteLine("Reading from file...");
  public virtual void Write() => Console.WriteLine("Writing to file...");
}

public class TextFileManager : FileManager
{
  public override void Read()
  {
    Console.WriteLine("Reading text file...");
    base.Read();
  }

  public override void Write()
  {
    Console.WriteLine("Writing to text file...");
    base.Write();
  }
}

After some time, it is stated that writing to XML files will no longer be supported, so there is no need to expose a write behavior for XML files. Under these conditions, the preceding code changes as follows:

public class FileManager
{
  public virtual void Read() => Console.WriteLine("Reading from file...");
  public virtual void Write() => Console.WriteLine("Writing to file...");
}

public class TextFileManager : FileManager
{
  public override void Read()
  {
    Console.WriteLine("Reading from text file...");
    base.Read();
  }

  public override void Write()
  {
    Console.WriteLine("Writing to text file...");
    base.Write();
  }
}

public class XmlFileManager : FileManager
{
  public override void Write() => throw new NotImplementedException();
}

Now that the preceding class has been added for XmlFileManager, the following problem appears:

FileManager fm = new XmlFileManager();
fm.Read();
fm.Write(); // Runtime error: NotImplementedException

In the preceding code, calling the Write method throws a NotImplementedException. The child class object (an instance of XmlFileManager) therefore cannot replace the parent class object (an instance of FileManager) without changing the final result: if we had worked only with the parent class (FileManager fm = new FileManager()), a result would have been produced. In this case, the LSP is violated.

To fix this structure, the code can be changed as follows:

public interface IFileReader
{
  void Read();
}

public interface IFileWriter
{
  void Write();
}

public class FileManager : IFileReader, IFileWriter
{
  public void Read() => Console.WriteLine("Reading from file...");
  public void Write() => Console.WriteLine("Writing to file...");
}

public class TextFileManager : IFileReader, IFileWriter
{
  public void Read() => Console.WriteLine("Reading text file...");
  public void Write() => Console.WriteLine("Writing to text file...");
}

public class XmlFileManager : IFileReader
{
  public void Read() => Console.WriteLine("Reading from file...");
}

In the preceding code, two interfaces called IFileReader and IFileWriter are introduced, and each class implements only the interfaces it actually supports. Since there is no need to write XML files, XmlFileManager only implements IFileReader. With this change, the code can be used as follows:

IFileReader xmlReader = new XmlFileManager();
xmlReader.Read();

As you can see, in the preceding code, the child class has replaced the parent class, and there has been no change in the result. In the prior structure, since XmlFileManager has not implemented the IFileWriter interface, there is no error or change in the final result. In this way, the LSP principle has been observed.


Interface segregation principle


This principle states that users of an interface should not be forced to implement features and methods they do not need. Suppose we are implementing a payroll system. To calculate employee salaries, a series of attributes is defined for employees as follows:

public interface IWorker
{
  public string Name { get; set; }
  public int MonthlySalary { get; set; }
  public int HourlySalary { get; set; }
  public int HoursInMonth { get; set; }
}

On the other hand, there are two types of employees in the system. Full-time and part-time employees. Salaries of full-time employees are calculated by adding 10% to MonthlySalary, and HourlySalary and HoursInMonth are useless for these employees. For part-time employees, salaries are calculated from the product of HourlySalary multiplied by HoursInMonth, and MonthlySalary is useless for this type of employee. To implement these types of employees, the following code is written:


public class FullTimeWorker : IWorker
{
  public string Name { get; set; }
  public int MonthlySalary { get; set; }

  public int HourlySalary
  {
    get => throw new NotImplementedException();
    set => throw new NotImplementedException();
  }

  public int HoursInMonth
  {
    get => throw new NotImplementedException();
    set => throw new NotImplementedException();
  }

  public int CalculateSalary() => MonthlySalary + (MonthlySalary * 10 / 100);
}

public class PartTimeWorker : IWorker
{
  public string Name { get; set; }

  public int MonthlySalary
  {
    get => throw new NotImplementedException();
    set => throw new NotImplementedException();
  }

  public int HourlySalary { get; set; }
  public int HoursInMonth { get; set; }

  public int CalculateSalary() => HourlySalary * HoursInMonth;
}

As can be seen in the preceding code, the FullTimeWorker and PartTimeWorker classes contain members that are useless to them, but since they must implement the IWorker interface, those members have to be provided. Hence, the ISP is violated. To fix this structure, smaller and more appropriate interfaces must be defined, such as the following:

public interface IBaseWorker
{
  public string Name { get; set; }
  int CalculateSalary();
}

public interface IFullTimeWorker : IBaseWorker
{
  public int MonthlySalary { get; set; }
}

public interface IPartTimeWorker : IBaseWorker
{
  public int HourlySalary { get; set; }
  public int HoursInMonth { get; set; }
}

Then the FullTimeWorker and PartTimeWorker classes can be implemented as follows:

public class FullTimeWorker : IFullTimeWorker
{
  public string Name { get; set; }
  public int MonthlySalary { get; set; }
  public int CalculateSalary()=>MonthlySalary+(MonthlySalary * 10 / 100);
}

public class PartTimeWorker : IPartTimeWorker
{
  public string Name { get; set; }
  public int HourlySalary { get; set; }
  public int HoursInMonth { get; set; }
  public int CalculateSalary() => HourlySalary * HoursInMonth;
}

Now, the FullTimeWorker class has implemented the IFullTimeWorker interface. It does not need to provide its implementation for the HourlySalary and HoursInMonth features. The same condition is true for PartTimeWorker class and IPartTimeWorker interface. Therefore, with these changes, the ISP principle has been observed.


Dependency Inversion Principle


This principle states that high-level modules and classes should not depend on low-level modules and classes. In other words, a high-level module should not depend directly on the details of a low-level module; the bridge between the two should be formed only through abstractions. These abstractions should not depend on the details; the details should depend on the abstractions. In this way, the code is easier to extend and maintain. For example, consider the following code:

public class User
{
  public string FirstName { get; set; }
  public string Email { get; set; }
  public static List<User> Users { get; set; } = new List<User>();
  public void NewUser(User user)
  {
    Users.Add(user);
    new EmailService()
    .SendEmail(user.Email,"Account Created","Your new account created");
  }
}

public class EmailService
{
  public void SendEmail(string email, string subject, string body)
  {
    //Send email
  }
}

In the preceding code, the high-level User class depends directly on the low-level EmailService class, which makes the code harder to maintain and extend; with this design, the DIP is not met. To comply with the DIP, the code can be rewritten as follows:

public class User
{
  private readonly IEmailService _emailService;
  public string FirstName { get; set; }
  public string Email { get; set; }
  public static List<User> Users { get; set; } = new List<User>();
  public User(IEmailService emailService)=>this._emailService=emailService;
  public void NewUser(User user)
  {
    Users.Add(user);
    _emailService
    .SendEmail(user.Email,"Account Created","Your new account created");
  }
}

public interface IEmailService
{
  void SendEmail(string email, string subject, string body);
}

public class EmailService : IEmailService
{
  public void SendEmail(string email, string subject, string body)
  {
    //Send email
  }
}

In the preceding code, the User class is dependent on the IEmailService interface, and the EmailService class has also implemented this interface. In this way, while complying with DIP, code maintenance and development are improved.
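
As a short usage sketch (the wiring is done manually here; in a real application a dependency injection container would typically supply the implementation):

IEmailService emailService = new EmailService();
User user = new User(emailService);
user.NewUser(user);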

UML class diagram


UML is a standard modeling language that consists of a set of diagrams. These diagrams help software developers to define software requirements, depict them and document them after construction. The diagrams in UML not only help software engineers during the software production process but also allow business owners and analysts to understand and model their needs more accurately.


UML is very important in the development of object-oriented software, and for this purpose it uses a series of graphical symbols. With the help of UML models, team members can discuss design and architecture more precisely and fix possible defects.

During the past years, UML has undergone various changes, which can be followed in the figure:


Figure 1.11: UML versions

When UML is examined and studied, various diagrams can be seen. The reason for this diversity is that different people participate in the production process, and each person sees the product from a different angle according to their role in the team. For example, the use that a programmer makes of UML diagrams is very different from the use made by an analyst.


In a general classification, UML diagrams can be divided into two main categories:

  1. Structural diagrams: These diagrams show the static structure of the system at different levels of abstraction and implementation, and the relationships between its parts. The following seven UML diagrams are structural:
  • Class Diagram
  • Component Diagram
  • Deployment Diagram
  • Object Diagram
  • Package Diagram
  • Composite Structure Diagram
  • Profile Diagram
  2. Behavioral diagrams: These diagrams show the dynamic behavior of objects in the system, usually as a series of changes over time. The behavioral diagrams are as follows:
  • Use Case Diagram
  • Activity Diagram
  • State Machine Diagram
  • Sequence Diagram
  • Communication Diagram
  • Interaction Overview Diagram
  • Timing Diagram

Class diagram


This diagram is one of the most popular and widely used UML diagrams. The class diagram describes the different types in the system and the static relationships between them. Also, with the help of this diagram, you can see the characteristics and behaviors of each class and even define limits on the relationship between classes. The following figure shows a class in a class diagram:

Figure 1.12: Class in a Class Diagram

As you can see, each class has a name (Class Name), some characteristics, and behaviors. Properties are given in the upper part of the class (prop1 and prop2). Behaviors are also given in the lower part (op1 and op2).

Characteristics in the class diagram are divided into two categories:

  • Attributes: An attribute is presented as a line of text inside the class box, in the following format (only the name is required):
    visibility name : type multiplicity = default {property-string}
    For example, in the preceding class, a property called prop1 is defined; its access level is private, and its type is an array of int.

  • Relationships: Another way to display features is to use the relationship indicator: two classes are connected through a line, and the relationship can be one-way or two-way.

Behaviors are the things an object of the class should be able to do. Methods are displayed in the class diagram in the following format:

    visibility name (parameter-list) : return-type {property-string}

For example, in the preceding class diagram, a method named op1 is defined with a public access level and a Boolean return type. Also, an input parameter called param1 is defined for the op2 method.
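
A rough C# equivalent of the class described in Figure 1.12 (the type of prop2 and the method bodies are assumptions, since the text does not specify them):

public class ClassName
{
  private int[] prop1;              // "- prop1 : int" — private attribute, array of int
  private string prop2;             // type of prop2 is assumed, not specified in the text

  public bool op1() => true;        // "+ op1() : Boolean" — public operation with a Boolean return type
  public void op2(int param1) { }   // op2 takes an input parameter named param1
}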

Each class diagram usually consists of several classes or interfaces and connections between them. There may be an inheritance relationship between classes. To show this type of relationship, Generalization is used:

Figure 1.13: Generalization in Class Diagram

For example, the preceding diagram shows that Class2 inherits from Class1, so all the features and behaviors available to Class1 are also available to Class2.

For another example, a class may use or depend on another class. To show this type of relationship, a dependency is used. In this kind of relationship, changes on the supplier side usually lead to changes on the client side. Classes can depend on each other for different reasons, and dependencies come in different kinds; one of the most common, used many times in this chapter, is the use dependency:

Figure 1.14: Use relation in Class Diagram

In the preceding figure, Class2 has the role of Supplier, and Class1 has the role of Client. According to the preceding diagram, Class1 is dependent on Class2 through the use of dependency. In other words, Class1 uses Class2.

During software development, apart from concrete classes, we may also deal with abstract classes or interfaces:

Figure 1.15: Abstract classes and interfaces in Class Diagram

In the preceding diagram, there is a concrete class called Class1, which inherits from AbstractClass; the name of the abstract class is written in italics. Class1 also implements the IClass interface, which is visually easy to recognize.
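
A minimal C# sketch of the relationships described for Figure 1.15 (the members are omitted, since only the relationships are specified):

public interface IClass { }

public abstract class AbstractClass { }

public class Class1 : AbstractClass, IClass { }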

Conclusion


In this chapter, software architecture and design patterns, the .NET framework, and UML were introduced in general. According to the points mentioned in this chapter, it should be possible to identify good architectural factors and produce software in accordance with some important programming principles.


In the next chapter, the first category of GoF design patterns (Creational design patterns) will be introduced and examined, and it will be investigated how to manage the object initialization according to different creational design patterns.


.NET 7 Design Patterns In-Depth Table of Contents

.NET 7 Design Patterns In-Depth

Enhance code efficiency and maintainability with .NET Design Patterns

Vahid Farahmandian

Table of Contents


  1. Introduction to Design Patterns
  2. Creational Design Patterns
  3. Structural Design Patterns
  4. Behavioral Design Patterns – Part I
  5. Behavioral Design Patterns – Part II
  6. Domain Logic Design Patterns
  7. Data Source Architecture Design Patterns
  8. Object-Relational Behaviors Design Patterns
  9. Object-Relational Structures Design Patterns
  10. Object-Relational Metadata Mapping Design Patterns
  11. Web Presentation Design Patterns
  12. Distribution Design Patterns
  13. Offline Concurrency Design Patterns
  14. Session State Design Patterns
  15. Base Design Patterns


About the Author

Vahid Farahmandian, who currently works as the CEO of Spoota company, was born in Urmia, Iran, in 1989. He got a BSc in Computer Software Engineering from Urmia University and an MSc degree in Medical Informatics from Tarbiat Modares University. He has more than 17 years of experience in the information and communication technology field and more than a decade of experience in teaching different courses on DevOps, programming languages, and databases in various universities, institutions, and organizations in Iran. Vahid is also an active speaker at international shows and conferences, including Microsoft .NET Live TV, Azure, .NET, and SQL Server conferences. The content published by Vahid is available through YouTube and Medium and has reached thousands of viewers.

About the Reviewers


Kratika Jain is a senior software developer specializing in .NET technologies. She has a strong understanding of C#, ASP.NET, MVC, .NET Core, SQL, and Entity Framework. She has participated in agile project management, employs continuous integration/deployment (CI/CD) using Azure DevOps, and delivered robust and scalable software solutions. As a meticulous technical reviewer, she ensures accuracy and quality in technical content. Her attention to detail allows her to identify potential pitfalls and offer valuable insights for improvement. With her expertise in .NET development and dedication to enhancing technical content, she contributes to empowering developers and enabling their success in mastering the .NET ecosystem. She is a natural problem solver, team player, adaptable, and always seeking new challenges. You can connect with her on LinkedIn at www.linkedin.com/in/kratikajain29/ or on Twitter via @_KratikaJain.


Gourav Garg is a Senior Software Engineer from India who has been helping companies build scalable products. He holds a bachelor's degree in software engineering and has been programming for 11 years. He is proficient in .NET, C#, and Entity Framework and has experience delivering several products and many features in his work.

Gourav has also experience with JavaScript-related tech stacks like Angular and React. He has developed quite a few open-source libraries using ES6 and Angular.


Acknowledgement

There are a few people I want to thank for the continued and ongoing support they have given me during the writing of this book. First and foremost, I would like to thank my parents for continuously encouraging me to write the book — I could have never completed this book without their support.


I also need to thank my dear wife, who has always supported me. Finally, I would like to thank all my friends and colleagues who have been by my side and supported me during all these years. I really could not be where I am today without the support of all of them.

My gratitude also goes to the team at BPB Publications, who supported me and allowed me to write and finish this book.

Preface


This book has tried to present important design patterns (including GoF design patterns and Patterns of Enterprise Application Architecture) in software production with a simple approach, along with practical examples using .NET 7.0 and C#.


This book will be useful for software engineers, programmers, and system architects. Readers of this book are expected to have intermediate knowledge of C#.NET programming language, .NET 7.0, and UML.


Practical and concrete examples have been used in writing this book. Each design pattern begins with a short descriptive sentence and is then explained as a concrete scenario. Finally, each design pattern's key points, advantages, disadvantages, applicability, and related patterns are stated.


This book is divided into 15 chapters, including:


Chapter 1: Introduction to Design Patterns- In this chapter, an attempt has been made to explain why design patterns are important and their role in software architecture, and basically, what is the relationship between design patterns, software design problems, and software architecture? In this chapter, various topics such as Design Principles, including SOLID, KISS, DRY, etc., and Introduction to .NET and UML are covered too.

Chapter 2: Creational Design Patterns- Creational design patterns, as the name suggests, deal with the construction of objects and how to create instances. In the C# programming language, wherever an object is needed, it can be created using the new keyword along with the class name. However, there are situations where the way the object is made must be hidden from the user's view; in these cases, creational design patterns can be useful. This chapter introduces the creational design patterns, one of the categories of GoF design patterns, and explains which problems they help to solve.

Chapter 3: Structural Design Patterns- Structural design patterns deal with the relationships between classes in the system; this category of design patterns determines how different objects can together form a more complex structure. This chapter introduces the structural design patterns, one of the categories of GoF design patterns, and explains which problems they help to solve.

Chapter 4: Behavioral Design Patterns - Part I- This category of design patterns deals with the behavior of objects and classes. Its main goal and focal point is performing work between different objects using different methods and algorithms; not only the objects and classes themselves are discussed, but also the relationships between them. This chapter introduces the most popular and well-known behavioral design patterns, one of the categories of GoF design patterns, and explains which problems they help to solve.

Chapter 5: Behavioral Design Patterns - Part II- Continuing from the previous chapter, this chapter discusses the more complex and less commonly used behavioral design patterns and shows how they can help in dealing with the behavior of objects and classes. Although these patterns are less well known or less used, they can solve much more complex problems in a very simple way. This chapter introduces the less popular behavioral design patterns, one of the categories of GoF design patterns, and explains which problems they help to solve.

Chapter 6: Domain Logic Design Patterns- To organize domain logic, Domain Logic design patterns can be used. The choice of which design pattern to use depends on the level of logical complexity that we want to implement. The important thing here is to understand when logic is complex and when it is not! Understanding this point is not an easy task, but by using domain experts, or more experienced people, it is possible to obtain a better approximation. In this chapter, it is said how to organize the logic of the domain. And in this way, what are the design patterns that help us have a more appropriate and better design? These design patterns are among the PoEAA design patterns.

Chapter 7: Data Source Architectural Design Patterns - One of the challenges of designing the data access layer is implementing how to communicate with the data source. This implementation needs to address issues such as how to organize SQL code, how to manage the complexity of communicating with the data of each domain, and the mismatch between the database structure and the domain model. This chapter shows how communication with data sources can be designed and implemented in a suitable way within the software architecture. These design patterns are among the PoEAA design patterns.

Chapter 8: Object-Relational Behaviors Design Patterns - Another challenge when communicating with the database is paying attention to behaviors, meaning how data should be fetched from the database and how it should be stored in it. For example, suppose a lot of data is fetched from the database and some of it has changed. It is very important to answer the question of which data has changed and how to store the changes back in the database without disturbing data consistency. Another challenge is that when the Domain Model is used, most models have relationships with other models, and reading one model can lead to fetching all its relationships, which again jeopardizes efficiency. This chapter explains how to connect the business to data sources in a proper way. These design patterns are among the PoEAA design patterns.

Chapter 9: Object-Relational Structures Design Patterns - Another challenge in mapping the domain to the database is how to map a record in the database to an object. The next challenge is how to implement all types of relationships, including one-to-one, one-to-many, and many-to-many relationships. We may also face data that cannot and should not be mapped to any table, and we should account for this in our design. Finally, the database structure may be implemented using relationships such as inheritance, in which case it must be determined how this kind of implementation should be mapped to the tables in the database. This chapter explains how to implement the data source structure in the software. These design patterns are among the PoEAA design patterns.

Chapter 10: Object-Relational Metadata Mapping Design Patterns - When producing software, we need to implement the mapping between tables and classes. This is a process that contains a significant amount of repetitive code, which increases production time. It is therefore necessary to stop writing duplicate code and extract the relationships from metadata. Once this challenge is solved, queries can be generated automatically, and when queries can be generated automatically, the database can be hidden from the rest of the program. This chapter describes how to store object metadata in the data source, as well as how to create and manage queries to the data source. These design patterns are among the PoEAA design patterns.

Chapter 11: Web Presentation Design Patterns - One of the most important changes in applications in recent years is the spread of web-based user interfaces. These interfaces come with various advantages, including that the client often does not need to install a special program to use them. The creation of web applications is often accompanied by the generation of server-side code. A request arrives at the web server, which then delivers it, based on its content, to the web application or the corresponding website. To separate the details related to the view from the data structure and logic, you can benefit from the design patterns presented in this chapter. This chapter discusses the creation and handling of user interface requests and explains how to prepare and implement the view and how to manage requests in a suitable way. These design patterns are among the PoEAA design patterns.

Chapter 12: Distribution Design Patterns - One of the problems in implementing communication between systems is choosing the right granularity for that communication. This level should be such that neither effectiveness nor efficiency over the network is compromised, and the data structure delivered to the client is the structure the client expects and can use. This chapter discusses design patterns that can be useful in building distributed software. These design patterns are among the PoEAA design patterns.

Chapter 13: Offline Concurrency Design Patterns - One of the most complicated parts of software production is dealing with topics related to concurrency. Whenever several threads or processes have access to the same data, problems related to concurrency are possible, so concurrency must be considered in software production. Of course, there are different solutions at different levels for handling and managing concurrency in enterprise software applications; for example, you can use transactions, the internal features of relational databases, and so on. This does not mean, however, that concurrency management can simply be delegated to these methods and tools. This chapter introduces design patterns that can be useful in solving these problems. These design patterns are among the PoEAA design patterns.

Chapter 14: Session State Design Patterns - When we talk about transactions, we often talk about system transactions and business transactions. This discussion leads to the topic of stateless and stateful sessions. Obviously, it should first be determined what is meant by stateful or stateless. When we look at an object, it consists of a series of data (state) and a series of behaviors. If the object does not contain any data, it carries no state with it. Bringing this discussion to enterprise software, stateless means that the server does not keep any data of a request between two requests. If the server needs to store data between two requests, we are dealing with a stateful mode. This chapter talks about how to manage user sessions and raises some points regarding stateless and stateful sessions. These design patterns are among the PoEAA design patterns.

Chapter 15: Base Design Patterns - When designing software, we need to use different design patterns. To use these patterns, it is also necessary to use a series of base design patterns in order to finally arrive at a suitable and better design. In fact, base design patterns provide the foundation for designing and using other patterns. This chapter introduces a series of base design patterns and shows how using them can affect the use of other design patterns. These design patterns are among the PoEAA design patterns.

Code Bundle and Coloured Images


Please follow the link to download the Code Bundle and the Coloured Images of the book: https://rebrand.ly/g3mn07e

The code bundle for the book is also hosted on GitHub at https://github.com/bpbpublications/.NET-7-Design-Patterns-In-Depth. In case there's an update to the code, it will be updated on the existing GitHub repository.

We have code bundles from our rich catalogue of books and videos available at https://github.com/bpbpublications. Check them out!

Errata

We take immense pride in our work at BPB Publications and follow best practices to ensure the accuracy of our content and provide an engaging reading experience to our subscribers. Our readers are our mirrors, and we use their inputs to reflect on and improve any human errors that may have occurred during the publishing process. To let us maintain the quality and help us reach out to any readers who might be having difficulties due to any unforeseen errors, please write to us at: errata@bpbonline.com


Your support, suggestions, and feedback are highly appreciated by the BPB Publications’ Family.


ASP.NET Core in Action - 36 Testing ASP.NET Core applications

36 Testing ASP.NET Core applications‌

This chapter covers

• Writing unit tests for custom middleware, API controllers, and minimal API endpoints
• Using the Test Host package to write integration tests
• Testing your real application’s behavior with WebApplicationFactory
• Testing code dependent on Entity Framework Core with the in-memory database provider

In chapter 35 I described how to test .NET 7 applications using the xUnit test project and the .NET Test software development kit (SDK). You learned how to create a test project, add a project reference to your application, and write unit tests for services in your app.

In this chapter we focus on testing ASP.NET Core applications specifically. In sections 36.1 and 36.2 we’ll look at how to test common features of your ASP.NET Core apps: custom middleware, API controllers, and minimal API endpoints. I show you how to write isolated unit tests for both, much like you would any other service, and I’ll point out the tripping points to watch for.

To ensure that components work correctly, it’s important to test them in isolation. But you also need to test that they work correctly in a middleware pipeline. ASP.NET Core provides a handy Test Host package that lets you easily write these integration tests for your components. You can even go one step further with the WebApplicationFactory helper class and test that your app is working correctly. In section 36.3 you’ll see how to use WebApplicationFactory to simulate requests to your application and verify that it generates the correct response.

In the final section of this chapter I’ll demonstrate how to use the SQLite database provider for Entity Framework Core (EF Core) with an in-memory database. You can use this provider to test services that depend on an EF Core DbContext without having to use a real database. That avoids the pain of maintaining database infrastructure, resetting the database between tests, and dealing with different people having slightly different database configurations.

In chapter 35 I showed how to write unit tests for an exchange-rate calculator service, such as you might find in your application’s domain model. If well designed, domain services are normally relatively easy to unit-test. But domain services only make up a portion of your application. It can also be useful to test your ASP.NET Core-specific constructs, such as custom middleware, as you’ll see in the next section.

36.1 Unit testing custom middleware‌

In this section you’ll learn how to test custom middleware in isolation. You’ll see how to test whether your middleware handled a request or whether it called the next middleware in the pipeline. You’ll also see how to read the response stream for your middleware.

In chapter 31 you saw how to create custom middleware and encapsulate middleware as a class with an Invoke function. In this section you’ll create unit tests for a simple health-check middleware component, similar to the one in chapter 31. This is a basic implementation, but it demonstrates the approach you can take for more complex middleware components.

The middleware you’ll be testing is shown in listing 36.1. When invoked, this middleware checks that the path starts with /ping and, if it does, returns a plain text "pong" response. If the request doesn’t match, it calls the next middleware in the pipeline (the provided RequestDelegate).

Listing 36.1 StatusMiddleware to be tested, which returns a "pong" response

public class StatusMiddleware
{
private readonly RequestDelegate _next; ❶
public StatusMiddleware(RequestDelegate next) ❶
{
_next = next;
}
public async Task Invoke(HttpContext context) ❷
{
if(context.Request.Path.StartsWithSegments("/ping")) ❸
{ ❸
context.Response.ContentType = "text/plain"; ❸
await context.Response.WriteAsync("pong"); ❸
return; ❸
} ❸
await _next(context); ❹
}
}

❶ The RequestDelegate representing the rest of the middleware pipeline
❷ Called when the middleware is executed
❸ If the path starts with “/ping”, a “pong” response is returned . . .
❹ . . . otherwise, the next middleware in the pipeline is invoked.

In this section, you’re going to test two simple cases:

• When a request is made with a path of "/ping"
• When a request is made with a different path

WARNING Where possible, I recommend that you don’t directly inspect paths in your middleware like this. A better approach is to use endpoint routing instead, as I discussed in chapter 31. The middleware in this section is for demonstration purposes only.
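For comparison, here is a minimal sketch of that endpoint-routing alternative using minimal hosting. It is not code from the book’s sample app, just an illustration of the recommendation above:

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Routing matches the /ping path for us; no manual Path inspection is needed
app.MapGet("/ping", () => Results.Text("pong", "text/plain"));

app.Run();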

Middleware is slightly complicated to unit-test because the HttpContext object is conceptually a big class. It contains all the details for the request and the response, which can mean there’s a lot of surface area for your middleware to interact with. For that reason, I find unit tests tend to be tightly coupled to the middleware implementation, which is generally undesirable.

For the first test, you’ll look at the case where the incoming request Path doesn’t start with /ping. In this case,StatusMiddleware should leave the HttpContext unchanged and call the RequestDelegate provided in the constructor, which represents the next middleware in the pipeline.

You could test this behavior in several ways, but in listing 36.2 you test that the RequestDelegate (essentially a one-parameter function) is executed by setting a local variable to true. In the Assert at the end of the method, you verify that the variable was set and therefore that the delegate was invoked. To invoke StatusMiddleware, create and pass in a DefaultHttpContext, which is an implementation of HttpContext.

NOTE The DefaultHttpContext derives from HttpContext and is part of the base ASP.NET Core framework abstractions. If you’re so inclined, you can explore the source code for it on GitHub at http://mng.bz/MB9Q.

Listing 36.2 Unit testing StatusMiddleware when a nonmatching path is provided

[Fact]
public async Task ForNonMatchingRequest_CallsNextDelegate()
{
var context = new DefaultHttpContext(); ❶
context.Request.Path = "/somethingelse"; ❶
var wasExecuted = false; ❷
RequestDelegate next = (HttpContext ctx) => ❸
{ ❸
wasExecuted = true; ❸
return Task.CompletedTask; ❸
}; ❸
var middleware = new StatusMiddleware(next); ❹
await middleware.Invoke(context); ❺
Assert.True(wasExecuted); ❻
}

❶ Creates a DefaultHttpContext and sets the path for the request
❷ Tracks whether the RequestDelegate was executed
❸ The RequestDelegate representing the next middleware should be invoked in
this example.
❹ Creates an instance of the middleware, passing in the next RequestDelegate
❺ Invokes the middleware with the HttpContext; should invoke the
RequestDelegate
❻ Verifies that RequestDelegate was invoked

When the middleware is invoked, it checks the provided Path and finds that it doesn’t match the required value of /ping. The middleware therefore calls the next RequestDelegate and returns.

The other obvious case to test is when the request Path is "/ping"; the middleware should generate an appropriate response. You could test several characteristics of the response:

• The response should have a 200 OK status code.
• The response should have a Content-Type of text/plain.
• The response body should contain the "pong" string.

Each of these characteristics represents a different requirement, so you’d typically codify each as a separate unit test. This makes it easier to tell exactly which requirement hasn’t been met when a test fails. For simplicity, in listing 36.3 I show all these assertions in the same test.

The positive case unit test is made more complex by the need to read the response body to confirm it contains "pong". DefaultHttpContext uses Stream.Null for the Response.Body object, which means anything written to Body is lost. To capture the response and read it out to verify the contents, you must replace the Body with a MemoryStream. After the middleware executes, you can use a StreamReader to read the contents of the MemoryStream into a string and verify it.

Listing 36.3 Unit testing StatusMiddleware when a matching Path is provided

[Fact]
public async Task ReturnsPongBodyContent()
{
var bodyStream = new MemoryStream(); ❶
var context = new DefaultHttpContext(); ❶
context.Response.Body = bodyStream; ❶
context.Request.Path = "/ping"; ❷
RequestDelegate next = (ctx) => Task.CompletedTask; ❸
var middleware = new StatusMiddleware(next: next); ❸
await middleware.Invoke(context); ❹
string response; ❺
bodyStream.Seek(0, SeekOrigin.Begin); ❺
using (var stringReader = new StreamReader(bodyStream)) ❺
{ ❺
response = await stringReader.ReadToEndAsync(); ❺
} ❺
Assert.Equal("pong", response); ❻
Assert.Equal("text/plain", context.Response.ContentType); ❼
Assert.Equal(200, context.Response.StatusCode); ❽
}

❶ Creates a DefaultHttpContext and initializes the body with a MemoryStream
❷ The path is set to the required value for the StatusMiddleware.
❸ Creates an instance of the middleware and passes in a simple RequestDelegate
❹ Invokes the middleware
❺ Rewinds the MemoryStream and reads the response body into a string
❻ Verifies that the response has the correct value
❼ Verifies that the ContentType response is correct
❽ Verifies that the Status Code response is correct

As you can see, unit testing middleware requires a lot of setup. On the positive side, it allows you to test your middleware in isolation, but in some cases, especially for simple middleware without any dependencies on databases or other services, integration testing can (somewhat surprisingly) be easier. In section 36.3 you’ll create integration tests for this middleware to see the difference.

Custom middleware is common in ASP.NET Core projects, but far more common are Razor Pages, API controllers, and minimal API endpoints. In the next section you’ll see how you can unit test them in isolation from other components.

36.2 Unit testing API controllers and minimal API endpoints‌

In this section you’ll learn how to unit-test API controllers and minimal API endpoints. You’ll learn about the benefits and difficulties of testing these components in isolation and the situations when it can be useful.

Unit tests are all about isolating behavior; you want to test only the logic contained in the component itself, separate from the behavior of any dependencies. The Razor Pages and MVC/API frameworks use the filter pipeline, routing, and model-binding systems, but these are all external to the controller or PageModels. The PageModels and controllers themselves are responsible for a limited number of things:

• For invalid requests (that have failed validation, for example), return an appropriate ActionResult (API controllers) or redisplay a form (Razor Pages).

• For valid requests, call the required business logic services and return an appropriate ActionResult (API controllers), or show or redirect to a success page (Razor Pages).

• Optionally, apply resource-based authorization as required.

Controllers and Razor Pages generally shouldn’t contain business logic themselves; instead, they should call out to other services. Think of them more as orchestrators, serving as the intermediary between the HTTP interfaces your app exposes and your business logic services.

If you follow this separation, you’ll find it easier to write unit tests for your business logic, and you’ll benefit from greater flexibility when you want to change your controllers to meet your needs. With that in mind, there’s often a drive to make your controllers and page handlers as thin as possible, to the point where there’s not much left to test!

TIP One of my first introductions to this idea was a series of posts by Jimmy Bogard. The following link points to the last post in the series, but it contains links to all the earlier posts too. Bogard is also behind the MediatR library (https://github.com/jbogard/MediatR), which makes creating thin controllers even easier. See “Put your controllers on a diet: POSTs and commands”: http://mng.bz/7VNQ.

All that said, controllers and actions are classes and methods, so you can write unit tests for them. The difficulty is deciding what you want to test. As an example, we’ll consider the simple API controller in the following listing, which converts a value using a provided exchange rate and returns a response.

Listing 36.4 The API controller under test

[Route("api/[controller]")]
public class CurrencyController : ControllerBase
{
private readonly CurrencyConverter _converter ❶
= new CurrencyConverter(); ❶
[HttpGet]
public ActionResult<decimal> Convert(InputModel model) ❷
{
if (!ModelState.IsValid) ❸
{ ❸
return BadRequest(ModelState); ❸
} ❸
decimal result = _converter.ConvertToGbp(model); ❹
return result; ❺
}
}

❶ The CurrencyConverter would normally be injected using DI and is created here
for simplicity.
❷ The Convert method returns an ActionResult.
❸ If the input is invalid, returns a 400 Bad Request result, including the ModelState
❹ If the model is valid, calculates the result
❺ Returns the result directly
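
The listing assumes an input model and converter along the following lines. Their exact definitions aren’t shown in this chapter, so treat this as a hypothetical sketch that merely satisfies the calls made by the controller:

using System.ComponentModel.DataAnnotations;

public class InputModel
{
    public decimal Value { get; set; }

    // Matches the validation message used later in listing 36.6
    [Range(0.000001, double.MaxValue,
        ErrorMessage = "Exchange rate must be greater than zero")]
    public decimal ExchangeRate { get; set; }

    public int DecimalPlaces { get; set; }
}

public class CurrencyConverter
{
    // Applies the exchange rate and rounds to the requested number of decimal places
    public decimal ConvertToGbp(InputModel model)
        => decimal.Round(model.Value * model.ExchangeRate, model.DecimalPlaces);
}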

Let’s first consider the happy path, when the controller receives a valid request. The following listing shows that you can create an instance of the API controller, call an action method, and receive an ActionResult response.

Listing 36.5 A simple API controller unit test

public class CurrencyControllerTest
{
[Fact]
public void Convert_ReturnsValue()
{
var controller = new CurrencyController(); ❶
var model = new InputModel ❶
{ ❶
Value = 1, ❶
ExchangeRate = 3, ❶
DecimalPlaces = 2, ❶
}; ❶
ActionResult<decimal> result = controller.Convert(model); ❷
Assert.NotNull(result); ❸
}
}

❶ Creates an instance of the CurrencyController to test and a model to send to the API
❷ Invokes the Convert action method and captures the value returned
❸ Asserts that the IActionResult is not null

An important point to note here is that you’re testing only the return value of the action, the ActionResult, not the response that’s sent back to the user. The process of serializing the result to the response is handled by the Model-View-Controller (MVC) formatter infrastructure, as you saw in chapter 9, not by the controller.

When you unit-test controllers, you’re testing them separately from the MVC infrastructure, such as formatting, model binding, routing, and authentication. This is obviously by design, but as with testing middleware in section 36.1, it can make testing some aspects of your controller somewhat complex.

Consider model validation. As you saw in chapter 6, one of the key responsibilities of action methods and Razor Page handlers is to check the ModelState.IsValid property and act accordingly if a binding model is invalid. Testing that your controllers and PageModels handle validation failures correctly seems like a good candidate for a unit test.

Unfortunately, things aren’t simple here either. The Razor Page/MVC framework automatically sets the ModelState property as part of the model-binding process. In practice, when your action method or page handler is invoked in your running app, you know that the ModelState will match the binding model values. But in a unit test, there’s no model binding, so you must set the ModelState yourself manually.

Imagine you’re interested in testing the error path for the controller in listing 36.4, where the model is invalid and the controller should return BadRequestObjectResult. In a unit test, you can’t rely on the ModelState property being correct for the binding model. Instead, you must add a model-binding error to the controller’s ModelState manually before calling the action, as shown in the following listing.

Listing 36.6 Testing handling of validation errors in MVC controllers

[Fact]
public void Convert_ReturnsBadRequestWhenInvalid()
{
var controller = new CurrencyController(); ❶
var model = new InputModel ❷
{ ❷
Value = 1, ❷
ExchangeRate = -2, ❷
DecimalPlaces = 2, ❷
}; ❷
controller.ModelState.AddModelError( ❸
nameof(model.ExchangeRate), ❸
"Exchange rate must be greater than zero" ❸
); ❸
ActionResult<decimal> result = controller.Convert(model); ❹
Assert.IsType<BadRequestObjectResult>(result.Result); ❺
}

❶ Creates an instance of the Controller to test
❷ Creates an invalid binding model by using a negative ExchangeRate
❸ Manually adds a model error to the Controller’s ModelState. This sets ModelState.IsValid to false.
❹ Invokes the action method, passing in the binding models
❺ Verifies that the action method returned a BadRequestObjectResult

NOTE In listing 36.6, I passed in an invalid model, but I could just as easily have passed in a valid model or even null; the controller doesn’t use the binding model if the ModelState isn’t valid, so the test would still pass. But if you’re writing unit tests like this one, I recommend trying to keep your model consistent with your ModelState; otherwise, your unit tests won’t be testing a situation that occurs in practice.

I tend to shy away from unit testing API controllers directly in this way. As you’ve seen with model binding, the controllers are somewhat dependent on earlier stages of the MVC framework, which you often need to emulate. Similarly, if your controllers access the HttpContext (available on the ControllerBase base classes), you may need to perform additional setup.

NOTE You can read more about why I generally don’t unit-test my controllers in my blog article “Should you unit-test API/MVC controllers in ASP.NET Core?” at http://mng.bz/YqMo.

So what about minimal API endpoints? There’s both good news and bad news here. On one hand, minimal API endpoints are simple lambda functions, so you can unit-test them, but these tests also suffer from many drawbacks:

• You must write your endpoint handlers as static or instance methods on a class, not as lambda methods or local functions, so that you can reference them from the test project.

• You are testing only the execution of the endpoint handler, outside any filters applied to the endpoint or route group that execute in the real app.

• You are not testing model-binding or result serialization—two common sources of errors in practice.

• If your endpoint is simple, as it should be, there’s not much to test!

I find unit tests for minimal APIs to be overly restrictive and limited in value, so I avoid them, but you can see an example of a minimal API unit test in the source code for this chapter.
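
For completeness, here is a rough sketch of what such a test can look like when the handler is written as a static method, per the first bullet above. The StatusEndpoints class is hypothetical, not part of the book’s sample code:

// Hypothetical handler class; in Program.cs it would be mapped with app.MapGet("/ping", StatusEndpoints.Ping)
public static class StatusEndpoints
{
    public static string Ping() => "pong";
}

public class StatusEndpointsTests
{
    [Fact]
    public void Ping_ReturnsPong()
    {
        // Only the handler logic runs; routing, filters, and result serialization are not exercised
        Assert.Equal("pong", StatusEndpoints.Ping());
    }
}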

NOTE I haven’t discussed Razor Pages much in this section, as they suffer from many of the same problems, in that they are dependent on the supporting infrastructure of the framework. Nevertheless, if you do wish to test your Razor Page PageModel, you can read about it in Microsoft’s “Razor Pages unit tests in ASP.NET Core” documentation: http://mng.bz/GxmM.

Instead of using unit testing, I try to keep my minimal API endpoints, controllers, and Razor Pages as thin as possible. I push as much of the behavior in these classes into business logic services that can be easily unit-tested, or into middleware and filters, which can be more easily tested independently.

NOTE This is a personal preference. Some people like to get as close to 100 percent test coverage for their code base as possible, but I find testing orchestration classes is often more hassle than it’s worth.

Although I tend to forgo unit-testing my ASP.NET Core endpoints, I often write integration tests that test them in the context of a complete application. In the next section, we’ll look at ways to write integration tests for your app so you can test its various components in the context of the ASP.NET Core framework as a whole.

36.3 Integration testing: Testing your whole app in-memory‌

In this section you’ll learn how to create integration tests that test component interactions. You’ll learn to create a TestServer that sends HTTP requests in-memory to test custom middleware components more easily. You’ll then learn how to run integration tests for a real application, using your real app’s configuration, services, and middleware pipeline. Finally, you’ll learn how to use WebApplicationFactory to replace services in your app with test versions to avoid depending on third-party APIs in your tests.

If you search the internet for types of testing, you’ll find a host of types to choose among. The differences are sometimes subtle, and people don’t universally agree on the definitions. I chose not to dwell on that topic in this book. I consider unit tests to be isolated tests of a component and integration tests to be tests that exercise multiple components at the same time.

In this section I’m going to show how you can write integration tests for the StatusMiddleware from section 36.1 and the API controller from section 36.2. Instead of isolating the components from the surrounding framework and invoking them directly, you’ll specifically test them in a context similar to how you use them in practice.

Integration tests are an important part of confirming that your components function correctly, but they don’t remove the need for unit tests. Unit tests are excellent for testing small pieces of logic contained in your components and are typically quick to execute. Integration tests are normally significantly slower, as they require much more configuration and may rely on external infrastructure, such as a database.

Consequently, it’s normal to have far more unit tests for an app than integration tests. As you saw in chapter 35, unit tests typically verify the behavior of a component, using valid inputs, edge cases, and invalid inputs to ensure that the component behaves correctly in all cases. Once you have an extensive suite of unit tests, you’ll likely need only a few integration tests to be confident your application is working correctly.

You could write many types of integration tests for an application. You could test that a service can write to a database correctly, integrate with a third-party service (for sending emails, for example), or handle HTTP requests made to it.

In this section we’re going to focus on the last point: verifying that your app can handle requests made to it, as it would if you were accessing the app from a browser. For this, we’re going to use a library provided by the ASP.NET Core team called Microsoft.AspNetCore.TestHost.

36.3.1 Creating a TestServer using the Test Host package‌

Imagine you want to write some integration tests for the StatusMiddleware from section 36.1. You’ve already written unit tests for it, but you want to have at least one integration test that tests the middleware in the context of the ASP.NET Core infrastructure.

You could go about this in many ways. Perhaps the most complete approach would be to create a separate project and configure StatusMiddleware as the only middleware in the pipeline. You’d then need to run this project, wait for it to start up, send requests to it, and inspect the responses.

This would possibly make for a good test, but it would also require a lot of configuration, and it would be fragile and error-prone. What if the test app can’t start because it tries to use an already-taken port? What if the test app doesn’t shut down correctly? How long should the integration test wait for the app to start?

The ASP.NET Core Test Host package lets you get close to this setup without having the added complexity of spinning up a separate app. You add the Test Host to your test project by adding the Microsoft.AspNetCore.TestHost NuGet package, using the Visual Studio NuGet GUI, Package Manager Console, or .NET command-line interface (CLI). Alternatively, add the <PackageReference> element directly to your test project’s .csproj file:

<PackageReference Include="Microsoft.AspNetCore.TestHost" Version="7.0.0"/>

In a typical ASP.NET Core app, you create a HostBuilder in your Program class; configure a web server (Kestrel); and define your application’s configuration, services, and middleware pipeline (using a Startup file). Finally, you call Build() on the HostBuilder to create an instance of an IHost that can be run and that will listen for requests on a given URL and port.
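
As a condensed sketch (assuming a conventional Startup class), that generic-host setup looks something like this:

public class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)                        // creates the HostBuilder
            .ConfigureWebHostDefaults(webBuilder =>
                webBuilder.UseStartup<Startup>())              // configures Kestrel and the Startup class
            .Build()                                           // builds the IHost
            .Run();                                            // starts listening for requests
}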

NOTE All this happens behind the scenes when you use the minimal hosting WebApplicationBuilder and WebApplication APIs. I have an in-depth post exploring the code behind WebApplicationBuilder and how it relates to HostBuilder on my blog at http://mng.bz/a1mj.‌

The Test Host package uses the same HostBuilder to define your test application, but instead of listening for requests at the network level, it creates an IHost that uses in-memory request objects, as shown in figure 36.1.


Figure 36.1 When your app runs normally, it uses the Kestrel server. This listens for HTTP requests and converts the requests to an HttpContext, which is passed to the middleware pipeline. The TestServer doesn’t listen for requests on the network. Instead, you use an HttpClient to make in-memory requests. From the point of view of the middleware, there’s no difference.

It even exposes an HttpClient that you can use to send requests to the test app. You can interact with the HttpClient as though it were sending requests over the network, but in reality, the requests are kept entirely in memory.

Listing 36.7 shows how to use the Test Host package to create a simple integration test for the StatusMiddleware. First, create a HostBuilder, and call ConfigureWebHost() to define your application by adding middleware in the Configure method. This is equivalent to the Startup.Configure() method you would typically use to configure your application when using the generic host approach.‌

NOTE You can write a similar test using WebApplicationBuilder, but this sets up lots of extra defaults such as configuration, extra dependency injection (DI) services, and automatically added middleware, which can slow down simple tests and add some confusion. You can see an example of this approach in StatusMiddlewareTestHostTests in the source code for this book, but I recommend using the approach in listing 36.7, using HostBuilder, in most cases.

Call the UseTestServer() extension method in ConfigureWebHost(), which replaces the default Kestrel server with the TestServer from the Test Host package.

The TestServer is the main component in the Test Host package, which makes all the magic possible. After configuring the HostBuilder, call StartAsync() to build and start the test application. You can then create an HttpClient using the extension method GetTestClient(). This returns an HttpClient configured to make in-memory requests to the TestServer, as shown in the following listing.

Listing 36.7 Creating an integration test with TestServer


public class StatusMiddlewareTests
{
[Fact]
public async Task StatusMiddlewareReturnsPong()
{
var hostBuilder = new HostBuilder() ❶
.ConfigureWebHost(webHost => ❶
{
webHost.Configure(app => ❷
app.UseMiddleware<StatusMiddleware>()); ❷
webHost.UseTestServer(); ❸
});
IHost host = await hostBuilder.StartAsync(); ❹
HttpClient client = host.GetTestClient(); ❺
var response = await client.GetAsync("/ping"); ❻
response.EnsureSuccessStatusCode(); ❼
var content = await response.Content.ReadAsStringAsync(); ❽
Assert.Equal("pong", content); ❽
}
}

❶ Configures a HostBuilder to define the in-memory test app
❷ Adds the StatusMiddleware as the only middleware in the pipeline
❸ Configures the host to use the TestServer instead of Kestrel
❹ Builds and starts the host
❺ Creates an HttpClient, or you can interact directly with the server object
❻ Makes an in-memory request, which is handled by the app as normal
❼ Verifies that the response was a success (2xx) status code
❽ Reads the body content and verifies that it contains “pong”

This test ensures that the test application defined by HostBuilder returns the expected value when it receives a request to the /ping path. The request is entirely in-memory, but from the point of view of StatusMiddleware, it’s the same as if the request came from the network.

The HostBuilder configuration in this example is simple. Even though I’ve called this an integration test, you’re specifically testing the StatusMiddleware on its own rather than in the context of a real application. I think this setup is preferable for testing custom middleware compared with the “proper” unit tests I showed in section 36.1.

Regardless of what you call it, this test relies on simple configuration for the test app. You may also want to test the middleware in the context of your real application so that the result is representative of your app’s real configuration.

If you want to run integration tests based on an existing app, you don’t want to have to configure the test HostBuilder manually, as you did in listing 36.7. Instead, you can use another helper package, Microsoft.AspNetCore.Mvc.Testing.

36.3.2 Testing your application with WebApplicationFactory‌

Building up a HostBuilder and using the Test Host package, as you did in section 36.3.1, can be useful when you want to test isolated infrastructure components, such as middleware. However, it’s also common to want to test your real app, with the full middleware pipeline configured and all the required services added to DI. This gives you the most confidence that your application is going to work in production.

The TestServer that provides the in-memory server can be used for testing your real app, but in principle, a lot more configuration is required. Your real app likely loads configuration files or static files; it may use Razor Pages and views, as well as using WebApplicationBuilder instead of the generic host. Fortunately, the Microsoft.AspNetCore.Mvc.Testing NuGet package and WebApplicationFactory largely solve these configuration problems for you.

NOTE Don’t be put off by the Mvc in the package name; you can use this package for testing ASP.NET Core apps that don’t use any MVC or Razor Pages services or components.

You can use the WebApplicationFactory class (provided by the Microsoft.AspNetCore.Mvc.Testing NuGet package) to run an in-memory version of your real application. It uses the TestServer behind the scenes, but it uses your app’s real configuration, DI service registration, and middleware pipeline. The following listing shows an example that tests that when your application receives a "/ping" request, it responds with "pong".

Listing 36.8 Creating an integration test with WebApplicationFactory

public class IntegrationTests: ❶
IClassFixture<WebApplicationFactory<Program>> ❶
{
private readonly WebApplicationFactory<Program> _fixture; ❷
public IntegrationTests( ❷
WebApplicationFactory<Program> fixture) ❷
{ ❷
_fixture = fixture; ❷
} ❷
[Fact]
public async Task PingRequest_ReturnsPong()
{
HttpClient client = _fixture.CreateClient(); ❸
var response = await client.GetAsync("/ping"); ❹
response.EnsureSuccessStatusCode(); ❹
var content = await response.Content.ReadAsStringAsync(); ❹
Assert.Equal("pong", content); ❹
}
}

❶ Implementing the interface allows sharing an instance across tests.
❷ Injects an instance of WebApplicationFactory, where T is a class in your app
❸ Creates an HttpClient that sends requests to the in-memory TestServer
❹ Makes requests and verifies the response as before

One of the advantages of using WebApplicationFactory as shown in listing 36.8 is that it requires less manual configuration than using the TestServer directly, as shown in listing 36.7, despite performing more configuration behind the scenes. The WebApplicationFactory tests your app using the configuration defined in your Program.cs and Startup.cs files.

NOTE The generic WebApplicationFactory must reference a public class in your app project. It’s common to use the Program or Startup class. If you’re using top-level statements for your app (the default in .NET 7), the automatically generated Program class is internal by default. To make it public and thereby expose it to your test project, add the following partial class definition to your app: public partial class Program {}.‌

Listings 36.8 and 36.7 are conceptually quite different too. Listing 36.7 tests that the StatusMiddleware behaves as expected in the context of a dummy ASP.NET Core app; listing 36.8 tests that your app behaves as expected for a given input. It doesn’t say anything specific about how that happens. Your app doesn’t have to use the StatusMiddleware for the test in listing 36.8 to pass; it simply has to respond correctly to the given request. That means the test knows less about the internal implementation details of your app and is concerned only with its behavior.

DEFINITION Tests that fail whenever you change your app slightly are called brittle or fragile. Try to avoid brittle tests by ensuring that they aren’t dependent on the implementation details of your app.‌

To create tests that use WebApplicationFactory, follow these steps:

  1. Install the Microsoft.AspNetCore.Mvc.Testing NuGet package in your project by running dotnet add package Microsoft.AspNetCore.Mvc.Testing, by using the NuGet explorer in Visual Studio, or by adding a <PackageReference> element to your project file as follows:

    <PackageReference Include="Microsoft.AspNetCore.Mvc.Testing" Version="7.0.0" />
  2. Update the <Project> element in your test project’s .csproj file to the following:

<Project Sdk="Microsoft.NET.Sdk.Web">

This is required by WebApplicationFactory so that it can find your configuration files and static files.

  3. Implement IClassFixture<WebApplicationFactory<T>> in your xUnit test class, where T is a class in your real application’s project. By convention, you typically use your application’s Program class for T.

• WebApplicationFactory uses the T reference to find the entry point for your application, running the application in memory, and dynamically replacing Kestrel with a TestServer for tests.

• If you’re using C# top-level statements and using the Program class for T, you need to make sure that the Program class is accessible from the test project. You can change the visibility of the automatically generated Program class by adding public partial class Program {} to your app.

• The IClassFixture<TFixture> is an xUnit marker interface that tells xUnit to build an instance of TFixture before building the test class and to inject the instance into the test class’s constructor. You can read more about fixtures at https://xunit.net/docs/shared-context.

  4. Inject an instance of WebApplicationFactory in your test class’s constructor. You can use this fixture to create an HttpClient for sending in-memory requests to the TestServer. Those requests emulate your application’s production behavior, as your application’s real configuration, services, and middleware are all used.

The big advantage of WebApplicationFactory is that you can easily test your real app’s behavior. That power comes with responsibility: your app will behave as it would in real life, so it will write to a database and send to third-party APIs! Depending on what you’re testing, you may want to replace some of your dependencies to avoid this, as well as to make testing easier.

36.3.3 Replacing dependencies in WebApplicationFactory‌

When you use WebApplicationFactory to run integration tests on your app, your app will be running in-memory, but other than that, it’s as though you’re running your application using dotnet run. That means any connection strings, secrets, or API keys that can be loaded locally will also be used to run your application.

TIP By default, WebApplicationFactory uses the "Development" hosting environment, the same as when you run locally.

On the plus side, that means you have a genuine test that your application can start correctly. For example, if you’ve forgotten to register a required DI dependency that is detected on application startup, any tests that use WebApplicationFactory will fail.

On the downside, that means all your tests will be using the same database connection and services as when you run your application locally. It’s common to want to replace those with alternative test versions of your services.

As a simple example, imagine the CurrencyConverter that you’ve been testing in this app uses IHttpClientFactory to call a third-party API to retrieve the latest exchange rates. You don’t want to hit that API repeatedly in your integration tests, so you want to replace the CurrencyConverter with your own StubCurrencyConverter.

The first step is to ensure that the service CurrencyConverter implements an interface— ICurrencyConverter for example—and that your app uses this interface throughout, not the implementation. For our simple example, the interface would probably look like the following:

public interface ICurrencyConverter
{
decimal ConvertToGbp(decimal value, decimal rate, int dps);
}

You would register your real CurrencyConverter service in Program.cs using


builder.Services.AddScoped<ICurrencyConverter, CurrencyConverter>();

Now that your application depends on CurrencyConverter only indirectly, you can provide an alternative implementation in your tests.

TIP Using an interface decouples your application services from a specific implementation, allowing you to substitute alternative implementations. This is a key practice for making classes testable.

We’ll create a simple alternative implementation of ICurrencyConverter for our tests that always returns the same value, 3. It’s obviously not terribly useful as an actual converter, but that’s not the point: you have complete control! Create the following class in your test project:

public class StubCurrencyConverter : ICurrencyConverter
{
public decimal ConvertToGbp(decimal value, decimal rate, int dps)
{
return 3;
}
}

You now have all the pieces you need to replace the implementation in your tests. To achieve that, we’ll use a feature of WebApplicationFactory that lets you customize the DI container before starting the test server.

TIP It’s important to remember that you want to replace the implementation only when running in the test project. I’ve seen some people try to configure their real apps to replace live services for fake services when a specific value is set, for example. That is often unnecessary, bloats your apps with test services, and generally adds confusion!

WebApplicationFactory exposes a method, WithWebHostBuilder, that allows you to customize your application before the in-memory TestServer starts. The following listing shows an integration test that uses this builder to replace the default ICurrencyConverter implementation with our test stub.‌

Listing 36.9 Replacing a dependency in a test using WithWebHostBuilder

public class IntegrationTests: ❶
IClassFixture<WebApplicationFactory<Startup>> ❶
{ ❶
private readonly WebApplicationFactory<Startup> _fixture; ❶
public IntegrationTests(WebApplicationFactory<Startup> fixture) ❶
{ ❶
_fixture = fixture; ❶
} ❶
[Fact]
public async Task ConvertReturnsExpectedValue()
{
var customFactory = _fixture.WithWebHostBuilder( ❷
(IWebHostBuilder hostBuilder) => ❷
{
hostBuilder.ConfigureTestServices(services => ❸
{
services.RemoveAll<ICurrencyConverter>(); ❹
services.AddScoped
<ICurrencyConverter, StubCurrencyConverter>(); ❺
});
});
HttpClient client = customFactory.CreateClient(); ❻
var response = await client.GetAsync("/api/currency"); ❼
response.EnsureSuccessStatusCode(); ❼
var content = await response.Content.ReadAsStringAsync(); ❼
Assert.Equal("3", content); ❽
}
}

❶ Implements the required interface and injects it into the constructor
❷ Creates a custom factory with the additional configuration
❸ ConfigureTestServices executes after all other DI services are configured in
your real app.
❹ Removes all implementations of ICurrencyConverter from the DI container
❺ Adds the test service as a replacement
❻ Calling CreateClient bootstraps the application and starts the TestServer.
❼ Invokes the currency converter endpoint
❽ Because the test converter always returns 3, so does the API endpoint.

There are a couple of important points to note in this example:

• WithWebHostBuilder() returns a new WebApplicationFactory instance. The new instance has your custom configuration, and the original injected _fixture instance remains unchanged.

• ConfigureTestServices() is called after your real app’s ConfigureServices() method. That means you can replace services that have been previously registered. You can also use this to override configuration values, as you’ll see in section 36.4.

WithWebHostBuilder() is handy when you want to replace a service for a single test. But what if you want to replace the ICurrencyConverter in every test? All that boiler- plate would quickly become cumbersome. Instead, you can create a custom WebApplicationFactory.

36.3.4 Reducing duplication by creating a custom WebApplicationFactory‌

If you find yourself writing WithWebHostBuilder() a lot in your integration tests, it might be worth creating a custom WebApplicationFactory instead. The following listing shows how to centralize the test service we used in listing 36.9 into a custom WebApplicationFactory.

Listing 36.10 Creating a custom WebApplicationFactory to reduce duplication

public class CustomWebApplicationFactory ❶
: WebApplicationFactory<Program> ❶
{
protected override void ConfigureWebHost( ❷
IWebHostBuilder builder) ❷
{
builder.ConfigureTestServices(services => ❸
{ ❸
services.RemoveAll<ICurrencyConverter>(); ❸
services.AddScoped ❸
<ICurrencyConverter, StubCurrencyConverter>(); ❸
}); ❸
}
}

In this example, we override ConfigureWebHost and configure the test services for the factory. You can use your custom factory in any test by injecting it as an IClassFixture, as you have before. The following listing shows how you would update listing 36.9 to use the custom factory defined in listing 36.10.

Listing 36.11 Using a custom WebApplicationFactory in an integration test

public class IntegrationTests: ❶
IClassFixture<CustomWebApplicationFactory> ❶
{
private readonly CustomWebApplicationFactory _fixture; ❷
public IntegrationTests(CustomWebApplicationFactory fixture) ❷
{
_fixture = fixture;
}
[Fact]
public async Task ConvertReturnsExpectedValue()
{
HttpClient client = _fixture.CreateClient(); ❸
var response = await client.GetAsync("/api/currency");
response.EnsureSuccessStatusCode();
var content = await response.Content.ReadAsStringAsync();
Assert.Equal("3", content); ❹
}
}

❶ Implements the IClassFixture interface for the custom factory
❷ Injects an instance of the factory in the constructor
❸ The client already contains the test service configuration.
❹ The result confirms that the test service was used.

You can also combine your custom WebApplicationFactory, which substitutes services that you always want to replace, with the WithWebHostBuilder() method to override additional services on a per-test basis. That combination gives you the best of both worlds: reduced duplication with the custom factory and control with the per-test configuration.
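
As a hedged sketch of that combination, an individual test can layer extra configuration on top of the shared custom factory. The AnotherStubConverter type here is hypothetical, standing in for whatever per-test replacement you need:

[Fact]
public async Task ConvertUsesPerTestOverride()
{
    // _fixture is the injected CustomWebApplicationFactory from listing 36.11
    var factory = _fixture.WithWebHostBuilder(builder =>
        builder.ConfigureTestServices(services =>
        {
            services.RemoveAll<ICurrencyConverter>();
            services.AddScoped<ICurrencyConverter, AnotherStubConverter>(); // hypothetical per-test stub
        }));

    HttpClient client = factory.CreateClient();
    var response = await client.GetAsync("/api/currency");
    response.EnsureSuccessStatusCode();
}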

Running integration tests using your real app’s configuration provides about the closest thing you’ll get to a guarantee that your app is working correctly. The sticking point in that guarantee is nearly always external dependencies, such as third-party APIs and databases.

In the final section of this chapter we’ll look at how to use the SQLite provider for EF Core with an in-memory database. You can use this approach to write tests for services that use an EF Core database context without needing access to a real database.‌

36.4 Isolating the database with an in-memory EF Core provider‌

In this section you’ll learn how to write unit tests for code that relies on an EF Core DbContext. You’ll learn how to create an in-memory database, and you’ll see the difference between the EF in-memory provider and the SQLite in-memory provider. Finally, you’ll see how to use the in-memory SQLite provider to create fast, isolated tests for code that relies on a DbContext.

As you saw in chapter 12, EF Core is an object-relational mapper (ORM) that is used primarily with relational databases. In this section I’m going to discuss one way to test services that depend on an EF Core DbContext without having to configure or interact with a real database.

NOTE To learn more about testing your EF Core code, see Entity Framework Core in Action, 2nd ed., by Jon P. Smith (Manning, 2021), http://mng.bz/QPpR.‌

The following listing shows a highly stripped-down version of the RecipeService you created in chapter 12 for the recipe app. It shows a single method to fetch the details of a recipe using an injected EF Core DbContext.

Listing 36.12 RecipeService to test, which uses EF Core to store and load entities

public class RecipeService
{
readonly AppDbContext _context; ❶
public RecipeService(AppDbContext context) ❶
{ ❶
_context = context; ❶
} ❶
public RecipeViewModel GetRecipe(int id)
{
return _context.Recipes ❷
.Where(x => x.RecipeId == id)
.Select(x => new RecipeViewModel
{
Id = x.RecipeId,
Name = x.Name
})
.SingleOrDefault();
}
}

❶ An EF Core DbContext is injected in the constructor.
❷ Uses the DbSet<Recipes> property to load recipes and creates a
RecipeViewModel
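
Listing 36.12 assumes the AppDbContext, Recipe entity, and RecipeViewModel from the recipe app in chapter 12. A simplified sketch of those shapes, sufficient for the tests in this section, might be:

public class Recipe
{
    public int RecipeId { get; set; }
    public string Name { get; set; }
}

public class RecipeViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
    public DbSet<Recipe> Recipes { get; set; }
}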

Writing unit tests for this class is a bit of a problem. Unit tests should be fast, repeatable, and isolated from other dependencies, but you have a dependency on your app’s DbContext. You probably don’t want to be writing to a real database in unit tests, as it would make the tests slow, potentially unrepeatable, and highly dependent on the configuration of the database—a failure on all three requirements!

NOTE Depending on your development environment, you may want to use a real database for your integration tests, despite these drawbacks. Using a database like the one you’ll use in production increases the likelihood that you’ll detect any problems in your tests. You can find an example of using Docker to achieve this in Microsoft’s “Testing ASP.NET Core services and web apps” documentation at http://mng.bz/zxDw.

Luckily, Microsoft ships two in-memory database providers for this scenario. Recall from chapter 12 that when you configure your app’s DbContext in Program.cs, you configure a specific database provider, such as SQL Server:

builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseSqlServer(connectionString));

The in-memory database providers are alternative providers designed only for testing. Microsoft includes two in-memory providers in ASP.NET Core:

• Microsoft.EntityFrameworkCore.InMemory—This provider doesn’t simulate a database. Instead, it stores objects directly in memory. It isn’t a relational database as such, so it doesn’t have all the features of a normal database. You can’t execute SQL against it directly, and it won’t enforce constraints, but it’s fast. These limitations are large enough that Microsoft generally advises against using it. See http://mng.bz/e1E9.

• Microsoft.EntityFrameworkCore.Sqlite—SQLite is a relational database. It’s limited in features compared with a database like SQL Server, but it’s a true relational database, unlike the in-memory database provider. Normally a SQLite database is written to a file, but the provider includes an in-memory mode, in which the database stays in memory. This makes it much faster and easier to create and use for testing.

Unfortunately, EF Core migrations are tailored to a specific database, which means you can’t run migrations created for SQL Server or PostgreSQL against a SQLite database. It’s possible to create multiple sets of migrations, as described in the documentation (http://mng.bz/pP15), but this can add a lot of complexity. Consequently, always use EnsureCreated() with SQLite tests, which creates the database without running migrations, as you’ll see in listing 36.13.

Instead of storing data in a database on disk, both of these providers store data in memory, as shown in figure 36.2. This makes them fast and easy to create and tear down, which allows you to create a new database for every test to ensure that your tests stay isolated from one another.


Figure 36.2 The in-memory database provider and SQLite provider (in-memory mode) compared with the SQL Server database provider. The in-memory database provider doesn’t simulate a database as such. Instead, it stores objects in memory and executes LINQ queries against them directly.

NOTE In this section I describe how to use the SQLite provider as an in-memory database, as it’s more full-featured than the in-memory provider. For details on using the in-memory provider, see Microsoft’s “EF Core In-Memory Database Provider” documentation: http://mng.bz/hdIq.

To use the SQLite provider in memory, add the Microsoft.EntityFrameworkCore.Sqlite package to your test project’s .csproj file. This adds the UseSqlite() extension method, which you’ll use to configure the database provider for your unit tests.

Listing 36.13 shows how you could use the in-memory SQLite provider to test the GetRecipe() method of RecipeService. Start by creating a SqliteConnection object with the "DataSource=:memory:" connection string, which tells the provider to store the database in memory, and then open the connection. This is typically faster than using a file-based connection string and means you can easily run multiple tests in parallel, as there’s no shared database.

WARNING The SQLite in-memory database is destroyed when the connection is closed. If you don’t open the connection yourself, EF Core closes the connection to the in-memory database when you dispose of the DbContext. If you want to share an in-memory database between DbContexts, you must explicitly open the connection yourself.

Next, pass the SqliteConnection instance into the DbContextOptionsBuilder<> and call UseSqlite(). This configures the resulting DbContextOptions<> object with the necessary services for the SQLite provider and provides the connection to the in-memory database. Because you’re passing this options object into an instance of AppDbContext, all calls to the DbContext result in calls to the in-memory database provider.

Listing 36.13 Using the in-memory database provider to test an EF Core DbContext

[Fact]
public void GetRecipeDetails_CanLoadFromContext()
{
    var connection = new SqliteConnection("DataSource=:memory:"); ❶
    connection.Open(); ❷
    var options = new DbContextOptionsBuilder<AppDbContext>() ❸
        .UseSqlite(connection) ❸
        .Options; ❸
    using (var context = new AppDbContext(options)) ❹
    {
        context.Database.EnsureCreated(); ❺
        context.Recipes.AddRange( ❻
            new Recipe { RecipeId = 1, Name = "Recipe1" }, ❻
            new Recipe { RecipeId = 2, Name = "Recipe2" }, ❻
            new Recipe { RecipeId = 3, Name = "Recipe3" }); ❻
        context.SaveChanges(); ❼
    }
    using (var context = new AppDbContext(options)) ❽
    {
        var service = new RecipeService(context); ❾
        var recipe = service.GetRecipe(id: 2); ❿
        Assert.NotNull(recipe); ⓫
        Assert.Equal(2, recipe.Id); ⓫
        Assert.Equal("Recipe2", recipe.Name); ⓫
    }
}

❶ Configures an in-memory SQLite connection using the special “in-memory” connection string
❷ Opens the connection so EF Core won’t close it automatically
❸ Creates an instance of DbContextOptions<> and configures it to use the SQLite connection
❹ Creates a DbContext and passes in the options
❺ Ensures that the in-memory database matches EF Core’s model (similar to running migrations)
❻ Adds some recipes to the DbContext
❼ Saves the changes to the in-memory database
❽ Creates a fresh DbContext to test that you can retrieve data from the DbContext
❾ Creates the RecipeService to test and passes in the fresh DbContext
❿ Executes the GetRecipe function. This executes the query against the in-memory database.
⓫ Verifies that you retrieved the recipe correctly from the in-memory database

This example follows the standard format for any time you need to test a class that depends on an EF Core DbContext:

  1. Create a SqliteConnection with the "DataSource=:memory:" connection string, and open the connection.

  2. Create a DbContextOptionsBuilder<> and call UseSqlite(), passing in the open connection.

  3. Retrieve the DbContextOptions object from the Options property.

  4. Pass the options to an instance of your DbContext and ensure the database matches EF Core’s model by calling context.Database.EnsureCreated(). This is similar to running migrations on your database, but it should be used only on test databases. Create and add any required test data to the in-memory database, and call SaveChanges() to persist the data.

  5. Create a new instance of your DbContext and inject it into your test class. All queries will be executed against the in-memory database.

By using a separate DbContext for each purpose, you can avoid bugs in your tests due to EF Core caching data without writing it to the database. With this approach, you can be sure that any data read in the second DbContext was persisted to the underlying in-memory database provider.

This was a brief introduction to using the SQLite provider as an in-memory database provider and EF Core testing in general, but if you follow the setup shown in listing 36.13, it should take you a long way. The source code for this chapter shows how you can combine this code with a custom WebApplicationFactory to use an in-memory database for your integration tests. For more details on testing EF Core, including additional options and strategies, see Entity Framework Core in Action, 2nd ed., by Jon P. Smith (Manning, 2021).‌‌
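As a rough sketch of that combination (not the book's exact implementation), a custom factory could replace the app's registered DbContext with one that uses an in-memory SQLite connection. The RemoveAll() call comes from Microsoft.Extensions.DependencyInjection.Extensions, and tests would still need to call context.Database.EnsureCreated() before using the database.

public class SqliteWebApplicationFactory : WebApplicationFactory<Program>
{
    private readonly SqliteConnection _connection =
        new SqliteConnection("DataSource=:memory:");

    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        _connection.Open();                              // Keeps the in-memory database alive
        builder.ConfigureTestServices(services =>
        {
            services.RemoveAll<DbContextOptions<AppDbContext>>();   // Drop the real provider registration
            services.AddDbContext<AppDbContext>(options =>          // Re-register against in-memory SQLite
                options.UseSqlite(_connection));
        });
    }

    protected override void Dispose(bool disposing)
    {
        _connection.Dispose();                           // Closing the connection destroys the database
        base.Dispose(disposing);
    }
}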

Summary

Use the DefaultHttpContext class to unit-test your custom middleware components. If you need access to the response body, you must replace the default Stream.Null with a MemoryStream instance and read the stream manually after invoking the middleware.
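As a minimal sketch of such a test (PingMiddleware is a hypothetical middleware that writes "pong" to the response, not a class from the book's sample code):

[Fact]
public async Task Middleware_WritesExpectedResponse()
{
    var context = new DefaultHttpContext();
    context.Response.Body = new MemoryStream();           // Replace the default Stream.Null

    var middleware = new PingMiddleware(next: _ => Task.CompletedTask);
    await middleware.Invoke(context);

    context.Response.Body.Seek(0, SeekOrigin.Begin);       // Rewind before reading the body
    string body = await new StreamReader(context.Response.Body).ReadToEndAsync();

    Assert.Equal("pong", body);
}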

API controllers, minimal APIs, and Razor Page models can be unit-tested like other classes, but they should generally contain little business logic, so it may not be worth the effort. For example, the API controller is tested independently of routing, model validation, and filters, so you can’t easily test logic that depends on any of these aspects.

Integration tests allow you to test multiple components of your app at the same time, typically within the context of the ASP.NET Core framework itself. The Microsoft.AspNetCore.TestHost package provides a TestServer object that you can use to create a simple web host for testing. This creates an in- memory server that you can make requests to and receive responses from. You can use the TestServer directly when you wish to create integration tests for custom components like middleware.
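For example, a middleware-only integration test could build a minimal host with UseTestServer() and send requests to it with GetTestClient(). This is only a sketch, reusing the hypothetical PingMiddleware from the previous example; the using declaration disposes of the host when the test finishes.

[Fact]
public async Task TestServer_ReturnsPongFromMiddleware()
{
    using IHost host = await new HostBuilder()
        .ConfigureWebHost(webBuilder => webBuilder
            .UseTestServer()                                    // In-memory server instead of Kestrel
            .Configure(app => app.UseMiddleware<PingMiddleware>()))
        .StartAsync();

    HttpClient client = host.GetTestClient();                   // Client wired to the in-memory server

    HttpResponseMessage response = await client.GetAsync("/ping");

    Assert.Equal("pong", await response.Content.ReadAsStringAsync());
}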

For more extensive integration tests of a real application, you should use the WebApplicationFactory class in the Microsoft.AspNetCore.Mvc.Testing package.

Implement IClassFixture<WebApplicationFactory<Program>> on your test class, and inject an instance of WebApplicationFactory<Program> into the constructor. This creates an in-memory version of your whole app, using the same configuration, DI services, and middleware pipeline. You can send in-memory requests to your app to get the best idea of how your application will behave in production.

To customize the WebApplicationFactory, call WithWebHostBuilder() and then call ConfigureTestServices(). This method is invoked after your app’s standard DI configuration. This enables you to add or remove the default services for your app, such as to replace a class that contacts a third-party API with a stub implementation.

If you need to apply the same service customizations to every test, you can create a custom WebApplicationFactory by deriving from it and overriding the ConfigureWebHost method. You can place all your configuration in the custom factory and implement IClassFixture<CustomWebApplicationFactory> in your test classes instead of calling WithWebHostBuilder() in every test method.

You can use the EF Core SQLite provider as an in-memory database to test code that depends on an EF Core database context. You configure the in-memory provider by creating a SqliteConnection with a "DataSource=:memory:" connection string.

Create a DbContextOptionsBuilder<> object and call UseSqlite(), passing in the connection. Finally, pass DbContextOptions<> into an instance of your app’s DbContext, and call context.Database.EnsureCreated() to prepare the in-memory database for use with EF Core.

The SQLite in-memory database is maintained as long as there’s an open SqliteConnection.

When you open the connection manually, the database can be used with multiple DbContexts. If you don’t call Open() on the connection, EF Core will close the connection (and delete the in-memory database) when the DbContext is disposed of.

  1. WebApplicationFactory has many other methods you could override for other scenarios. For details, see https://learn.microsoft.com/aspnet/core/test/integration-tests.

ASP.NET Core in Action 35 Testing applications with xUnit

35 Testing applications with xUnit‌

This chapter covers

• Testing in ASP.NET Core

• Creating unit test projects with xUnit

• Creating Fact and Theory tests

When I started programming, I didn’t understand the benefits of automated testing. It involved writing so much more code. Wouldn’t it be more productive to be working on new features instead? It was only when my projects started getting bigger that I appreciated the advantages. Instead of having to run my app and test each scenario manually, I could click Play on a suite of tests and have my code tested for me automatically.

Testing is universally accepted as good practice, but how it fits into your development process can often turn into a religious debate. How many tests do you need? Should you write tests before, during, or after the main code? Is anything less than 100 percent coverage of your code base adequate? What about 80 percent?

This chapter won’t address any of those questions. Instead, I focus on the mechanics of creating a test project in .NET. In this chapter I show you how to use isolated unit tests to verify the behavior of your services in isolation. In chapter 36 we build on these basics to create unit tests for an ASP.NET Core application, as well as create integration tests that exercise multiple components of your application at the same time.

TIP For a broader discussion of testing, or if you’re brand-new to unit testing, see The Art of Unit Testing, 3rd ed., by Roy Osherove (Manning, 2024). If you want to explore unit test best practices using C# examples, see Unit Testing Principles, Practices, and Patterns, by Vladimir Khorikov (Manning, 2020). Effective Software Testing: A Developer’s Guide, by Maurício Aniche (Manning, 2022), uses Java examples but covers a broad range of topics and techniques. Alternatively, for an in-depth look at testing with xUnit in .NET Core, see .NET in Action, 2nd ed., by Dustin Metzgar (Manning, 2023).

In section 35.1 I introduce the .NET software development kit (SDK) testing framework and show how you can use it to create unit testing apps. I describe the components involved, including the testing SDK and the testing frameworks themselves, like xUnit and MSTest. Finally, I cover some of the terminology I use throughout this chapter and chapter 36.

This chapter focuses on the mechanics of getting started with xUnit. You’ll learn how to create unit test projects, reference classes in other projects, and run tests with Visual Studio or the .NET command-line interface (CLI). You’ll create a test project and use it to test the behavior of a basic currency-converter service. Finally, you’ll write some simple unit tests that check whether the service returns the expected results and throws exceptions when you expect it to.

Let’s start by looking at the overall testing landscape for ASP.NET Core, the options available to you, and the components involved.

35.1 An introduction to testing in ASP.NET Core‌

In this section you’ll learn about the basics of testing in ASP.NET Core. You’ll learn about the types of tests you can write, such as unit tests and integration tests, and why you should write both types. Finally, you’ll see how testing fits into ASP.NET Core.

If you have experience building apps with the full .NET Framework or mobile apps with Xamarin, you might have some experience with unit testing frameworks. If you were building apps in Visual Studio, the steps for creating a test project differed among testing frameworks (such as xUnit, NUnit, and MSTest), and running the tests in Visual Studio often required installing a plugin. Similarly, running tests from the command line varied among frameworks.

With the .NET SDK, testing in ASP.NET Core and .NET Core is a first-class citizen, on a par with building, restoring packages, and running your application. Just as you can run dotnet build to build a project, or dotnet run to execute it, you can use dotnet test to execute the tests in a test project, regardless of the testing framework used.

The dotnet test command uses the underlying .NET SDK to execute the tests for a given project. This is the same as when you run your tests using the Visual Studio test runner, so whichever approach you prefer, the results are the same.

Test projects are console apps that contain several tests. A test is typically a method that evaluates whether a given class in your app behaves as expected. The test project typically has dependencies on at least three components:

• The .NET Test SDK

• A unit testing framework, such as xUnit, NUnit, Fixie, or MSTest

• A test-runner adapter for your chosen testing framework so that you can execute your tests by calling dotnet test

These dependencies are normal NuGet packages that you can add to a project, but they allow you to hook in to the dotnet test command and the Visual Studio test runner. You’ll see an example .csproj file from a test app in the next section.

Typically, a test consists of a method that runs a small piece of your app in isolation and checks whether it has the desired behavior. If you were testing a Calculator class, you might have a test that checks that passing the values 1 and 2 to the Add() method returns the expected result, 3.‌
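As a minimal sketch of such a test (Calculator is a hypothetical class, and the test uses the xUnit [Fact] style introduced later in this chapter):

public class CalculatorTests
{
    [Fact]
    public void Add_ReturnsSumOfArguments()
    {
        var calculator = new Calculator();   // Hypothetical class under test

        int result = calculator.Add(1, 2);

        Assert.Equal(3, result);
    }
}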

You can write lots of small, isolated tests like this for your app’s classes to verify that each component is working correctly, independent of any other components. Small isolated tests like these are called unit tests.

Using the ASP.NET Core framework, you can build apps that you can easily unit test, because the framework

• Avoids static types

• Uses interfaces instead of concrete implementations

• Has a highly modular architecture, allowing you to test your API controllers in isolation from your action filters and model binding

But the fact that all your components work correctly independently doesn’t mean they’ll work when you put them together. For that, you need integration tests, which test the interaction between multiple components.

The definition of an integration test is another somewhat-contentious problem, but I think of integration tests as testing multiple components together or testing large vertical slices of your app—testing a user manager class that can save values to a database, for example, or testing that a request made to a health-check endpoint returns the expected response. Integration tests don’t necessarily include the entire app, but they use more components than unit tests.

NOTE I don’t cover UI tests, which (for example) interact with a browser to provide true end-to-end automated testing. Playwright (https://playwright.dev) and Cypress (https://www.cypress.io) are two of the most popular modern tools for UI testing.

ASP.NET Core has a couple of tricks up its sleeve when it comes to integration testing, as you’ll see in chapter 36. You can use the Test Host package to run an in-process ASP.NET Core server, which you can send requests to and inspect the responses. This saves you from the orchestration headache of trying to spin up a web server on a different process, making sure ports are available, and so on, but still allows you to exercise your whole app.

At the other end of the scale, the Entity Framework Core (EF Core) SQLite in-memory database provider lets you isolate your tests from the database. Interacting with and configuring a database is often one of the hardest aspects of automating tests, so this provider lets you sidestep the problem. You’ll see how to use it in chapter 36.

The easiest way to get to grips with testing is to give it a try, so in the next section you’ll create your first test project and use it to write unit tests for a simple custom service.

35.2 Creating your first test project with xUnit‌

As I described in section 35.1, to create a test project you need to use a testing framework. You have many options, such as NUnit and MSTest, but (anecdotally) the most used test framework with ASP.NET Core is xUnit (https://xunit.net). The ASP.NET Core framework project itself uses xUnit as its testing framework, so it’s become somewhat of a convention. If you’re familiar with a different testing framework, feel free to use that instead.

Visual Studio includes a template to create a .NET 7 xUnit test project, as shown in figure 35.1. Choose File > New > Project, and choose xUnit Test Project in the New Project dialog box. Alternatively, you could choose MSTest Project or NUnit Test Project if you’re more comfortable with those frameworks.


Figure 35.1 The New Project dialog box in Visual Studio. Choose xUnit Test Project to create an xUnit project, or choose Unit Test Project to create an MSTest project.

Alternatively, if you’re not using Visual Studio, you can create a similar template using the .NET CLI with

dotnet new xunit

Whether you use Visual Studio or the .NET CLI, the template creates a console project and adds the required testing NuGet
packages to your .csproj file, as shown in the following listing. If you chose to create an MSTest (or other framework) test project, the xUnit and xUnit runner packages would be replaced by packages appropriate to your testing framework of choice.

Listing 35.1 The .csproj file for an xUnit test project

<Project Sdk="Microsoft.NET.Sdk"> ❶
  <PropertyGroup> ❶
    <TargetFramework>net7.0</TargetFramework> ❶
    <IsPackable>false</IsPackable>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.3.2" /> ❷
    <PackageReference Include="xunit" Version="2.4.2" /> ❸
    <PackageReference Include="xunit.runner.visualstudio" Version="2.4.5" /> ❹
    <PackageReference Include="coverlet.collector" Version="3.1.2" /> ❺
  </ItemGroup>
</Project>

❶ The test project is a standard .NET 7.0 project.
❷ The .NET Test SDK, required by all test projects
❸ The xUnit test framework
❹ The xUnit test adapter for the .NET Test SDK
❺ An optional package that collects metrics about how much of your code base is covered by tests

TIP Adding the Microsoft.NET.Test.Sdk package marks the project as a test project by setting the IsTestProject MSBuild property.

In addition to the NuGet packages, the template includes a single example unit test. This doesn’t do anything, but it’s a valid xUnit test all the same, as shown in the following listing.

In xUnit, a test is a method on a public class, decorated with a [Fact] attribute.

Listing 35.2 An example xUnit unit test, created by the default template

public class UnitTest1 ❶
{
    [Fact] ❷
    public void Test1() ❸
    {
    }
}

Even though this test doesn’t test anything, it highlights some characteristics of xUnit [Fact] tests:

• Tests are denoted by the [Fact] attribute.

• The method should be public, with no method arguments.

• The method is void. It could also be an async method and return Task.

• The method resides inside a public, nonstatic class.

NOTE The [Fact] attribute and these restrictions are specific to the xUnit testing framework. Other frameworks have other ways to denote test classes and different restrictions on the classes and methods themselves.

It’s also worth noting that although I said test projects are console apps, there’s no Program class or static void Main method. Instead, the app looks more like a class library because the test SDK automatically injects a Program class at build time. It’s not something you have to worry about in‌ general, but you may have problems if you try to add your own Program.cs file to your test project.

NOTE This isn’t a common thing to do, but I’ve seen it done occasionally. I describe this problem in detail and how to fix it in my blog post “Fixing the error ‘Program has more than one entry point defined’ for console apps containing xUnit tests,” at http://mng.bz/w9q5.

Before we go any further and create some useful tests, we’ll run the test project as it is, using both Visual Studio and the .NET SDK tooling, to see the expected output.

35.3 Running tests with dotnet test‌

When you create a test app that uses the .NET Test SDK, you can run your tests by using Visual Studio or the .NET CLI. In Visual Studio, you run tests by choosing Test > Run All Tests or by choosing Run All in the Test Explorer window, as shown in figure 35.2.


Figure 35.2 The Test Explorer window in Visual Studio lists all tests found in the solution and their most recent pass/fail status. Click a test in the left pane to see details about the most recent test run in the right pane.

The Test Explorer window lists all the tests found in your solution and the results of each test. In xUnit, a test passes if it doesn’t throw an exception, so UnitTest1.Test1 passed successfully.

NOTE The Test Explorer in Visual Studio uses the open-source VSTest protocol (https://github.com/microsoft/vstest) for listing and debugging tests. It’s also used by Visual Studio for Mac and Visual Studio Code, for example.

Alternatively, you can run your tests from the command line using the .NET CLI by running dotnet test from the unit-test project’s folder, as shown in figure 35.3.


Figure 35.3 You can run tests from the command line using dotnet test. This restores and builds the test project before executing all the tests in the project.

NOTE You can also run dotnet test from the solution folder. This runs all test projects referenced in the .sln solution file.

Calling dotnet test runs a restore and build of your test project and then runs the tests, as you can see from the console output in figure 35.3. Under the hood, the .NET CLI calls in to the same underlying infrastructure that Visual Studio does (the .NET SDK), so you can use whichever approach better suits your development style.

You’ve seen a successful test run, so it’s time to replace that placeholder test with something useful. First things first, though: you need something to test.

35.4 Referencing your app from your test project‌

In test-driven development (TDD), you typically write your unit tests before you write the actual class you’re testing, but I’m going to take a more traditional route here and create the class to test first. You’ll write the tests for it afterward.

Let’s assume you’ve created an app called ExchangeRates.Web, which exposes an API that converts among different currencies, and you want to add tests for it. You’ve added a test project to your solution as described in section 35.2.1, so your solution looks like figure 35.4.


Figure 35.4 A basic solution containing an ASP.NET Core app called ExchangeRates.Web and a test project called ExchangeRates.Web.Tests

For the ExchangeRates.Web.Tests project to test the classes in the ExchangeRates.Web project, you need to add a reference to the web project from your test project. In Visual Studio, you can do this by right-clicking the Dependencies node of your test project and choosing Add Project Reference from the contextual menu, as shown in figure 35.5. You can then select the web project in the Reference Manager dialog box. After adding it to your project, it shows up inside the Dependencies node, under Projects.


Figure 35.5 To test your app project, you need to add a reference to it from the test project. Right-click the Dependencies node, and choose Add Project Reference from the contextual menu. The app project is referenced inside the Dependencies node, under Projects.

Alternatively, you can edit the .csproj file directly and add a <ProjectReference> element inside an <ItemGroup> element with the relative path to the referenced project’s .csproj file:

<ItemGroup>
  <ProjectReference
    Include="..\..\src\ExchangeRates.Web\ExchangeRates.Web.csproj" />
</ItemGroup>

Note that the path is the relative path. A ".." in the path means the parent folder, so the relative path shown correctly traverses the directory structure for the solution, including both the src and test folders shown in Solution Explorer in figure 35.5.

TIP Remember that you can edit the .csproj file directly in Visual Studio by double-clicking the project in Solution Explorer.

Common conventions for project layout

The layout and naming of projects within a solution are completely up to you, but ASP.NET Core projects have generally settled on a couple of conventions that differ slightly from the Visual Studio File > New defaults. These conventions are used by the ASP.NET team on GitHub, as well as by many other open-source C# projects.

The following figure shows an example of these layout conventions. In summary, these are as follows:

• The .sln solution file is in the root directory.

• The main projects are placed in a src subdirectory.

• The test projects are placed in a test or tests subdirectory.

• Each main project has a test project equivalent, named the same as the associated main project with a .Test or .Tests suffix.

• Other folders (such as samples, tools, and docs) contain sample projects, tools for building the project, or documentation.


Conventions for project structures have emerged in the ASP.NET Core framework libraries and open-source projects on GitHub. You don’t have to follow them for your own project, but it’s worth being aware of them.

All these conventions are optional. Whether to follow them is entirely up to you. Either way, it’s good to be aware of them so you can easily navigate other projects on GitHub.

Your test project is now referencing your web project, so you can write tests for classes in the web project. You’re going to be testing a simple class used for converting among currencies, as shown in the following listing.

Listing 35.3 Example CurrencyConverter class to convert currencies to GBP

public class CurrencyConverter
{
    public decimal ConvertToGbp( ❶
        decimal value, decimal exchangeRate, int decimalPlaces) ❶
    {
        if (exchangeRate <= 0) ❷
        { ❷
            throw new ArgumentException( ❷
                "Exchange rate must be greater than zero", ❷
                nameof(exchangeRate)); ❷
        } ❷
        var valueInGbp = value / exchangeRate; ❸
        return decimal.Round(valueInGbp, decimalPlaces); ❹
    }
}

❶ The ConvertToGbp method converts a value using the provided exchange rate and rounds it.
❷ Guard clause, as only positive exchange rates are valid
❸ Converts the value
❹ Rounds the result and returns it

This class has a single method, ConvertToGbp(), that converts a value from one currency into GBP, given the provided exchangeRate. Then it rounds the value to the required number of decimal places and returns it.

WARNING This class is a basic implementation. In practice, you’d need to handle arithmetic overflow/underflow for large or negative values, as well as consider other edge cases. This example is for demonstration purposes only!

Imagine you want to convert 5.27 USD to GBP, and the exchange rate from GBP to USD is 1.31. If you want to round to four decimal places, you’d make this call:

converter.ConvertToGbp(value: 5.27m, exchangeRate: 1.31m, decimalPlaces: 4);

You have your sample application, a class to test, and a test project, so it’s about time you wrote some tests.

35.5 Adding Fact and Theory unit tests‌

When I write unit tests, I usually target one of three paths through the method under test:

• The happy path—Where typical arguments with expected values are provided

• The error path—Where the arguments passed are invalid and tested for

• Edge cases—Where the provided arguments are right on the edge of expected values

I realize that this is a broad classification, but it helps me think about the various scenarios I need to consider.

TIP A completely different approach to testing is property-based testing. This fascinating approach is common in functional programming communities, like F#. You can find a great introduction by Scott Wlaschin in his blog post series “The ‘Property Based Testing’ Series” at http://mng.bz/o1eZ. That post uses F#, but it is still highly accessible even if you’re new to the language.

Let’s start with the happy path, writing a unit test that verifies that the ConvertToGbp() method is working as expected with typical input values, as shown in the following listing.

Listing 35.4 Unit test for ConvertToGbp using expected arguments

[Fact] ❶
public void ConvertToGbp_ConvertsCorrectly() ❷
{
    var converter = new CurrencyConverter(); ❸
    decimal value = 3; ❹
    decimal rate = 1.5m; ❹
    int dp = 4; ❹
    decimal expected = 2; ❺
    var actual = converter.ConvertToGbp(value, rate, dp); ❻
    Assert.Equal(expected, actual); ❼
}

❶ The [Fact] attribute marks the method as a test method.
❷ You can call the test anything you like.
❸ The class to test, commonly called the “system under test”
❹ The parameters of the test that will be passed to ConvertToGbp
❺ The result you expect
❻ Executes the method and captures the result
❼ Verifies that the expected and actual values match; if they don’t, throws an exception

This is your first proper unit test, which has been configured using Arrange, Act, Assert (AAA) style:

• Arrange—Define all the parameters and create an instance of the system (class) under test (SUT).

• Act—Execute the method being tested, and capture the result.

• Assert—Verify that the result of the Act stage had the expected value.

Most of the code in this test is standard C#, but if you’re new to testing, the Assert call will be unfamiliar. This is a helper class provided by xUnit for making assertions about your code. If the parameters provided to Assert.Equal() aren’t equal, the Equal() call will throw an exception and fail the test. If you change the expected variable in listing 35.4 to 2.5 instead of 2, for example, and run the test, Test Explorer shows a failure, as you see in figure 35.6.‌‌


Figure 35.6 When a test fails, it’s marked with a red cross in Test Explorer. Clicking the test in the left pane shows the reason for the failure in the right pane. In this case, the expected value was 2.5, but the actual value was 2.

TIP Alternative assertion libraries such as Fluent Assertions (https://fluentassertions.com) and Shouldly (https://github.com/shouldly/shouldly) allow you to write your assertions in a more natural style, such as actual.Should().Be(expected). These libraries are optional, but I find they make tests more readable and error messages easier to understand.

In listing 35.4 you chose specific values for value, exchangeRate, and decimalPlaces to test the happy path. But this is only one set of values in an infinite number of possibilities, so you probably should test at least a few different combinations. One way to achieve this would be to copy and paste the test multiple times, tweak the parameters, and change the test method name to make it unique. xUnit provides an alternative way to achieve the same thing without requiring so much duplication.

NOTE The names of your test class and method are used throughout the test framework to describe your test. You can customize how these are displayed in Visual Studio and in the CLI by configuring an xunit.runner.json file, as described at https://xunit.net/docs/configuration-files.

Instead of creating a [Fact] test method, you can create a [Theory] test method. A theory provides a way of parameterizing your test methods, effectively taking your test method and running it multiple times with different arguments. Each set of arguments is considered a different test.‌

You could rewrite the [Fact] test in listing 35.4 to be a [Theory] test, as shown in the next listing. Instead of specifying the variables in the method body, pass them as parameters to the method and then decorate the method with three [InlineData] attributes. Each instance of the attribute provides the parameters for a single run of the test.

Listing 35.5 Theory test for ConvertToGbp testing multiple sets of values

[Theory] ❶
[InlineData(0, 3, 0)] ❷
[InlineData(3, 1.5, 2)] ❷
[InlineData(3.75, 2.5, 1.5)] ❷
public void ConvertToGbp_ConvertsCorrectly( ❸
    decimal value, decimal rate, decimal expected) ❸
{
    var converter = new CurrencyConverter();
    int dps = 4; ❹
    var actual = converter.ConvertToGbp(value, rate, dps); ❺
    Assert.Equal(expected, actual); ❻
}

❶ Marks the method as a parameterized test
❷ Each [InlineData] attribute provides all the parameters for a single run of the test method.
❸ The method takes parameters, which are provided by the [InlineData] attributes.
❹ The dps variable doesn’t change, so there’s no need to include it in [InlineData].
❺ Executes the SUT
❻ Verifies the result

If you run this [Theory] test using dotnet test or Visual Studio, it will show up as three separate tests, one for each set of [InlineData], as shown in figure 35.7.


Figure 35.7 Each set of parameters in an [InlineData] attribute for a [Theory] test creates a separate test run. In this example, a single [Theory] has three [InlineData] attributes, so it creates three tests, named according to the method name and the provided parameters.

[InlineData] isn’t the only way to provide the parameters for your theory tests, but it’s one of the most commonly used. You can also use a static property on your test class with the [MemberData] attribute or a class itself using the [ClassData] attribute.

TIP I describe how you can use the [ClassData] and [MemberData] attributes in my blog post “Creating parameterised tests in xUnit with [InlineData], [ClassData], and [MemberData]”: http://mng.bz/8ayP.
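As a sketch of the [MemberData] variant (reusing the CurrencyConverter from listing 35.3), a static property supplies one object[] of arguments per test run:

public class CurrencyConverterMemberDataTests
{
    public static IEnumerable<object[]> ConversionData => new List<object[]>
    {
        new object[] { 0m, 3m, 0m },
        new object[] { 3m, 1.5m, 2m },
        new object[] { 3.75m, 2.5m, 1.5m },
    };

    [Theory]
    [MemberData(nameof(ConversionData))]                 // Each object[] becomes a separate test run
    public void ConvertToGbp_ConvertsCorrectly(
        decimal value, decimal rate, decimal expected)
    {
        var converter = new CurrencyConverter();

        decimal actual = converter.ConvertToGbp(value, rate, decimalPlaces: 4);

        Assert.Equal(expected, actual);
    }
}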

You now have some tests for the happy path of the ConvertToGbp() method, and I even sneaked an edge case into listing 35.5 by testing the case where value = 0. The final concept I’ll cover is testing error cases, where invalid values are passed to the method under test.‌

35.6 Testing failure conditions‌

A key part of unit testing is checking whether the system under test handles edge cases and errors correctly. For the CurrencyConverter, that would mean checking how the class handles negative values, small or zero exchange rates, large values and rates, and so on.

Some of these edge cases might be rare but valid cases, whereas other cases might be technically invalid. Calling ConvertToGbp with a negative value is probably valid; the converted result should be negative too. On the other hand, a negative exchange rate doesn’t make sense conceptually, so it should be considered an invalid value.

Depending on the design of the method, it’s common to throw exceptions when invalid values are passed to a method. In listing 35.3 you saw that we throw an ArgumentException if the exchangeRate parameter is less than or equal to 0.

xUnit includes a variety of helpers on the Assert class for testing whether a method throws an exception of an expected type. You can then make further assertions on the exception, such as to test whether the exception had an expected message.

WARNING Take care not to tie your test methods too closely to the internal implementation of a method. Doing so can make your tests brittle, and trivial changes to a class may break the unit tests.

The following listing shows a [Fact] test to check the behavior of the ConvertToGbp() method when you pass it a 0 exchangeRate. The Assert.Throws method takes a lambda function that describes the action to execute, which should throw an exception when run.‌‌

Listing 35.6 Using Assert.Throws<> to test whether a method throws an exception

[Fact]
public void ThrowsExceptionIfRateIsZero()
{
    var converter = new CurrencyConverter();
    const decimal value = 1;
    const decimal rate = 0; ❶
    const int dp = 2;
    var ex = Assert.Throws<ArgumentException>( ❷
        () => converter.ConvertToGbp(value, rate, dp)); ❸
    // Further assertions on the exception thrown, ex
}

❶ An invalid value
❷ You expect an ArgumentException to be thrown.
❸ The method to execute, which should throw an exception

The Assert.Throws method executes the lambda and catches the exception. If the exception thrown matches the expected type, the test passes. If no exception is thrown or the exception thrown isn’t of the expected type, the Assert.Throws method throws an exception and fails the test.
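For example, building on listing 35.6, you could assert that the exception relates to the expected parameter. This sketch relies on the nameof(exchangeRate) argument passed to the ArgumentException in listing 35.3:

[Fact]
public void ThrowsExceptionWithExpectedParamNameIfRateIsZero()
{
    var converter = new CurrencyConverter();

    var ex = Assert.Throws<ArgumentException>(
        () => converter.ConvertToGbp(value: 1, exchangeRate: 0, decimalPlaces: 2));

    Assert.Equal("exchangeRate", ex.ParamName);   // Set by nameof(exchangeRate) in listing 35.3
}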

That brings us to the end of this brief introduction to unit testing with xUnit. The examples in this section described how to use the new .NET Test SDK, but we didn’t cover anything specific to ASP.NET Core. In chapter 36 we’ll focus on applying these techniques to testing ASP.NET Core projects specifically.

Summary

Unit test apps are console apps that have a dependency on the .NET Test SDK, a test framework such as xUnit, MSTest, or NUnit, and a test runner adapter. You can run the tests in a test project by calling dotnet test from the command line in your test project or by using Test Explorer in Visual Studio.

Many testing frameworks are compatible with the .NET Test SDK, but xUnit has emerged as an almost de facto standard for ASP.NET Core projects. The ASP.NET Core team themselves use it to test the framework.

To create an xUnit test project, choose xUnit Test Project in Visual Studio or use the dotnet new xunit CLI command. This creates a test project containing the Microsoft.NET.Test.Sdk, xunit, and xunit.runner.visualstudio NuGet packages.

xUnit includes two attributes to identify test methods. [Fact] methods should be public and parameterless. [Theory] methods can contain parameters, so they can be used to run a similar test repeatedly with different parameters. You can provide the data for each [Theory] run using the [InlineData], [ClassData], or [MemberData] attributes.

Use assertions in your test methods to verify that the SUT returned an expected value. Assertions exist for most common scenarios, including verifying that a method call raised an exception of a specific type. If your code raises an unhandled exception, the test will fail.

ASP.NET Core in Action 34 Building background tasks and services

34 Building background tasks and services

This chapter covers

• Creating tasks that run in the background for your application

• Using the generic IHost to create Windows Services and Linux daemons

• Using Quartz.NET to run tasks on a schedule in a clustered environment

We’ve covered a lot of ground in the book so far. You’ve learned how to create page-based applications using Razor Pages and how to create APIs for mobile clients and services. You’ve seen how to add authentication and authorization to your application, use Entity Framework Core (EF Core) for storing state in the database, and create custom components to meet your requirements.

As well as using these UI-focused apps, you may find you need to build background or batch-task services. These services aren’t meant to interact with users directly. Rather, they stay running in the background, processing items from a queue or periodically executing a long-running process.

For example, you might want to have a background service that sends email confirmations for e-commerce orders or a batch job that calculates sales and losses for retail stores after the shops close. ASP.NET Core includes support for these background tasks by providing abstractions for running a task in the background when your application starts.

In section 34.1 you’ll learn about the background task support provided in ASP.NET Core by the IHostedService interface. You’ll learn how to use the BackgroundService helper class to create tasks that run on a timer and how to manage your DI lifetimes correctly in a long-running task.

In section 34.2 we’ll take the background service concept one step further to create headless worker services using the generic IHost. Worker services don’t use Razor Pages, API controllers, or minimal API endpoints; instead, they consist only of IHostedService services running tasks in the background. You’ll also see how to configure and install a worker service app as a Windows Service or as a Linux daemon.

In section 34.3 I introduce the open-source library Quartz.NET, which provides extensive scheduling capabilities for creating background services. You’ll learn how to install Quartz.NET in your applications, create complex schedules for your tasks, and add redundancy to your worker services using clustering.

Before we get to more complex scenarios, we’ll start by looking at the built-in support for running background tasks in your apps.

34.1 Running background tasks with IHostedService‌

In most applications, it’s common to create tasks that happen in the background rather than in response to a request. This could be a task to process a queue of emails, handling events published to some sort of a message bus or running a batch process to calculate daily profits. By moving this work to a background task, your user interface can stay responsive. Instead of trying to send an email immediately, for example, you could add the request to a queue and return a response to the user immediately. The background task can consume that queue in the background at its leisure.

In ASP.NET Core, you can use the IHostedService interface to run tasks in the background. Classes that implement this interface are started when your application starts, shortly before your application starts handling requests, and they are stopped shortly before your application is stopped. This provides the hooks you need to perform most tasks.

NOTE Even the default ASP.NET Core server, Kestrel, runs as an IHostedService. In one sense, almost everything in an ASP.NET Core app is a background task.

In this section you’ll see how to use the IHostedService to create a background task that runs continuously throughout the lifetime of your app. This could be used for many things, but in the next section you’ll see how to use it to populate a simple cache. You’ll also learn how to use services with a scoped lifetime in your singleton background tasks by managing container scopes yourself.

34.1.1 Running background tasks on a timer‌

In this section you’ll learn how to create a background task that runs periodically on a timer throughout the lifetime of your app. Running background tasks can be useful for many reasons, such as scheduling work to be performed later or performing work in advance.

In chapter 33 we used IHttpClientFactory and a typed client to call a third-party service to retrieve the current exchange rates between various currencies and returned them in an API endpoint, as shown in the following listing.

Listing 34.1 Using a typed client to return exchange rates from a third-party service

app.MapGet("/", async (ExchangeRatesClient ratesClient) => ❶
await ratesClient.GetLatestRatesAsync()); ❷

❶ A typed client created using IHttpClientFactory is injected using dependency injection (DI).
❷ The typed client is used to retrieve exchange rates from the remote API and return them.

A simple optimization for this code might be to cache the exchange rate values for a period. There are multiple ways you could implement that, but in this section we’ll use a simple cache that preemptively fetches the exchange rates in the background, as shown in figure 34.1. The API endpoint simply reads from the cache; it never has to make HTTP calls itself, so it remains fast.


Figure 34.1 You can use a background task to cache the results from a third-party API on a schedule. The API controller can then read directly from the cache instead of calling the third-party API itself. This reduces the latency of requests to your API controller while ensuring that the data remains fresh.

NOTE An alternative approach might add caching to your strongly typed client, ExchangeRatesClient. The downside is that when you need to update the rates, you will have to perform the request immediately, making the overall response slower. Using a background service keeps your API endpoint consistently fast.

You can implement a background task using the IHostedService interface. This consists of two methods:

public interface IHostedService
{
    Task StartAsync(CancellationToken cancellationToken);
    Task StopAsync(CancellationToken cancellationToken);
}

There are subtleties to implementing the interface correctly. In particular, the StartAsync() method, although asynchronous, runs inline as part of your application startup. Background tasks that are expected to run for the lifetime of your application must return a Task immediately and schedule background work on a different thread.

WARNING Calling await in the IHostedService.StartAsync() method blocks your application from starting until the method completes. This can be useful in some cases, when you don’t want the application to start handling requests until the IHostedService task has completed, but that’s often not the desired behavior for background tasks.
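To illustrate the point, the following sketch implements IHostedService directly (QueueProcessorService is a hypothetical example, not code from the book): StartAsync hands the work off to Task.Run and returns immediately, and StopAsync signals cancellation and waits for the loop to finish.

public class QueueProcessorService : IHostedService
{
    private readonly CancellationTokenSource _cts = new CancellationTokenSource();
    private Task _backgroundTask = Task.CompletedTask;

    public Task StartAsync(CancellationToken cancellationToken)
    {
        _backgroundTask = Task.Run(() => RunAsync(_cts.Token));  // Schedule the loop; don't await it
        return Task.CompletedTask;                               // Startup continues immediately
    }

    public async Task StopAsync(CancellationToken cancellationToken)
    {
        _cts.Cancel();                                           // Ask the loop to stop
        await _backgroundTask;                                   // Wait for it to finish
    }

    private async Task RunAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            // Do the background work here (process a queue item, send an email, etc.)
            try
            {
                await Task.Delay(TimeSpan.FromSeconds(30), token);
            }
            catch (OperationCanceledException)
            {
                // Shutting down; exit the loop
            }
        }
    }
}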

To make it easier to create background services using best-practice patterns, ASP.NET Core provides the abstract base class BackgroundService, which implements IHostedService and is designed to be used for long-running tasks. To create a background task, you must override a single method of this class, ExecuteAsync(). You’re free to use async-await inside this method, and you can keep running the method for the lifetime of your app.

The following listing shows a background service that fetches the latest exchange rates using a typed client and saves them in a cache, as you saw in figure 34.1. The ExecuteAsync() method keeps looping and updating the cache until the CancellationToken passed as an argument indicates that the application is shutting down.

Listing 34.2 Implementing a BackgroundService that calls a remote HTTP API

public class ExchangeRatesHostedService : BackgroundService ❶
{
    private readonly IServiceProvider _provider; ❷
    private readonly ExchangeRatesCache _cache; ❸
    public ExchangeRatesHostedService(
        IServiceProvider provider, ExchangeRatesCache cache)
    {
        _provider = provider;
        _cache = cache;
    }
    protected override async Task ExecuteAsync( ❹
        CancellationToken stoppingToken) ❺
    {
        while (!stoppingToken.IsCancellationRequested) ❻
        {
            var client = _provider ❼
                .GetRequiredService<ExchangeRatesClient>(); ❼
            string rates = await client.GetLatestRatesAsync(); ❽
            _cache.SetRates(rates); ❾
            await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken); ❿
        }
    }
}

❶ Derives from BackgroundService to create a task that runs for the lifetime of your app
❷ Injects an IServiceProvider so you can create instances of the typed client
❸ A simple cache for exchange rates
❹ You must override ExecuteAsync to set the service’s behavior.
❺ The CancellationToken passed as an argument is triggered when the application shuts down.
❻ Keeps looping until the application shuts down
❼ Creates a new instance of the typed client so that the HttpClient is short-lived
❽ Fetches the latest rates from the remote API
❾ Stores the rates in the cache
❿ Waits for 5 minutes (or for the application to shut down) before updating the cache

The ExchangeRatesCache in listing 34.2 is a simple singleton that stores the latest rates. It must be thread-safe, as it is accessed concurrently by your API endpoint. You can see a simple implementation in the source code for this chapter.
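As a rough sketch of what such a cache might look like (the chapter's actual source code may differ), a simple lock is enough to make the reads and writes thread-safe; the async-named getter simply matches the endpoint call shown later in this section, and the "{}" default is an assumed placeholder until the first update.

public class ExchangeRatesCache
{
    private readonly object _lock = new object();
    private string _latestRates = "{}";              // Assumed placeholder until the first update

    public void SetRates(string rates)
    {
        lock (_lock)
        {
            _latestRates = rates;                    // Called by the background service
        }
    }

    public Task<string> GetLatestRatesAsync()
    {
        lock (_lock)
        {
            return Task.FromResult(_latestRates);    // Called by the API endpoint
        }
    }
}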

To register your background service with the dependency injection (DI) container, use the AddHostedService() extension method in Program.cs, which registers the service using a singleton lifetime, as shown in the following listing.‌

Listing 34.3 Registering an IHostedService with the DI container

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient<ExchangeRatesClient>(); ❶
builder.Services.AddSingleton<ExchangeRatesCache>(); ❷
builder.Services.AddHostedService<ExchangeRatesHostedService>(); ❸

❶ Registers the typed client as before
❷ Adds the cache object as a singleton so it is shared throughout your app
❸ Registers ExchangeRatesHostedService as an IHostedService

By using a background service to fetch the exchange rates, your API endpoint becomes even simpler. Instead of fetching the latest rates itself, it returns the value from the cache, which is kept up to date by the background service:

app.MapGet("/", (ExchangeRatesCache cache) => 
cache.GetLatestRatesAsync());

This approach to caching works to simplify the API, but you may have noticed a potential risk: if the API receives a request before the background service has successfully updated the rates, the API will fail to return any rates.

This may be OK, but you could take another approach. As well as updating the rates periodically, you could use the StartAsync method to block app startup until the rates have successfully updated. That way, you guarantee that the rates are available before the app starts handling requests, so the API will always return successfully. Listing 34.4 shows how you could update listing 34.2 to block startup until the rates have been updated while still updating periodically in the background.

Listing 34.4 Implementing StartAsync to block startup in an IHostedService

public class ExchangeRatesHostedService : BackgroundService
{
    private readonly IServiceProvider _provider;
    private readonly ExchangeRatesCache _cache;
    public ExchangeRatesHostedService(
        IServiceProvider provider, ExchangeRatesCache cache)
    {
        _provider = provider;
        _cache = cache;
    }
    public override async Task StartAsync( ❶
        CancellationToken cancellationToken) ❶
    {
        var success = false;
        while (!success && !cancellationToken.IsCancellationRequested) ❷
        { ❷
            success = await TryUpdateRatesAsync(); ❷
        } ❷
        await base.StartAsync(cancellationToken); ❸
    }
    protected override async Task ExecuteAsync(
        CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken);
            await TryUpdateRatesAsync();
        }
    }
    private async Task<bool> TryUpdateRatesAsync()
    {
        try
        {
            var client = _provider
                .GetRequiredService<ExchangeRatesClient>();
            string rates = await client.GetLatestRatesAsync();
            _cache.SetRates(rates);
            return true;
        }
        catch (Exception ex)
        {
            return false;
        }
    }
}

❶ The StartAsync method runs on start, before the app starts handling requests.
❷ Keeps trying to update the rates until it succeeds
❸ Once the update succeeds, starts the background process

WARNING The downside to listing 34.4 is that if there’s a problem retrieving the rates, the app won’t ever start up and start listening for requests. Whether you consider that a bug or a feature will depend on your deployment process! Many orchestrators, for example, will use rolling updates, which ensure that a new deployment is listening for requests before shutting down the old deployment instances.

One slightly messy aspect of both listings 34.2 and 34.4 is that I used the Service Locator pattern to retrieve the typed client. This isn’t ideal, but you shouldn’t inject typed clients into background services directly. Typed clients are designed to be short-lived to ensure that you take advantage of the HttpClient handler rotation, as described in chapter 21. By contrast, background services are singletons that live for the lifetime of your application.

TIP If you wish, you can avoid the Service Locator pattern in listing 34.2 by using the factory pattern described in Steve Gordon’s post titled “IHttpClientFactory Patterns: Using Typed Clients from Singleton Services”: http://mng.bz/opDZ.

The need for short-lived services leads to another common question: how can you use scoped services in a background service?

34.1.2 Using scoped services in background tasks‌

Background services that implement IHostedService are created once when your application starts. That means they are by necessity singletons, as there will be only a single instance of the class.

That leads to a problem if you need to use services registered with a scoped lifetime. Any services you inject into the constructor of your singleton IHostedService must themselves be registered as singletons. Does that mean there’s no way to use scoped dependencies in a background service?

NOTE As I discussed in chapter 9, the dependencies of a service must always have a lifetime that’s the same as or longer than that of the service itself, to avoid captive dependencies.

Imagine a slight variation on the caching example from section 34.1.1. Instead of storing the exchange rates in a singleton cache object, you want to save the exchange rates to a database so you can look up the historic rates.

Most database providers, including EF Core’s DbContext, register their services with scoped lifetimes. That means you need to access the scoped DbContext from inside the singleton ExchangeRatesHostedService, which precludes injecting the DbContext with constructor injection. The solution is to create a new container scope every time you update the exchange rates.

In typical ASP.NET Core applications, the framework creates a new container scope every time a new request is received, immediately before the middleware pipeline executes. All the services that are used in that request are fetched from the scoped container. When the request ends, the scoped container is disposed, along with any of the IDisposable scoped and transient services that were obtained from it. In a background service, however, there are no requests, so no container scopes are created. The solution is to create your own.

You can create a new container scope anywhere you have access to an IServiceProvider by calling IServiceProvider.CreateScope(). This creates a scoped container, which you can use to safely retrieve scoped and transient services.

WARNING Always make sure to dispose of the IServiceScope returned by CreateScope() when you’re finished with it, typically with a using statement. This disposes of any IDisposable services that were created by the scoped container and prevents memory leaks.‌

The following listing shows a version of the ExchangeRatesHostedService that stores the latest exchange rates as an EF Core entity in the database. It creates a new scope for each iteration of the while loop and retrieves the scoped AppDbContext from the scoped container.

Listing 34.5 Consuming scoped services from an IHostedService

public class ExchangeRatesHostedService : BackgroundService ❶
{
    private readonly IServiceProvider _provider; ❷
    public ExchangeRatesHostedService(IServiceProvider provider) ❷
    {
        _provider = provider;
    }
    protected override async Task ExecuteAsync(
        CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            using (IServiceScope scope = _provider.CreateScope()) ❸
            {
                var scopedProvider = scope.ServiceProvider; ❹
                var client = scopedProvider ❺
                    .GetRequiredService<ExchangeRatesClient>(); ❺
                var context = scopedProvider ❻
                    .GetRequiredService<AppDbContext>(); ❻
                var rates = await client.GetLatestRatesAsync(); ❻
                context.Add(rates); ❻
                await context.SaveChangesAsync(); ❻
            } ❼
            await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken); ❽
        }
    }
}

❶ The BackgroundService is registered as a singleton.
❷ The injected IServiceProvider can be used to retrieve singleton services or to
create scopes.
❸ Creates a new scope using the root IServiceProvider
❹ The scope exposes an IServiceProvider that can be used to retrieve scoped
components.
❺ Retrieves the scoped services from the container
❻ Fetches the latest rates, and saves using EF Core
❼ Disposes of the scope with the using statement
❽ Waits for the next iteration. A new scope is created on the next iteration.

Creating scopes like this is a general solution whenever you need to access scoped services and you’re not running in the context of a request. For example, if you need to access scoped or transient services in Program.cs, you can create a new scope by calling WebApplication.Services.CreateScope(). You can then retrieve the services you need, do your work, and dispose the scope to clean up the services.
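As a quick illustration, here is a minimal sketch of that approach in Program.cs, assuming the AppDbContext used elsewhere in this chapter; the scope is used to apply EF Core migrations at startup, but the work you do inside the scope is up to you.

using Microsoft.EntityFrameworkCore;

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseSqlite(builder.Configuration.GetConnectionString("SqlLiteConnection")));
WebApplication app = builder.Build();

// Create a scope so we can resolve the scoped AppDbContext outside a request
using (IServiceScope scope = app.Services.CreateScope())
{
    var context = scope.ServiceProvider.GetRequiredService<AppDbContext>();
    await context.Database.MigrateAsync(); // for example, apply pending migrations
}

app.Run();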

Another prime example is when you’re injecting services into an OptionsBuilder instance, as you saw in chapter 31. You can take exactly the same approach—create a new scope—as shown in my blog post titled “The dangers and gotchas of using scoped services in OptionsBuilder”: http://mng.bz/4D6j.

TIP Using service location in this way always feels a bit convoluted. I typically try to extract the body of the task to a separate class and use service location to retrieve that class only. You can see an example of this approach in the “Consuming a scoped service in a background task” section of Microsoft’s “Background tasks with hosted services in ASP.NET Core” documentation: http://mng.bz/4ZER.

IHostedService is available in ASP.NET Core, so you can run background tasks in your Razor Pages and minimal API applications. However, sometimes all you want is the background task; you don’t need any UI. For those cases, you can use the generic IHost abstraction without having to bother with HTTP handling at all.‌

34.2 Creating headless worker services using IHost‌

In this section you’ll learn about worker services, which are ASP.NET Core applications that do not handle HTTP traffic. You’ll learn how to create a new worker service from a template and compare the generated code with a traditional ASP.NET Core application. You’ll also learn how to install the worker service as a Windows Service or as a systemd daemon in Linux.

In section 34.1 we cached exchange rates based on the assumption that they’re being consumed directly by the UI part of your application, such as by Razor Pages or minimal API endpoints. However, in the section 34.1.2 example we saved the rates to a database instead of storing them in-process. That raises the possibility that other applications with access to the database will use the rates too. Taking that one step further, could we create an application that is responsible only for caching these rates and has no UI at all?

Since .NET Core 3.0, ASP.NET Core has been built on top of a generic IHost implementation, as you learned in chapter 30. The IHost implementation provides features such as configuration, logging, and DI. ASP.NET Core adds the middleware pipeline for handling HTTP requests, as well as paradigms such as Razor Pages or Model-View-Controller (MVC) controllers on top of that, as shown in figure 34.2.

alt text

Figure 34.2 ASP.NET Core builds on the generic IHost implementation. IHost provides features such as configuration, DI, and logging. ASP.NET Core adds HTTP handling on top of that by way of the middleware pipeline, Razor Pages, and API controllers. If you don’t need HTTP handling, you can use IHost without the additional ASP.NET Core libraries to create a smaller application.

If your application doesn’t need to handle HTTP requests, there’s no real reason to use ASP.NET Core. You can use the IHost implementation alone to create an application that has a lower memory footprint, faster startup, and less surface area to worry about from a security perspective than a full ASP.NET Core application. .NET applications that use this approach are commonly called worker services or workers.‌

DEFINITION A worker is a .NET application that uses the generic IHost but doesn’t include the ASP.NET Core libraries for handling HTTP requests. They are sometimes called headless services, as they don’t expose a UI for you to interact with.

Workers are commonly used for running background tasks (IHostedService implementations) that don’t require a UI. These tasks could be for running batch jobs, running tasks repeatedly on a schedule, or handling events using some sort of message bus. In the next section we’ll create a worker for retrieving the latest exchange rates from a remote API instead of adding the background task to an ASP.NET Core application.

34.2.1 Creating a worker service from a template‌

In this section you’ll see how to create a basic worker service from a template. Visual Studio includes a template for creating worker services: choose File > New > Project > Worker Service. You can create a similar template using the .NET command-line interface (CLI) by running dotnet new worker. The resulting template consists of two C# files:‌

• Worker.cs—This simple BackgroundService implementation writes to the log every second, as shown in listing 34.6. You can replace this class with your own BackgroundService implementation, such as the example from listing 34.5.

• Program.cs—As in a typical ASP.NET Core application, this contains the entry point for your application, and it’s where the IHost is built and run. By contrast with a typical .NET 7 ASP.NET Core app, it uses the generic host instead of the minimal hosting WebApplication and WebApplicationBuilder.

Listing 34.6 Default BackgroundService implementation for worker service template

public class Worker : BackgroundService ❶
{
    private readonly ILogger<Worker> _logger;
    public Worker(ILogger<Worker> logger)
    {
        _logger = logger;
    }
    protected override async Task ExecuteAsync( ❷
        CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested) ❸
        {
            _logger.LogInformation(
                "Worker running at: {time}", DateTimeOffset.Now);
            await Task.Delay(1000, stoppingToken); ❹
        }
    }
}

❶ The Worker service derives from BackgroundService.
❷ ExecuteAsync starts the main execution loop for the service.
❸ When the app is shutting down, the CancellationToken is canceled.
❹ The service writes a log message every second until the app shuts down.

The most notable difference between the worker service template and an ASP.NET Core template is that Program.cs doesn’t use the WebApplicationBuilder and WebApplication APIs for minimal hosting. Instead, it uses the Host.CreateDefaultBuilder() helper method you learned about in chapter 30 to create an IHostBuilder.‌

NOTE .NET 8 will change the worker service template to use a new type, HostApplicationBuilder, which is analogous to WebApplicationBuilder.

HostApplicationBuilder brings the familiar script-like setup experience of minimal hosting to worker services, instead of using the callback-based approach of IHostBuilder.
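HostApplicationBuilder is already available via Host.CreateApplicationBuilder() in the .NET 7 Microsoft.Extensions.Hosting package, so you can opt into the new style today if you prefer it. The following is a minimal sketch of what the worker template’s Program.cs looks like with that approach, reusing the Worker class from listing 34.6.

HostApplicationBuilder builder = Host.CreateApplicationBuilder(args);
builder.Services.AddHostedService<Worker>(); // the Worker from listing 34.6
IHost host = builder.Build();
host.Run();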

You configure your DI services in Program.cs using the ConfigureServices() method on IHostBuilder, as shown in listing 34.7. This method takes a lambda method, which takes two arguments:

• A HostBuilderContext object. This context object exposes the IConfiguration for your app as the property Configuration, and the IHostEnvironment as the property HostingEnvironment.

• An IServiceCollection object. You add your services to this collection in the same way you add them to WebApplicationBuilder.Services in typical ASP.NET Core apps.

The following listing shows how to configure EF Core, the exchange rates typed client from chapter 33, and the background service that saves exchange rates to the database, as you saw in section 34.1.2. It uses C#’s top-level statements, so no static void Main entry point is shown.

Listing 34.7 Program.cs for a worker service that saves exchange rates using EF Core

using Microsoft.EntityFrameworkCore;
IHost host = Host.CreateDefaultBuilder(args) ❶
    .ConfigureServices((hostContext, services) => ❷
    {
        services.AddHttpClient<ExchangeRatesClient>(); ❸
        services.AddHostedService<ExchangeRatesHostedService>(); ❸
        var connectionString = hostContext.Configuration ❹
            .GetConnectionString("SqlLiteConnection"); ❹
        services.AddDbContext<AppDbContext>(options => ❺
            options.UseSqlite(connectionString)); ❺
    })
    .Build(); ❻
host.Run(); ❼

❶ Creates an IHostBuilder using the default helper
❷ Configures your DI services
❸ Adds services to the IServiceCollection
❹ IConfiguration can be accessed from the HostBuilderContext parameter.
❺ Adds services to the IServiceCollection
❻ Builds an IHost instance
❼ Runs the app and waits for shutdown

The changes in Program.cs to use the generic host instead of minimal hosting are the most obvious differences between a worker service and an ASP.NET Core app, but there are some important differences in the .csproj project file too. The following listing shows the project file for a worker service that uses IHttpClientFactory and EF Core, and highlights some of the differences with a similar ASP.NET Core application.

Listing 34.8 Project file for a worker service

<Project Sdk="Microsoft.NET.Sdk.Worker"> ❶
  <PropertyGroup>
    <TargetFramework>net7.0</TargetFramework> ❷
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <UserSecretsId>5088-4277-B226-DC0A790AB790</UserSecretsId> ❸
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.Extensions.Hosting" ❹
                      Version="7.0.0" /> ❹
    <PackageReference Include="Microsoft.Extensions.Http" ❺
                      Version="7.0.0" /> ❺
    <PackageReference Include="Microsoft.EntityFrameworkCore.Design" ❻
                      Version="7.0.0" PrivateAssets="All" /> ❻
    <PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" ❻
                      Version="7.0.0" /> ❻
  </ItemGroup>
</Project>

❶ Worker services use a different project software development kit (SDK) type
from ASP.NET Core apps.
❷ The target framework is the same as for ASP.NET Core apps.
❸ Worker services use configuration so they can use User Secrets, like ASP.NET
Core apps.
❹ All worker services must explicitly add this package. ASP.NET Core apps add it
implicitly.
❺ If you’re using IHttpClientFactory, you’ll need to add this package in worker
services.
❻ EF Core packages must be explicitly added, the same as for ASP.NET Core apps.

Some parts of the project file are the same for both worker services and ASP.NET Core apps:

• Both types of apps must specify a <TargetFramework>, such as net7.0 for .NET 7.

• Both types of apps use the configuration system, so you can use <UserSecretsId> to manage secrets in development, as discussed in chapter 10.

• Both types of apps must explicitly add references to the EF Core NuGet packages to use EF Core in the app.

There are also several differences in the project template:

• The <Project> element’s Sdk for a worker service should be Microsoft.NET.Sdk.Worker, whereas for an ASP.NET Core app it is Microsoft.NET.Sdk.Web. The Web SDK includes implicit references to additional packages that are not generally required in worker services.

• The worker service must include an explicit PackageReference for the Microsoft.Extensions.Hosting NuGet package. This package includes the generic IHost implementation used by worker services.

• You may need to include additional packages to reference the same functionality as in an ASP.NET Core app. An example is the Microsoft.Extensions.Http package (which provides IHttpClientFactory). This package is referenced implicitly in ASP.NET Core apps but must be explicitly referenced in worker services.

Running a worker service is the same as running an ASP.NET Core application: use dotnet run from the command line or press F5 in Visual Studio. A worker service is essentially a console application (as are ASP.NET Core applications), so they both run the same way.

You can run worker services in most of the same places you would run an ASP.NET Core application, though as a worker service doesn’t handle HTTP traffic, some options make more sense than others. In the next section we’ll look at two supported ways of running your application: as a Windows Service or as a Linux systemd daemon.

34.2.2 Running worker services in production‌

In this section you’ll learn how to run worker services in production. You’ll learn how to install a worker service as a Windows Service so that the operating system monitors and starts your worker service automatically. You’ll also see how to prepare your application for installation as a systemd daemon in Linux.

Worker services, like ASP.NET Core applications, are fundamentally .NET console applications. The difference is that they are typically intended to be long-running applications. The common approach for running these types of applications on Windows is to use a Windows Service or to use a systemd daemon in Linux.

NOTE It’s also common to run applications in the cloud using Docker containers or dedicated platform services like Azure App Service. The process for deploying a worker service to these managed services is typically identical to deploying an ASP.NET Core application.

Adding support for Windows Services or systemd is easy, thanks to two optional NuGet packages:

• Microsoft.Extensions.Hosting.Systemd—Adds support for running the application as a systemd application. To enable systemd integration, call UseSystemd() on your IHostBuilder in Program.cs.

• Microsoft.Extensions.Hosting.WindowsServices—Adds support for running the application as a Windows Service. To enable the integration, call UseWindowsService() on your IHostBuilder in Program.cs.

These packages each add a single extension method to IHostBuilder that enables the appropriate integration when running as a systemd daemon or as a Windows Service. The following listing shows how to enable Windows Service support.

Listing 34.9 Adding Windows Service support to a worker service

IHost host = Host.CreateDefaultBuilder(args) ❶
    .ConfigureServices((hostContext, services) => ❶
    { ❶
        services.AddHostedService<Worker>(); ❶
    }) ❶
    .UseWindowsService() ❷
    .Build();
host.Run();

❶ Configures your worker service as you would normally
❷ Adds support for running as a Windows Service.

During development, or if you run your application as a console app, UseWindowsService() does nothing; your application runs exactly the same as it would without the method call. However, your application can now be installed as a Windows Service, as your app now has the required integration hooks to work with the Windows Service system. The following basic steps show how to install a worker service app as a Windows Service:

  1. Add the Microsoft.Extensions.Hosting.WindowsServices NuGet package to your application by running dotnet add package Microsoft.Extensions.Hosting.WindowsServices in the project folder, by using the NuGet explorer in Visual Studio, or by adding a <PackageReference> to your .csproj file:
<PackageReference Include="Microsoft.Extensions.Hosting.WindowsServices"  Version="7.0.0" />
  2. Add a call to UseWindowsService() on your IHostBuilder, as shown in listing 34.9.

  3. Publish your application, as described in chapter 27. From the command line you could run dotnet publish -c Release from the project folder.

  4. Open a command prompt as Administrator and install the application using the Windows sc utility. You need to provide the path to your published project’s .exe file and a name to use for the service, such as My Test Service:

    sc create "My Test Service" BinPath="C:\path\to\MyService.exe"
  5. You can manage the service from the Services control panel in Windows, as shown in figure 34.3. Alternatively, to start the service from the command line run sc start "My Test Service", or to delete the service run sc delete "My Test Service".

After you complete the preceding steps, your worker service will be running as a Windows Service.

alt text

Figure 34.3 The Services control panel in Windows. After installing a worker service as a Windows Service using the sc utility, you can manage your worker service from here. This control panel allows you to control when the Windows Service starts and stops, the user account that the application runs under, and how to handle errors.

WARNING These steps are the bare minimum required to install a Windows Service. When running in production, you must consider many security aspects not covered here. For more details, see Microsoft’s “Host ASP.NET Core in a Windows Service” documentation: http://mng.bz/Xdy9.

An interesting point of note is that installing as a Windows Service or systemd daemon isn’t limited to worker services; you can install an ASP.NET Core application in the same way. Simply follow the preceding instructions, add the call to UseWindowsService(), and install your ASP.NET Core app. You can do this thanks to the fact that the ASP.NET Core functionality is built directly on top of the generic Host functionality.

NOTE Hosting an ASP.NET Core app as a Windows Service can be useful if you don’t want to (or can’t) use Internet Information Services (IIS). Some older versions of IIS don’t support gRPC, for example. By hosting as a Windows Service, your application can be restarted automatically if it crashes.
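For a minimal-hosting app the call goes on WebApplicationBuilder.Host. The following sketch follows the approach shown in Microsoft’s Windows Service documentation; setting the content root to AppContext.BaseDirectory is recommended there because a Windows Service does not start in the application’s folder.

WebApplicationBuilder builder = WebApplication.CreateBuilder(new WebApplicationOptions
{
    Args = args,
    ContentRootPath = AppContext.BaseDirectory, // services don't start in the app folder
});
builder.Host.UseWindowsService(); // no-op when running as a normal console app
WebApplication app = builder.Build();
app.MapGet("/", () => "Hello from a Windows Service!");
app.Run();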

You can follow a similar process to install a worker service as a systemd daemon by installing the Microsoft.Extensions.Hosting.Systemd package and calling UseSystemd() on your IHostBuilder. For more details on configuring systemd, see the “Monitor the app” section of Microsoft’s “Host ASP.NET Core on Linux with Nginx” documentation: http://mng.bz/yYDp.
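The setup mirrors listing 34.9; here is a minimal sketch with the systemd integration enabled instead, reusing the Worker class from the template.

IHost host = Host.CreateDefaultBuilder(args)
    .ConfigureServices((hostContext, services) =>
    {
        services.AddHostedService<Worker>();
    })
    .UseSystemd() // no-op unless the app is running as a systemd service
    .Build();
host.Run();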

So far in this chapter we’ve used IHostedService and the BackgroundService to run tasks that repeat on an interval, and you’ve seen how to install worker services as long-running applications by installing as a Windows Service.

In the final section of this chapter we’ll look at how you can create more advanced schedules for your background tasks, as well as how to add resiliency to your application by running multiple instances of your workers. To achieve that, we’ll use a mature third-party library, Quartz.NET.‌

34.3 Coordinating background tasks using Quartz.NET‌

In this section you’ll learn how to use the open-source scheduler library Quartz.NET. You’ll learn how to install and configure the library and how to add a background job to run on a schedule. You’ll also learn how to enable clustering for your applications so that you can run multiple instances of your worker service and share jobs among them.

All the background tasks you’ve seen so far in this chapter repeat a task on an interval indefinitely, from the moment the application starts. However, sometimes you want more control of this timing. Maybe you always want to run the application at 15 minutes past each hour. Or maybe you want to run a task only on the second Tuesday of the month at 3 a.m. Additionally, maybe you want to run multiple instances of your application for redundancy but ensure that only one of the services runs a task at any time.

It would certainly be possible to build all this extra functionality into your app yourself, but excellent libraries already provide all this functionality for you. Two of the most well known in the .NET space are Hangfire (https://www.hangfire.io) and Quartz.NET (https://www.quartz-scheduler.net).

Hangfire is an open-source library that also has a Pro subscription option. One of its most popular features is a dashboard UI that shows the state of all your running jobs, each task’s history, and any errors that have occurred.

Quartz.NET is completely open-source and essentially offers a beefed-up version of the BackgroundService functionality. It has extensive scheduling functionality, as well as support for running in a clustered environment, where multiple instances of your application coordinate to distribute the jobs among themselves.

NOTE Quartz.NET is based on a similar Java library called Quartz Scheduler. When looking for information on Quartz.NET, be sure you’re looking at the correct Quartz!

Quartz.NET is based on four main concepts:

• Jobs—The background tasks that implement your logic.

• Triggers—Control when a job runs based on a schedule, such as “every five minutes” or “every second Tuesday.” A job can have multiple triggers.

• Job factory—Responsible for creating instances of your jobs. Quartz.NET integrates with ASP.NET Core’s DI container, so you can use DI in your job classes.

• Scheduler—Keeps track of the triggers in your application, creates jobs using the job factory, and runs your jobs. The scheduler typically runs as an IHostedService for the lifetime of your app.

Background services vs. cron jobs

It’s common to use cron jobs to run tasks on a schedule in Linux, and Windows has similar functionality with Task Scheduler, used to periodically run an application or script file, which is typically a short-lived task.

By contrast, .NET apps using background services are designed to be long-lived, even if they are used only to run tasks on a schedule. This allows your application to do things like adjust its schedule as required or perform optimizations. In addition, being long-lived means your app doesn’t only have to run tasks on a schedule. It can respond to ad hoc events, such as events in a message queue.

Of course, if you don’t need those capabilities and would rather not have a long-running application, you can use .NET in combination with cron jobs. You could create a simple .NET console app that runs your task and then shuts down, and you could schedule it to execute periodically as a cron job. The choice is yours!
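For example, the entire Program.cs of such a console app could look like the following sketch; the crontab entry shown in the comment is purely illustrative.

// Hypothetical crontab entry that runs the app every 5 minutes:
//   */5 * * * *  dotnet /apps/rate-updater/RateUpdater.dll
Console.WriteLine($"Running scheduled task at {DateTimeOffset.Now}");
// ... do the work here (fetch rates, send emails, and so on) ...
Console.WriteLine("Done. Exiting until cron starts the app again.");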

In this section I show you how to install Quartz.NET and configure a background service to run on a schedule. Then I explain how to enable clustering so that you can run multiple instances of your application and distribute the jobs among them.

34.3.1 Installing Quartz.NET in an ASP.NET Core application‌

In this section I show how to install the Quartz.NET scheduler into an ASP.NET Core application. Quartz.NET runs in the background in the same way as the IHostedService implementations do. In fact, Quartz.NET uses the IHostedService abstractions to schedule and run jobs.

DEFINITION A job in Quartz.NET is a task to be executed that implements the IJob interface. It is where you define the logic that your tasks execute.‌

Quartz.NET can be installed in any .NET 7 application, so in this chapter I show how to install Quartz.NET in a worker service using the generic host rather than an ASP.NET Core app using minimal hosting. You’ll install the necessary dependencies and configure the Quartz.NET scheduler to run as a background service. In section 34.3.2 we’ll convert the exchange-rate downloader task from section 34.1 to a Quartz.NET IJob and configure triggers to run on a schedule.

NOTE The instructions in this section can be used to install Quartz.NET in either a worker service or a full ASP.NET Core application. The only difference is whether you use the generic host in Program.cs or WebApplicationBuilder.

To install Quartz.NET, follow these steps:

  1. Install the Quartz.Extensions.Hosting NuGet package in your project by running dotnet add package Quartz.Extensions.Hosting, by using the NuGet explorer in Visual Studio, or by adding a <PackageReference> element to your project file as follows:
<PackageReference Include="Quartz.Extensions.Hosting" Version="3.5.0" />
  2. Add the Quartz.NET IHostedService scheduler by calling AddQuartzHostedService() on the IServiceCollection in ConfigureServices (or on WebApplicationBuilder.Services) as follows. Set WaitForJobsToComplete=true so that your app will wait for any jobs in progress to finish when shutting down.
services.AddQuartzHostedService(q => q.WaitForJobsToComplete = true);
  3. Configure the required Quartz.NET services. The example in the following listing configures the Quartz.NET job factory to retrieve job implementations from the DI container and adds the required hosted service.

Listing 34.10 Configuring Quartz.NET

using Quartz;
IHost host = Host.CreateDefaultBuilder(args)
    .ConfigureServices((hostContext, services) => ❶
    {
        services.AddQuartz(q => ❷
        {
            q.UseMicrosoftDependencyInjectionJobFactory(); ❸
        });
        services.AddQuartzHostedService( ❹
            q => q.WaitForJobsToComplete = true); ❹
    })
    .Build();
host.Run();

❶ Adds Quartz.NET in ConfigureServices for worker services
❷ Registers Quartz.NET services with the DI container
❸ Configures Quartz.NET to load jobs from the DI container
❹ Adds the Quartz.NET IHostedService that runs the Quartz.NET scheduler

This configuration registers all Quartz.NET’s required components, so you can now run your application using dotnet run or by pressing F5 in Visual Studio. When your app starts, the Quartz.NET IHostedService starts its scheduler, as shown in figure 34.4. We haven’t configured any jobs to run yet, so the scheduler doesn’t have anything to schedule. The app will sit there, periodically checking whether any jobs have been added.

alt text

Figure 34.4 The Quartz.NET scheduler starts on app startup and logs its configuration. The default configuration stores the list of jobs and their schedules in memory and runs in a nonclustered state. In this example, you can see that no jobs or triggers have been registered, so the scheduler has nothing to schedule yet.

TIP Running your application before you’ve added any jobs is good practice. It lets you check that you have installed and configured Quartz.NET correctly before you get to more advanced configuration.

A job scheduler without any jobs to schedule isn’t a lot of use, so in the next section we’ll create a job and add a trigger for it to run on a timer.

34.3.2 Configuring a job to run on a schedule with Quartz.NET‌

In section 34.1 we created an IHostedService that downloads exchange rates from a remote service and saves the results to a database using EF Core. In this section you’ll see how you can create a similar Quartz.NET IJob and configure it to run on a schedule.

The following listing shows an implementation of IJob that downloads the latest exchange rates from a remote API using a typed client, ExchangeRatesClient. The results are then saved using an EF Core DbContext, AppDbContext.

Listing 34.11 A Quartz.NET IJob for downloading and saving exchange rates

public class UpdateExchangeRatesJob : IJob ❶
{
    private readonly ILogger<UpdateExchangeRatesJob> _logger; ❷
    private readonly ExchangeRatesClient _typedClient; ❷
    private readonly AppDbContext _dbContext; ❷
    public UpdateExchangeRatesJob( ❷
        ILogger<UpdateExchangeRatesJob> logger, ❷
        ExchangeRatesClient typedClient, ❷
        AppDbContext dbContext) ❷
    { ❷
        _logger = logger; ❷
        _typedClient = typedClient; ❷
        _dbContext = dbContext; ❷
    } ❷
    public async Task Execute(IJobExecutionContext context) ❸
    {
        _logger.LogInformation("Fetching latest rates");
        var latestRates = await _typedClient.GetLatestRatesAsync(); ❹
        _dbContext.Add(latestRates); ❺
        await _dbContext.SaveChangesAsync(); ❺
        _logger.LogInformation("Latest rates updated");
    }
}

❶ Quartz.NET jobs must implement the IJob interface.
❷ You can use standard DI to inject any dependencies.
❸ IJob requires you to implement a single asynchronous method, Execute.
❹ Downloads the rates from the remote API
❺ Saves the rates to the database

Functionally, the IJob in listing 34.11 is doing a similar task to the BackgroundService implementation in listing 34.5, with a few notable exceptions:

• The IJob defines only the task to execute; it doesn’t define timing information. In the BackgroundService implementation, we also had to control how often the task was executed.

• A new IJob instance is created every time the job is executed. By contrast, the BackgroundService implementation is created only once, and its ExecuteAsync method is invoked only once.

• We can inject scoped dependencies directly into the IJob implementation. To use scoped dependencies in the IHostedService implementation, we had to create our own scope manually and use service location to load dependencies. Quartz.NET takes care of that for us, allowing us to use pure constructor injection. Every time the job is executed, a new scope is created and used to create a new instance of the IJob.

The IJob defines what to execute, but it doesn’t define when to execute it. For that, Quartz.NET uses triggers. Triggers can define arbitrarily complex blocks of time during which a job should execute. For example, you can specify start and end times, how many times to repeat, and blocks of time when a job should or shouldn’t run (such as only 9 a.m. to 5 p.m. Monday to Friday).

In the following listing, we register the UpdateExchangeRatesJob with the DI container using the AddJob() method, and we provide a unique name to identify the job. We also configure a trigger that fires immediately and then every five minutes until the application shuts down.

Listing 34.12 Configuring a Quartz.NET IJob and trigger

using Quartz;
IHost host = Host.CreateDefaultBuilder(args)
    .ConfigureServices((hostContext, services) =>
    {
        services.AddQuartz(q =>
        {
            q.UseMicrosoftDependencyInjectionJobFactory();
            var jobKey = new JobKey("Update exchange rates"); ❶
            q.AddJob<UpdateExchangeRatesJob>(opts => ❷
                opts.WithIdentity(jobKey)); ❷
            q.AddTrigger(opts => opts ❸
                .ForJob(jobKey) ❸
                .WithIdentity(jobKey.Name + " trigger") ❹
                .StartNow() ❺
                .WithSimpleSchedule(x => x ❻
                    .WithInterval(TimeSpan.FromMinutes(5)) ❻
                    .RepeatForever())
            );
        });
        services.AddQuartzHostedService(
            q => q.WaitForJobsToComplete = true);
    })
    .Build();
host.Run();

❶ Creates a unique key for the job, used to associate it with a trigger
❷ Adds the IJob to the DI container and associates it with the job key
❸ Registers a trigger for the IJob via the job key
❹ Provides a unique name for the trigger for use in logging and in clustered
scenarios
❺ Fires the trigger as soon as the Quartz.NET scheduler runs on app startup
❻ Fires the trigger every 5 minutes until the app shuts down

Simple triggers like the schedule defined here are common, but you can also achieve more complex configurations using other schedules. The following configuration would set a trigger to fire every week on a Friday at 5:30 p.m.:

q.AddTrigger(opts => opts
    .ForJob(jobKey)
    .WithIdentity("Update exchange rates trigger")
    .WithSchedule(CronScheduleBuilder
        .WeeklyOnDayAndHourAndMinute(DayOfWeek.Friday, 17, 30)));

You can configure a wide array of time- and calendar-based triggers with Quartz.NET. You can also control how Quartz.NET handles missed triggers—that is, triggers that should have fired, but your app wasn’t running at the time. For a detailed description of the trigger configuration options and more examples, see the Quartz.NET documentation at https://www.quartz-scheduler.net/documentation.
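For instance, to run at 15 minutes past every hour you could use a Quartz cron expression instead of a simple schedule. The snippet below is a sketch that slots into the same AddQuartz() configuration as listing 34.12; the trigger identity is just an illustrative name, and note that Quartz cron expressions start with a seconds field.

q.AddTrigger(opts => opts
    .ForJob(jobKey)
    .WithIdentity(jobKey.Name + " hourly trigger")
    .WithCronSchedule("0 15 * ? * *")); // second 0, minute 15, every hour, every day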

TIP A common problem people run into with long-running jobs is that Quartz.NET keeps starting new instances of the job when a trigger fires, even though it’s already running. To avoid that, tell Quartz.NET to not start another instance by decorating your IJob implementation with the [DisallowConcurrentExecution] attribute.‌
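As a sketch, the attribute simply goes on the job class; the LongRunningJob below is a made-up example used only to show the placement.

using Quartz;

[DisallowConcurrentExecution] // Quartz skips the trigger if the previous run is still going
public class LongRunningJob : IJob
{
    public async Task Execute(IJobExecutionContext context)
    {
        // Simulate work that takes longer than the trigger interval
        await Task.Delay(TimeSpan.FromMinutes(10), context.CancellationToken);
    }
}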

The ability to configure advanced schedules, the simple use of DI in background tasks, and the separation of jobs from triggers are reasons enough for me to recommend Quartz.NET if you have anything more than the most basic background service needs. However, the real tipping point is when you need to scale your application for redundancy or performance reasons; that’s when Quartz.NET’s clustering capabilities make it shine.

34.3.3 Using clustering to add redundancy to your background tasks‌

In this section you’ll learn how to configure Quartz.NET to persist its configuration to a database. This is a necessary step in enabling clustering so that multiple instances of your application can coordinate to run your Quartz.NET jobs.

As your applications become more popular, you may need to run more instances of your app to handle the traffic they receive. If you keep your ASP.NET Core applications stateless, the process of scaling is relatively simple: the more applications you have, the more traffic you can handle, everything else being equal.

However, scaling applications that use IHostedService to run background tasks might not be as simple. For example, imagine your application includes the BackgroundService that we created in section 34.1.2, which saves exchange rates to the database every five minutes. When you’re running a single instance of your app, the task runs every five minutes as expected.

But what happens if you scale your application and run 10 instances of it? Every one of those applications will be running the BackgroundService, and they’ll all be updating every five minutes from the time each instance started!

One option would be to move the BackgroundService to a separate worker service app. You could then continue to scale your ASP.NET Core application to handle the traffic as required but deploy a single instance of the worker service. As only a single instance of the BackgroundService would be running, the exchange rates would be updated on the correct schedule again.

TIP Differing scaling requirements, as in this example, are one of the best reasons for splitting bigger apps into smaller microservices. Breaking up an app like this has a maintenance overhead, however, so think about the tradeoffs if you take this route. For more on this tradeoff, I recommend Microservices in .NET Core, 2nd ed., by Christian Horsdal Gammelgaard (Manning, 2021).‌

However, if you take this route, you add a hard limitation that you can have only a single instance of your worker service. If you need to run more instances of your worker service to handle additional load, you’ll be stuck.

An alternative option to enforcing a single service is using clustering, which allows you to run multiple instances of your application, with tasks distributed among the instances. Quartz.NET achieves clustering by using a database as a backing store. When a trigger indicates that a job needs to execute, the Quartz.NET schedulers in each app attempt to obtain a lock to execute the job, as shown in figure 34.5. Only a single app can be successful, ensuring that a single app handles the trigger for the IJob.

alt text

Figure 34.5 Using clustering with Quartz.NET allows horizontal scaling. Quartz.NET uses a database as a backing store, ensuring that only a single instance of the application handles a trigger at a time. This makes it possible to run multiple instances of your application to meet scalability requirements.

Quartz.NET relies on a persistent database for its clustering functionality. Quartz.NET stores descriptions of the jobs and triggers in the database, including when the trigger last fired. The locking features of the database ensure that only a single application can execute a task at a time.

TIP You can also enable persistence without enabling clustering, allowing the Quartz.NET scheduler to catch up with missed triggers.

Listing 34.13 shows how to enable persistence for Quartz.NET and how to enable clustering. This example stores data in a Microsoft SQL Server (or LocalDB) server, but Quartz.NET supports many other databases. This example uses the recommended values for enabling clustering and persistence as outlined in the documentation.

TIP The Quartz.NET documentation discusses many configuration setting controls for persistence. See the “Job Stores” documentation at http://mng.bz/PP0R. To use the recommended JSON serializer for persistence, you must also install the Quartz.Serialization.Json NuGet package.

Listing 34.13 Enabling persistence and clustering for Quartz.NET

using Quartz;
IHost host = Host.CreateDefaultBuilder(args)
    .ConfigureServices((hostContext, services) => ❶
    {
        var connectionString = hostContext.Configuration ❷
            .GetConnectionString("DefaultConnection"); ❷
        services.AddQuartz(q =>
        {
            q.SchedulerId = "AUTO"; ❸
            q.UseMicrosoftDependencyInjectionJobFactory();
            q.UsePersistentStore(s => ❹
            {
                s.UseSqlServer(connectionString); ❺
                s.UseClustering(); ❻
                s.UseProperties = true; ❼
                s.UseJsonSerializer(); ❼
            });
            var jobKey = new JobKey("Update_exchange_rates");
            q.AddJob<UpdateExchangeRatesJob>(opts =>
                opts.WithIdentity(jobKey));
            q.AddTrigger(opts => opts
                .ForJob(jobKey)
                .WithIdentity(jobKey.Name + " trigger")
                .StartNow()
                .WithSimpleSchedule(x => x
                    .WithInterval(TimeSpan.FromMinutes(5))
                    .RepeatForever())
            );
        });
        services.AddQuartzHostedService(
            q => q.WaitForJobsToComplete = true);
    })
    .Build();
host.Run();

❶ Configuration is identical for both ASP.NET Core apps and worker services.
❷ Obtains the connection string for your database from configuration
❸ Each instance of your app must have a unique SchedulerId. AUTO takes care of this for you.
❹ Enables database persistence for the Quartz.NET scheduler data
❺ Stores the scheduler data in a SQL Server (or LocalDb) database
❻ Enables clustering between multiple instances of your app
❼ Adds the recommended configuration for job persistence

With this configuration, Quartz.NET stores a list of jobs and triggers in the database, and uses database locking to ensure that only a single instance of your app handles a trigger and runs the associated job.

WARNING SQLite doesn’t support the database locking primitives required for clustering. You can use SQLite as a persistence store, but you won’t be able to use clustering. Quartz.NET stores data in your database, but it doesn’t attempt to create the tables it uses itself. Instead, you must add the required tables manually. Quartz.NET provides SQL scripts on GitHub for all the supported database server types, including SQL Server, SQLite, PostgreSQL, MySQL, and many more; see http://mng.bz/JDeZ.

TIP If you’re using EF Core migrations to manage your database, I suggest using them even for ad hoc scripts like these. In the code sample associated with this chapter, you can see a migration that creates the required tables using the Quartz.NET scripts.
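If you take that approach, one option is a migration that simply executes the Quartz.NET script via MigrationBuilder.Sql(). A rough sketch follows; the script file name and location are assumptions, so substitute the script for your database server from the Quartz.NET repository.

using Microsoft.EntityFrameworkCore.Migrations;

public partial class AddQuartzTables : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Hypothetical path: the SQL Server script downloaded from the Quartz.NET repo
        string script = File.ReadAllText("Scripts/quartz_tables_sqlserver.sql");
        migrationBuilder.Sql(script);
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        // Dropping the Quartz.NET tables is omitted for brevity
    }
}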

Clustering is one of those advanced features that is necessary only as you start to scale your application, but it’s an important tool to have in your belt. It gives you the ability to safely scale your services as you add more jobs. There are some important things to bear in mind, however, so I suggest reading the warnings in the Quartz.NET documentation at http://mng.bz/aozj.

That brings us to the end of this chapter on background services. In the final chapters of this book I describe an important aspect of web development that sometimes, despite the best intentions, is left until last: testing. You’ll learn how to write simple unit tests for your classes, design for testability, and build integration tests that test your whole app.‌

Summary

You can use the IHostedService interface to run tasks in the background of your ASP.NET Core apps. Call AddHostedService<T>() to add an implementation T to the DI container. IHostedService is useful for implementing long-running tasks.

Typically, you should derive from BackgroundService to create an IHostedService, as this implements best practices required for long-running tasks. You must override a single method, ExecuteAsync, that is called when your app starts. You should run your tasks within this method until the provided CancellationToken indicates that the app is shutting down.

You can create DI scopes manually using IServiceProvider.CreateScope(). This is useful for accessing scoped lifetime services from within a singleton lifetime component, such as from an IHostedService implementation.

A worker service is a .NET Core application that uses the generic IHost but doesn’t include the ASP.NET Core libraries for handling HTTP requests. It generally has a smaller memory and disk footprint than an ASP.NET Core equivalent.

Worker services use the same logging, configuration, and DI systems as ASP.NET Core apps. However, they don’t use the WebApplicationBuilder minimal hosting APIs, so you must configure your app using the generic host APIs. For example, configure your DI services using IHostBuilder.ConfigureServices().

To run a worker service or ASP.NET Core app as a Windows Service, add the Microsoft.Extensions.Hosting.WindowsServices NuGet package, and call UseWindowsService() on IHostBuilder. You can install and manage your app with the Windows sc utility.

To install a Linux systemd daemon, add the Microsoft.Extensions.Hosting.Systemd NuGet package and call UseSystemd() on IHostBuilder. Both the Systemd and Windows Service integration packages do nothing when running the application as a console app, which is great for testing your app. You can even add both packages so that your app can run as a service in both Windows and Linux.

Quartz.NET runs jobs based on triggers using advanced schedules. It builds on the IHostedService implementation to add extra features and scalability. You can install Quartz by adding the Quartz.Extensions.Hosting NuGet package and calling AddQuartz() and AddQuartzHostedService() in ConfigureServices().

You can create a Quartz.NET job by implementing the IJob interface. This requires implementing a single method, Execute. You can enable DI for the job by calling UseMicrosoftDependencyInjectionJobFactory() in AddQuartz(). This allows you to directly inject scoped (or transient) services into your job without having to create your own scopes.

You must register your job, T, with DI by calling AddJob<T>() and providing a JobKey name for the job. You can add an associated trigger by calling AddTrigger() and providing the JobKey. Triggers have a wide variety of schedules available for controlling when a job should be executed.

By default, triggers spawn new instances of a job as often as necessary. For long-running jobs scheduled with a short interval, that will result in many instances of your job running concurrently. If you want a trigger to execute a job only when an instance is not already running, decorate your job with the [DisallowConcurrentExecution] attribute.

Quartz.NET supports database persistence for storing when triggers have executed. To enable persistence, call UsePersistentStore() in your AddQuartz() configuration method, and configure a database, using UseSqlServer() for example. With persistence, Quartz.NET can persist details about jobs and triggers between application restarts.

Enabling persistence also allows you to use clustering. Clustering enables multiple apps using Quartz.NET to coordinate, so that jobs are spread across multiple schedulers. To enable clustering, first enable database persistence and then call UseClustering(). SQLite does not support clustering due to limitations of the database itself.

ASP.NET Core in Action 33 Calling remote APIs with IHttpClientFactory

33 Calling remote APIs with IHttpClientFactory‌

This chapter covers
• Seeing problems caused by using HttpClient incorrectly to call HTTP APIs

• Using IHttpClientFactory to manage HttpClient lifetimes

• Encapsulating configuration and handling transient errors with IHttpClientFactory

So far in this book we’ve focused on creating web pages and exposing APIs. Whether that’s customers browsing a Razor Pages application or client-side SPAs and mobile apps consuming your APIs, we’ve been writing the APIs for others to consume.

However, it’s common for your application to interact with third-party services by consuming their APIs as well as your own API apps. For example, an e-commerce site needs to take payments, send email and Short Message Service (SMS) messages, and retrieve exchange rates from a third-party service. The most common approach for interacting with services is using HTTP. So far in this book we’ve looked at how you can expose HTTP services, using minimal APIs and API controllers, but we haven’t looked at how you can consume HTTP services.

In section 33.1 you’ll learn the best way to interact with HTTP services using HttpClient. If you have any experience with C#, it’s likely that you’ve used this class to send HTTP requests, but there are two gotchas to think about; otherwise, your app could run into difficulties.

IHttpClientFactory was introduced in .NET Core 2.1; it makes creating and managing HttpClient instances easier and avoids the common pitfalls. In section 33.2 you’ll learn how IHttpClientFactory achieves this by managing the HttpClient handler pipeline. You’ll learn how to create named clients to centralize the configuration for calling remote APIs and how to use typed clients to encapsulate the remote service’s behavior.‌

Network glitches are a fact of life when you’re working with HTTP APIs, so it’s important for you to handle them gracefully. In section 33.3 you’ll learn how to use the open- source resilience and fault-tolerance library Polly to handle common transient errors using simple retries, with the possibility for more complex policies.

Finally, in section 33.4 you’ll see how you can create your own custom HttpMessageHandler handlers managed by IHttpClientFactory. You can use custom handlers to implement cross-cutting concerns such as logging, metrics, and authentication, whenever a function needs to execute every time you call an HTTP API. You’ll also see how to create a handler that automatically adds an API key to all outgoing requests to an API.

To misquote John Donne, no app is an island, and the most common way of interacting with other apps and services is over HTTP. In .NET, that means using HttpClient.

33.1 Calling HTTP APIs: The problem with HttpClient‌

In this section you’ll learn how to use HttpClient to call HTTP APIs. I’ll focus on two common pitfalls in using HttpClient—socket exhaustion and DNS rotation problems—and show why they occur. In section 33.2 you’ll see how to avoid these problems by using IHttpClientFactory.

It’s common for an application to interact with other services to fulfill its duty. Take a typical e-commerce store, for example. In even the most basic version of the application, you will likely need to send emails and take payments using credit cards or other services. You could try to build that functionality yourself, but it probably wouldn’t be worth the effort.

Instead, it makes far more sense to delegate those responsibilities to third-party services that specialize in that functionality. Whichever service you use, they will almost certainly expose an HTTP API for interacting with the service. For many services, that will be the only way.

RESTful HTTP vs. gRPC vs. GraphQL
There are many ways to interact with third-party services, but HTTP RESTful services are still the king, decades after HTTP was first proposed. Every platform and programming language you can think of includes support for making HTTP requests and handling responses. That ubiquity makes it the go-to option for most services.

Despite their ubiquity, RESTful services are not perfect. They are relatively verbose, which means that more data ends up being sent and received than with some other protocols. It can also be difficult to evolve RESTful APIs after you have deployed them. These limitations have spurred interest in two alternative protocols in particular: gRPC and GraphQL.

gRPC is intended to be an efficient mechanism for server-to-server communication. It builds on top of HTTP/2 but typically provides much higher performance than traditional RESTful APIs. gRPC support was added in .NET Core 3.0 and is receiving many performance and feature updates. For a comprehensive view of .NET support, see the documentation at https://learn.microsoft.com/aspnet/core/grpc.

Whereas gRPC works best with server-to-server communication and nonbrowser clients, GraphQL is best used to provide evolvable APIs to mobile and single-page application (SPA) apps. It has become popular among frontend developers, as it can reduce the friction involved in deploying and using new APIs. For details, I recommend GraphQL in Action, by Samer Buna (Manning, 2021).‌‌

Despite the benefits and improvements gRPC and GraphQL can bring, RESTful HTTP services are here to stay for the foreseeable future, so it’s worth making sure that you understand how to use them with HttpClient.

In .NET we use the HttpClient class for calling HTTP APIs. You can use it to make HTTP calls to APIs, providing all the headers and body to send in a request, and reading the response headers and data you get back. Unfortunately, it’s hard to use correctly, and even when you do, it has limitations.

The source of the difficulty with HttpClient stems partly from the fact that it implements the IDisposable interface. In general, when you use a class that implements IDisposable, you should wrap the class with a using statement whenever you create a new instance to ensure that unmanaged resources used by the type are cleaned up when the instance is no longer needed, as in this example:‌

using (var myInstance = new MyDisposableClass())
{
// use myInstance
}

TIP C# also includes a simplified version of the using statement called a using declaration, which omits the curly braces, as shown in listing 33.1. You can read more about the syntax at http://mng.bz/nW12.

That might lead you to think that the correct way to create an HttpClient is shown in listing 33.1. This listing shows a simple example where a minimal API endpoint calls an external API to fetch the latest currency exchange rates, and returns them as the response.

alt text

Figure 33.1 To create a connection, a client selects a random port and connects to the HTTP server’s port and IP address. The client can then send HTTP requests to the server.

WARNING Do not use HttpClient as it’s shown in listing 33.1. Using it this way could cause your application to become unstable, as you’ll see shortly.

Listing 33.1 The incorrect way to use HttpClient

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);
WebApplication app = builder.Build();
app.MapGet("/", async () =>
{
    using HttpClient client = new HttpClient(); ❶
    client.BaseAddress = new Uri("https://example.com/rates/"); ❷
    var response = await client.GetAsync("latest"); ❸
    response.EnsureSuccessStatusCode(); ❹
    return await response.Content.ReadAsStringAsync(); ❺
});
app.Run();

❶ Wrapping the HttpClient in a using declaration means it is disposed at the end of the scope.
❷ Configures the base URL used to make requests using the HttpClient
❸ Makes a GET request to the exchange rates API
❹ Throws an exception if the request was not successful
❺ Reads the result as a string and returns it from the action method

HttpClient is special, and you shouldn’t use it like this! The problem is due primarily to the way the underlying protocol implementation works. Whenever your computer needs to send a request to an HTTP server, you must create a connection between your computer and the server. To create a connection, your computer opens a port, identified by a random number between 0 and 65,535, and connects to the HTTP server’s IP address and port, as shown in figure 33.1. Your computer can then send HTTP requests to the server.

DEFINITION The combination of IP address and port is called a socket.

The main problem with the using statement/declaration and HttpClient is that it can lead to a problem called socket exhaustion, illustrated in figure 33.2. This happens when all the ports on your computer have been used up making other HTTP connections, so your computer can’t make any more requests. At that point, your application will hang, waiting for a socket to become free—a bad experience!‌

alt text

Figure 33.2 Disposing of HttpClient can lead to socket exhaustion. Each new connection requires the operating system to assign a new socket, and closing a socket doesn’t make it available until the TIME_WAIT period of 240 seconds has elapsed. Eventually you can run out of sockets, at which point you can’t make any outgoing HTTP requests.

Given that I said there are 65,536 different port numbers, you might think that’s an unlikely situation. It’s true that you will likely run into this problem only on a server that is making a lot of connections, but it’s not as rare as you might think.

The problem is that when you dispose of an HttpClient, it doesn’t close the socket immediately. The design of the TCP/IP protocol used for HTTP requests means that after trying to close a connection, the connection moves to a state called TIME_WAIT. The connection then waits for a specific period (240 seconds in Windows) before closing the socket.

Until the TIME_WAIT period has elapsed, you can’t reuse the socket in another HttpClient to make HTTP requests. If you’re making a lot of requests, that can quickly lead to socket exhaustion, as shown in figure 33.2.

TIP You can view the state of active ports/sockets in Windows and Linux by running the command netstat from the command line or a terminal window. Be sure to run netstat -n in Windows to skip Domain Name System (DNS) resolution.

Instead of disposing of HttpClient, the general advice (before the introduction of IHttpClientFactory) was to use a single instance of HttpClient, as shown in the following listing.

Listing 33.2 Using a singleton HttpClient to avoid socket exhaustion

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);
WebApplication app = builder.Build();
HttpClient client = new HttpClient ❶
{ ❶
    BaseAddress = new Uri("https://example.com/rates/"), ❶
}; ❶
app.MapGet("/", async () =>
{
    var response = await client.GetAsync("latest"); ❷
    response.EnsureSuccessStatusCode();
    return await response.Content.ReadAsStringAsync();
});
app.Run();

❶ A single instance of HttpClient is created for the lifetime of the app.
❷ Multiple requests use the same instance of HttpClient.

This solves the problem of socket exhaustion. As you’re not disposing of the HttpClient, the socket is not disposed of, so you can reuse the same port for multiple requests. No matter how many times you call the API in the preceding example, you will use only a single socket. Problem solved!

Unfortunately, this introduces a different problem, primarily related to DNS. DNS is how the friendly hostnames we use, such as manning.com, are converted to the Internet Protocol (IP) addresses that computers need. When a new connection is required, the HttpClient first checks the DNS record for a host to find the IP address and then makes the connection. For subsequent requests, the connection is already established, so it doesn’t make another DNS call.

For singleton HttpClient instances, this can be a problem because the HttpClient won’t detect DNS changes. DNS is often used in cloud environments for load balancing and for graceful rollouts of deployments. If the DNS record of a service you’re calling changes during the lifetime of your application, a singleton HttpClient will keep calling the old service, as shown in figure 33.3.

alt text

Figure 33.3 HttpClient does a DNS lookup before establishing a connection to determine the IP address associated with a hostname. If the DNS record for a hostname changes, a singleton HttpClient will not detect it and will continue sending requests to the original server it connected to.

NOTE HttpClient won’t respect a DNS change while the original connection exists. If the original connection is closed (for example, if the original server goes offline), it will respect the DNS change, as it must establish a new connection.

It seems that you’re damned if you do and damned if you don’t! Luckily, IHttpClientFactory can take care of all this for you.

33.2 Creating HttpClients with IHttpClientFactory‌

In this section you’ll learn how you can use IHttpClientFactory to avoid the common pitfalls of HttpClient. I’ll show several patterns you can use to create an HttpClient:

• Using CreateClient() as a drop-in replacement for HttpClient

• Using named clients to centralize the configuration of an HttpClient used to call a specific third- party API

• Using typed clients to encapsulate the interaction with a third-party API for easier consumption by your code

IHttpClientFactory makes it easier to create HttpClient instances correctly instead of relying on either of the faulty approaches I discussed in section 33.1. It also makes it easier to configure multiple HttpClients and allows you to create a middleware pipeline for outgoing requests.
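As a taste of the first of those patterns, the following sketch reworks listing 33.1 to use IHttpClientFactory.CreateClient() instead of new-ing up an HttpClient directly; the exchange-rate URL is the same placeholder used earlier.

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient(); // registers IHttpClientFactory and its services
WebApplication app = builder.Build();
app.MapGet("/", async (IHttpClientFactory factory) =>
{
    HttpClient client = factory.CreateClient(); // short-lived client, pooled handler
    client.BaseAddress = new Uri("https://example.com/rates/");
    var response = await client.GetAsync("latest");
    response.EnsureSuccessStatusCode();
    return await response.Content.ReadAsStringAsync();
});
app.Run();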

Before we look at how IHttpClientFactory achieves all that, we will look at how HttpClient works under the hood.

33.2.1 Using IHttpClientFactory to manage HttpClientHandler lifetime‌

In this section we’ll look at the handler pipeline used by HttpClient. You’ll see how IHttpClientFactory manages the lifetime of this pipeline and how this enables the factory to avoid both socket exhaustion and DNS problems.

The HttpClient class you typically use to make HTTP requests is responsible for orchestrating requests, but it isn't responsible for making the raw connection itself. Instead, the HttpClient calls into a pipeline of HttpMessageHandlers, at the end of which is an HttpClientHandler, which makes the actual connection and sends the HTTP request, as shown in figure 33.4.


Figure 33.4 Each HttpClient contains a pipeline of HttpMessageHandlers. The final handler is an HttpClientHandler, which makes the connection to the remote server and sends the HTTP request. This configuration is similar to the ASP.NET Core middleware pipeline, and it allows you to make cross-cutting adjustments to outgoing requests.

This configuration is reminiscent of the middleware pipeline used by ASP.NET Core applications, but this is an outbound pipeline. When an HttpClient makes a request, each handler gets a chance to modify the request before the final HttpClientHandler makes the real HTTP request. Each handler in turn then gets a chance to view the response after it’s received.

TIP You’ll see an example of using this handler pipeline for cross-cutting concerns in section 33.3 when we add a transient error handler.

The problems of socket exhaustion and DNS I described in section 33.1 are related to the disposal of the HttpClientHandler at the end of the handler pipeline. By default, when you dispose of an HttpClient, you dispose of the handler pipeline too. IHttpClientFactory separates the lifetime of the HttpClient from the underlying HttpClientHandler.

Separating the lifetime of these two components enables the IHttpClientFactory to solve the problems of socket exhaustion and DNS rotation. It achieves this in two ways:

• By creating a pool of available handlers—Socket exhaustion occurs when you dispose of an HttpClientHandler, due to the TIME_WAIT problem described previously. IHttpClientFactory solves this by creating a pool of handlers. It maintains an active handler that it uses to create all HttpClients for two minutes. When an HttpClient is disposed of, the underlying handler isn't disposed of, so the connection isn't closed. As a result, socket exhaustion isn't a problem.

• By periodically disposing of handlers—Sharing handler pipelines solves the socket exhaustion problem, but it doesn’t solve the DNS problem. To work around this, the IHttpClientFactory periodically (every two minutes) creates a new active HttpClientHandler that it uses for each HttpClient created subsequently. As these HttpClients are using a new handler, they make a new TCP/IP connection, so DNS changes are respected.

IHttpClientFactory disposes of expired handlers periodically in the background once they are no longer used by an HttpClient. This ensures that your application’s HttpClients use a limited number of connections.
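
If you want to tune this rotation period, for example to trade slower pickup of DNS changes for fewer new connections to a service whose DNS rarely changes, you can adjust the handler lifetime per client. The following is a minimal sketch using the SetHandlerLifetime() method on IHttpClientBuilder; the five-minute value is purely illustrative:

builder.Services.AddHttpClient("rates")
    .SetHandlerLifetime(TimeSpan.FromMinutes(5)); // keep each handler for 5 minutes before rotating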

TIP I wrote a blog post that looks in depth at how IHttpClientFactory achieves its handler rotation. This is a detailed post, but it may be of interest to those who like to know how things are implemented behind the scenes. See “Exploring the code behind IHttpClientFactory in depth” at http://mng.bz/8NRK.

Rotating handlers with IHttpClientFactory solves both the problems we’ve discussed. Another bonus is that it’s easy to replace existing uses of HttpClient with IHttpClientFactory.

IHttpClientFactory is included by default in ASP.NET Core. You simply add it to your application’s services in Program.cs:

builder.Services.AddHttpClient();

This registers the IHttpClientFactory as a singleton in your application, so you can inject it into any other service. The following listing shows how you can replace the HttpClient approach from listing 33.2 with a version that uses IHttpClientFactory.

Listing 33.3 Using IHttpClientFactory to create an HttpClient

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient(); ❶
WebApplication app = builder.Build();
app.MapGet("/", async (IHttpClientFactory factory) => ❷
{
    HttpClient client = factory.CreateClient(); ❸
    client.BaseAddress = ❹
        new Uri("https://example.com/rates/"); ❹
    var response = await client.GetAsync("latest"); ❺
    response.EnsureSuccessStatusCode(); ❺
    return await response.Content.ReadAsStringAsync(); ❺
});
app.Run();

❶ Registers the IHttpClientFactory service in DI
❷ Injects the IHttpClientFactory using DI
❸ Creates an HttpClient instance with an HttpClientHandler managed by the factory
❹ Configures the HttpClient for calling the API as before
❺ Uses the HttpClient in exactly the same way you would otherwise

The immediate benefit of using IHttpClientFactory in this way is efficient socket and DNS handling. When you create an HttpClient using CreateClient(), IHttpClientFactory uses a pooled HttpClientHandler to create a new instance of an HttpClient, pooling and disposing the handlers as necessary to find a balance between the tradeoffs described in section 33.1.

Minimal changes should be required to take advantage of this pattern, as the bulk of your code stays the same. Only the code where you’re creating an HttpClient instance changes. This makes it a good option if you’re refactoring an existing app.

SocketsHttpHandler vs. IHttpClientFactory

The limitations of HttpClient described in section 33.1 apply specifically to the HttpClientHandler at the end of the HttpClient handler pipeline in older versions of .NET Core. IHttpClientFactory provides a mechanism for managing the lifetime and reuse of HttpClientHandler instances.‌

From .NET 5 onward, the legacy HttpClientHandler has been replaced by SocketsHttpHandler. This handler has several advantages, most notably performance benefits and consistency across platforms. The SocketsHttpHandler can also be configured to use connection pooling and recycling, like IHttpClientFactory.

So if HttpClient can already use connection pooling, is it worth using IHttpClientFactory? In most cases, I would say yes. You must manually configure connection pooling with SocketsHttpHandler, and IHttpClientFactory has additional features such as named clients and typed clients. In any situations where you’re using dependency injection (DI), which is every ASP.NET Core app and most .NET 7 apps, I recommend using IHttpClientFactory to take advantage of these benefits.

Nevertheless, if you’re working in a non-DI scenario and can’t use IHttpClientFactory, be sure to enable the SocketsHttpHandler connection pooling as described in this post by Steve Gordon, titled “HttpClient connection pooling in .NET Core”: http://mng.bz/E27q.
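
For reference, if you do find yourself in that non-DI scenario, enabling the equivalent connection recycling yourself looks roughly like the following sketch. The two-minute value simply mirrors IHttpClientFactory's default rotation period and is not a recommendation:

// A single long-lived HttpClient that still respects DNS changes,
// because pooled connections are recycled after the configured lifetime.
var handler = new SocketsHttpHandler
{
    PooledConnectionLifetime = TimeSpan.FromMinutes(2),
};
var client = new HttpClient(handler, disposeHandler: false);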

Managing the socket problem is one big advantage of using IHttpClientFactory over HttpClient, but it’s not the only benefit. You can also use IHttpClientFactory to clean up the client configuration, as you’ll see in the next section.

33.2.2 Configuring named clients at registration time‌

In this section you’ll learn how to use the Named Client pattern with IHttpClientFactory. This pattern encapsulates the logic for calling a third-party API in a single location, making it easier to use the HttpClient in your consuming code.

NOTE IHttpClientFactory uses the same HttpClient type you’re familiar with if you’re coming from .NET Framework. The big difference is that IHttpClientFactory solves the DNS and socket exhaustion problem by managing the underlying message handlers.

Using IHttpClientFactory solves the technical problems I described in section 33.1, but the code in listing 33.3 is still pretty messy in my eyes, primarily because you must configure the HttpClient to point to your service before you use it. If you need to create an HttpClient to call the API in more than one place in your application, you must configure it in more than one place too.

IHttpClientFactory provides a convenient solution to this problem by allowing you to centrally configure named clients, which have a string name and a configuration function that runs whenever an instance of the named client is requested. You can define multiple configuration functions that run in sequence to configure your new HttpClient.

The following listing shows how to register a named client called "rates". This client is configured with the correct BaseAddress and sets default headers that are to be sent with each outbound request. Once you have configured this named client, you can create it from an IHttpClientFactory instance using the name of the client, "rates".

Listing 33.4 Using IHttpClientFactory to create a named HttpClient

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient("rates", (HttpClient client) => ❶
{
    client.BaseAddress = ❷
        new Uri("https://example.com/rates/"); ❷
    client.DefaultRequestHeaders.Add( ❷
        HeaderNames.UserAgent, "ExchangeRateViewer"); ❷
})
.ConfigureHttpClient((HttpClient client) => {}) ❸
.ConfigureHttpClient(
    (IServiceProvider provider, HttpClient client) => {}); ❹
WebApplication app = builder.Build();
app.MapGet("/", async (IHttpClientFactory factory) => ❺
{
    HttpClient client = factory.CreateClient("rates"); ❻
    var response = await client.GetAsync("latest"); ❼
    response.EnsureSuccessStatusCode();
    return await response.Content.ReadAsStringAsync();
});
app.Run();

❶ Provides a name for the client and a configuration function
❷ The configuration function runs every time the named HttpClient is requested.
❸ You can add more configuration functions for the named client, which run in sequence.
❹ Additional overloads exist that allow access to the DI container when creating a named client.
❺ Injects the IHttpClientFactory using DI
❻ Requests the configured named client called “rates”
❼ Uses the HttpClient the same way as before

NOTE You can still create unconfigured clients using CreateClient() without a name. Be aware that if you pass an unconfigured name, such as CreateClient("MyRates"), the client returned will be unconfigured. Take care—client names are case-sensitive, so "rates" is a different client from "Rates".
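
Because client names are plain strings, one small safeguard worth considering (not something the listings above require) is to keep the names in a constants class, so the registration and the CreateClient() call can't drift apart:

public static class HttpClientNames
{
    public const string Rates = "rates";
}

// At registration:   builder.Services.AddHttpClient(HttpClientNames.Rates, c => { /* ... */ });
// At the call site:  HttpClient client = factory.CreateClient(HttpClientNames.Rates);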

Named clients help centralize your HttpClient configuration in one place, removing the responsibility for configuring the client from your consuming code. But you're still working with raw HTTP calls at this point, such as providing the relative URL to call ("latest") and parsing the response. IHttpClientFactory includes a feature that makes it easier to clean up this code.

33.2.3 Using typed clients to encapsulate HTTP calls‌

A common pattern when you need to interact with an API is to encapsulate the mechanics of that interaction in a separate service. You could easily do this with the IHttpClientFactory features you've already seen by extracting the body of the endpoint handler from listing 33.4 into a separate service. But IHttpClientFactory has deeper support for this pattern.

IHttpClientFactory supports typed clients. A typed client is a class that accepts a configured HttpClient in its constructor. It uses the HttpClient to interact with the remote API and exposes a clean interface for consumers to call. All the logic for interacting with the remote API is encapsulated in the typed client, such as which URL paths to call, which HTTP verbs to use, and the types of responses the API returns. This encapsulation makes it easier to call the third-party API from multiple places in your app by using the typed client.

The following listing shows an example typed client for the exchange rates API shown in previous listings. It accepts an HttpClient in its constructor and exposes a GetLatestRates() method that encapsulates the logic for interacting with the third-party API.

Listing 33.5 Creating a typed client for the exchange rates API

public class ExchangeRatesClient
{
    private readonly HttpClient _client; ❶
    public ExchangeRatesClient(HttpClient client) ❶
    {
        _client = client;
    }
    public async Task<string> GetLatestRates() ❷
    {
        var response = await _client.GetAsync("latest"); ❸
        response.EnsureSuccessStatusCode(); ❸
        return await response.Content.ReadAsStringAsync(); ❸
    }
}

❶ Injects an HttpClient using DI instead of an IHttpClientFactory
❷ The GetLatestRates() method encapsulates the logic for interacting with the API.
❸ Uses the HttpClient the same way as before

We can then inject this ExchangeRatesClient into consuming services, and they don’t need to know anything about how to make HTTP requests to the remote service; they need only to interact with the typed client. We can update listing 33.3 to use the typed client as shown in the following listing, at which point the API endpoint method becomes trivial.

Listing 33.6 Consuming a typed client to encapsulate calls to a remote HTTP server

app.MapGet("/", async (ExchangeRatesClient ratesClient) => ❶
await ratesClient.GetLatestRates());

❶ Injects the typed client using DI
❷ Calls the typed client’s API. The typed client handles making the correct HTTP requests.

You may be a little confused at this point. I haven’t mentioned how IHttpClientFactory is involved yet!

The ExchangeRatesClient takes an HttpClient in its constructor. IHttpClientFactory is responsible for creating the HttpClient, configuring it to call the remote service and injecting it into a new instance of the typed client.

You can register the ExchangeRatesClient as a typed client and configure the HttpClient that is injected when you register the client in Program.cs, as shown in the following listing. This is similar to configuring a named client, so you can register additional configuration for the HttpClient that will be injected into the typed client.

Listing 33.7 Registering a typed client with IHttpClientFactory in Program.cs

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient<ExchangeRatesClient>( ❶
    (HttpClient client) => ❷
    { ❷
        client.BaseAddress = ❷
            new Uri("https://example.com/rates/"); ❷
        client.DefaultRequestHeaders.Add( ❷
            HeaderNames.UserAgent, "ExchangeRateViewer"); ❷
    }) ❷
    .ConfigureHttpClient((HttpClient client) => {}); ❸
WebApplication app = builder.Build();
app.MapGet("/", async (ExchangeRatesClient ratesClient) =>
await ratesClient.GetLatestRates());
app.Run();

❶ Registers a typed client using the generic AddHttpClient method
❷ You can provide an additional configuration function for the HttpClient that will be injected.
❸ As for named clients, you can provide multiple configuration methods.

Behind the scenes, the call to
AddHttpClient does several things:

• Registers HttpClient as a transient service in DI. That means you can accept an HttpClient in the constructor of any service in your app and IHttpClientFactory will inject a default pooled instance, which has no additional configuration.

• Registers ExchangeRatesClient as a transient service in DI.

• Controls the creation of ExchangeRatesClient so that whenever a new instance is required, a pooled HttpClient is configured as defined in the AddHttpClient lambda method.

TIP You can think of a typed client as a wrapper around a named client. I’m a big fan of this approach, as it encapsulates all the logic for interacting with a remote service in one place. It also avoids the magic strings that you use with named clients, removing the possibility of typos.

Another option when registering typed clients is to register an interface in addition to the implementation. This is often good practice, as it makes it much easier to test consuming code. If the typed client in listing 33.5 implemented the interface IExchangeRatesClient, you could register the interface and typed client implementation using

builder.Services.AddHttpClient<IExchangeRatesClient, ExchangeRatesClient>()

You could then inject this into consuming code using the interface type

app.MapGet("/", async (IExchangeRatesClient ratesClient) =>
await ratesClient.GetLatestRates());
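
The interface itself isn't shown in the listings; a minimal version that matches the typed client in listing 33.5 might look like this:

public interface IExchangeRatesClient
{
    Task<string> GetLatestRates();
}

public class ExchangeRatesClient : IExchangeRatesClient
{
    // constructor and GetLatestRates() exactly as in listing 33.5
}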

Another common pattern is to not provide any configuration for the typed client in the AddHttpClient() call. Instead, you could place that logic in the constructor of your ExchangeRatesClient using the injected HttpClient:

public class ExchangeRatesClient
{
    private readonly HttpClient _client;
    public ExchangeRatesClient(HttpClient client)
    {
        _client = client;
        _client.BaseAddress = new Uri("https://example.com/rates/");
    }
}

This is functionally equivalent to the approach shown in listing 33.7. It’s a matter of taste where you’d rather put the configuration for your HttpClient. If you take this approach, you don’t need to provide a configuration lambda in AddHttpClient():

builder.Services.AddHttpClient<ExchangeRatesClient>();

Named clients and typed clients are convenient for managing and encapsulating HttpClient configuration, but IHttpClientFactory has another advantage we haven’t looked at yet: it’s easier to extend the HttpClient handler pipeline.‌‌

33.3 Handling transient HTTP errors with Polly‌

In this section you’ll learn how to handle a common scenario: transient errors when you make calls to a remote service, caused by an error in the remote server or temporary network problems. You’ll see how to use IHttpClientFactory to handle cross-cutting concerns like this by adding handlers to the HttpClient handler pipeline.

In section 33.2.1 I described HttpClient as consisting of a pipeline of handlers. The big advantage of this pipeline, much like the middleware pipeline of your application, is that it allows you to add cross-cutting concerns to all requests. For example, IHttpClientFactory automatically adds a handler to each HttpClient that logs the status code and duration of each outgoing request.

In addition to logging, another common requirement is to handle transient errors when calling an external API. Transient errors can happen when the network drops out, or if a remote API goes offline temporarily. For transient errors, simply trying the request again can often succeed, but having to write the code to do so manually is cumbersome.

ASP.NET Core includes a library called Microsoft.Extensions.Http.Polly that makes handling transient errors easier. It uses the popular open-source library Polly (https://github.com/App-vNext/Polly) to automatically retry requests that fail due to transient network errors.

Polly is a mature library for handling transient errors that includes a variety of error-handling strategies, such as simple retries, exponential backoff, circuit breaking, and bulkhead isolation. Each strategy is explained in detail at https://github.com/App-vNext/Polly, so be sure to read about the benefits and trade-offs when selecting a strategy.

To provide a taste of what’s available, we’ll add a simple retry policy to the ExchangeRatesClient shown in section 33.2. If a request fails due to a network problem, such as a timeout or a server error, we’ll configure Polly to automatically retry the request as part of the handler pipeline, as shown in figure 33.5.


Figure 33.5 Using the PolicyHttpMessageHandler to handle transient errors. If an error occurs when calling the remote API, the Polly handler will automatically retry the request. If the request then succeeds, the result is passed back to the caller. The caller didn’t have to handle the error, making it simpler to use the HttpClient while remaining resilient to transient errors.

To add transient error handling to a named or typed client, follow these steps:

  1. Install the Microsoft.Extensions.Http.Polly NuGet package in your project by running dotnet add package Microsoft.Extensions.Http.Polly, by using the NuGet explorer in Visual Studio, or by adding a <PackageReference> element to your project file as follows:

<PackageReference Include="Microsoft.Extensions.Http.Polly" Version="7.0.0" />

  2. Configure a named or typed client as shown in listings 33.4 and 33.7.

  3. Configure a transient error-handling policy for your client as shown in listing 33.8.

Listing 33.8 Configuring a transient error-handling policy for a typed client

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient<ExchangeRatesClient>() ❶
    .AddTransientHttpErrorPolicy(policy => ❷
        policy.WaitAndRetryAsync(new[] { ❸
            TimeSpan.FromMilliseconds(200), ❹
            TimeSpan.FromMilliseconds(500), ❹
            TimeSpan.FromSeconds(1) ❹
        })
    );

❶ You can add transient error handlers to named or typed clients.
❷ Uses the extension methods provided by the NuGet package to add transient error handlers
❸ Configures the retry policy used by the handler. There are many types of policies to choose among.
❹ Configures a policy that waits and retries three times if an error occurs

In the preceding listing we configure the error handler to catch transient errors and retry up to three times, waiting an increasing amount of time between attempts. If the request still fails after the third retry, the handler passes the error back to the caller, as though there was no error handler at all. By default, the handler retries any request that

• Throws an HttpRequestException, indicating an error at the protocol level, such as a closed connection

• Returns an HTTP 5xx status code, indicating a server error at the API

• Returns an HTTP 408 status code, indicating a timeout

TIP If you want to handle more cases automatically or to restrict the responses that will be automatically retried, you can customize the selection logic as described in the “Polly and HttpClientFactory” documentation on GitHub: http://mng.bz/NY7E.
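
As a sketch of what that customization can look like, the AddPolicyHandler() extension lets you build the policy yourself using the HttpPolicyExtensions helpers from the Polly.Extensions.Http package (which Microsoft.Extensions.Http.Polly depends on). The example below also retries 429 (Too Many Requests) responses; the retry delays are illustrative:

using System.Net;
using Polly;
using Polly.Extensions.Http;

builder.Services.AddHttpClient<ExchangeRatesClient>()
    .AddPolicyHandler(HttpPolicyExtensions
        .HandleTransientHttpError()              // HttpRequestException, 5xx, and 408
        .OrResult(msg => msg.StatusCode == HttpStatusCode.TooManyRequests) // also retry 429s
        .WaitAndRetryAsync(3, attempt => TimeSpan.FromMilliseconds(200 * attempt)));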

Using standard handlers like the transient error handler allows you to apply the same logic across all requests made by a given HttpClient. The exact strategy you choose will depend on the characteristics of both the service and the request, but a good retry strategy is a must whenever you interact with potentially unreliable HTTP APIs.

WARNING When designing a policy, be sure to consider the effect of your policy. In some circumstances it may be better to fail quickly instead of retrying a request that is never going to succeed. Polly includes additional policies such as circuit-breakers to create more advanced approaches.
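
For example, a simple circuit-breaker policy can be registered in the same way as the retry policy in listing 33.8; the thresholds below are purely illustrative:

builder.Services.AddHttpClient<ExchangeRatesClient>()
    .AddTransientHttpErrorPolicy(policy =>
        policy.CircuitBreakerAsync(
            handledEventsAllowedBeforeBreaking: 5,       // open the circuit after 5 consecutive failures
            durationOfBreak: TimeSpan.FromSeconds(30))); // fail fast for 30 seconds before trying again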

The Polly error handler is an example of an optional HttpMessageHandler that you can plug in to your HttpClient, but you can also create your own custom handler. In the next section you’ll see how to create a handler that adds a header to all outgoing requests.

33.4 Creating a custom HttpMessageHandler‌

Most third-party APIs require some form of authentication when you’re calling them. For example, many services require you to attach an API key to an outgoing request, so that the request can be tied to your account. Instead of having to remember to add this header manually for every request to the API, you could configure a custom HttpMessageHandler to attach the header automatically for you.

NOTE More complex APIs may use JSON Web Tokens (JWT) obtained from an identity provider. If that’s the case, consider using the open source IdentityModel library (https://identitymodel.readthedocs.io), which provides integration points for ASP.NET Core Identity and HttpClientFactory.

You can configure a named or typed client using IHttpClientFactory to use your API-key handler as part of the HttpClient's handler pipeline, as shown in figure 33.6. When you use the HttpClient to send a message, the HttpRequestMessage is passed through each handler in turn. The API-key handler adds the extra header and passes the request to the next handler in the pipeline. Eventually, the HttpClientHandler makes the network request to send the HTTP request. After the response is received, each handler gets a chance to inspect (and potentially modify) the response.


Figure 33.6 You can use a custom HttpMessageHandler to modify requests before they’re sent to third-party APIs. Every request passes through the custom handler before the final handler (the HttpClientHandler) sends the request to the HTTP API. After the response is received, each handler gets a chance to inspect and modify the response.

To create a custom HttpMessageHandler and add it to a typed or named client’s pipeline, follow these steps:

• Create a custom handler by deriving from the DelegatingHandler base class.

• Override the SendAsync() method to provide your custom behavior. Call base.SendAsync() to execute the remainder of the handler pipeline.

• Register your handler with the DI container. If your handler does not require state, you can register it as a singleton service; otherwise, you should register it as a transient service.

• Add the handler to one or more of your named or typed clients by calling AddHttpMessageHandler<T>() on an IHttpClientBuilder, where T is your handler type. The order in which you register handlers dictates the order in which they are added to the HttpClient handler pipeline. You can add the same handler type more than once in a pipeline if you wish and to multiple typed or named clients.

The following listing shows an example of a custom HttpMessageHandler that adds a header to every outgoing request. We use the custom "API-KEY" header in this example, but the header you need will vary depending on the third-party API you’re calling. This example uses strongly typed configuration to inject the secret API key, as you saw in chapter 10.

Listing 33.9 Creating a custom HttpMessageHandler

public class ApiKeyMessageHandler : DelegatingHandler ❶
{
    private readonly ExchangeRateApiSettings _settings; ❷
    public ApiKeyMessageHandler( ❷
        IOptions<ExchangeRateApiSettings> settings) ❷
    { ❷
        _settings = settings.Value; ❷
    } ❷
    protected override async Task<HttpResponseMessage> SendAsync( ❸
        HttpRequestMessage request, ❸
        CancellationToken cancellationToken) ❸
    {
        request.Headers.Add("API-KEY", _settings.ApiKey); ❹
        HttpResponseMessage response = ❺
            await base.SendAsync(request, cancellationToken); ❺
        return response; ❻
    }
}

❶ Custom HttpMessageHandlers should derive from DelegatingHandler.
❷ Injects the strongly typed configuration values using DI
❸ Overrides the SendAsync method to implement the custom behavior
❹ Adds the extra header to all outgoing requests
❺ Calls the remainder of the pipeline and receives the response
❻ You could inspect or modify the response before returning it.
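
Listing 33.9 assumes a strongly typed ExchangeRateApiSettings options class that isn't shown. A minimal sketch of the class and its registration might look like the following; the "ExchangeRateApi" section name is an assumption, and the real key should live in user secrets or environment variables rather than appsettings.json:

public class ExchangeRateApiSettings
{
    public string ApiKey { get; set; } = string.Empty;
}

// In Program.cs: bind the options class to a configuration section
builder.Services.Configure<ExchangeRateApiSettings>(
    builder.Configuration.GetSection("ExchangeRateApi"));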

To use the handler, you must register it with the DI container and add it to a named or typed client. In the following listing, we add it to the ExchangeRatesClient, along with the transient error handler we registered in listing 33.7. This creates a pipeline similar to that shown in figure 33.6.

Listing 33.10 Registering a custom handler in Program.cs

WebApplicationBuilder builder = WebApplication.CreateBuilder(args);
builder.Services.AddTransient<ApiKeyMessageHandler>(); ❶
builder.Services.AddHttpClient<ExchangeRatesClient>()
    .AddHttpMessageHandler<ApiKeyMessageHandler>() ❷
    .AddTransientHttpErrorPolicy(policy => ❸
        policy.WaitAndRetryAsync(new[] {
            TimeSpan.FromMilliseconds(200),
            TimeSpan.FromMilliseconds(500),
            TimeSpan.FromSeconds(1)
        })
    );

❶ Registers the custom handler with the DI container
❷ Configures the typed client to use the custom handler
❸ Adds the transient error handler. The order in which the handlers are registered dictates their order in the pipeline.

Whenever you make a request using the typed client ExchangeRatesClient, you can be sure that the API key will be added and that transient errors will be handled automatically for you.

That brings us to the end of this chapter on IHttpClientFactory. Given the difficulties in using HttpClient correctly that I showed in section 33.1, you should always favor IHttpClientFactory where possible. As a bonus, IHttpClientFactory allows you to easily centralize your API configuration using named clients and to encapsulate your API interactions using typed clients.

Summary

Use the HttpClient class for calling HTTP APIs. You can use it to make HTTP calls to APIs, providing all the headers and body to send in a request, and reading the response headers and data you get back.

HttpClient uses a pipeline of handlers, consisting of multiple HttpMessageHandlers connected in a similar way to the middleware pipeline used in ASP.NET Core. The final handler is the HttpClientHandler, which is responsible for making the network connection and sending the request.

HttpClient implements IDisposable, but typically you shouldn’t dispose of it. When the HttpClientHandler that makes the TCP/IP connection is disposed of, it keeps a connection open for the TIME_WAIT period. Disposing of many HttpClients in a short period of time can lead to socket exhaustion, preventing a machine from handling any more requests.

Before .NET Core 2.1, the advice was to use a single HttpClient for the lifetime of your application. Unfortunately, a singleton HttpClient will not respect DNS changes, which are commonly used for traffic management in cloud environments.

IHttpClientFactory solves both these problems by managing the lifetime of the HttpMessageHandler pipeline. You can create a new HttpClient by calling CreateClient(), and IHttpClientFactory takes care of disposing of the handler pipeline when it is no longer in use.

You can centralize the configuration of an HttpClient in Program.cs using named clients by calling AddHttpClient("test", c => {}). You can then retrieve a configured instance of the client in your services by calling IHttpClientFactory.CreateClient("test").

You can create a typed client by injecting an HttpClient into a service, T, and configuring the client using AddHttpClient<T>(c => {}).

Typed clients are great for abstracting the HTTP mechanics away from consumers of your client.

You can use the Microsoft.Extensions.Http.Polly library to add transient HTTP error handling to your HttpClients. Call AddTransientHttpErrorPolicy() when configuring your IHttpClientFactory, and provide a Polly policy to control when errors should be automatically handled and retried.

It’s common to use a simple retry policy to try making a request multiple times before giving up and returning an error. When designing a policy, be sure to consider the effect of your policy; in some circumstances it may be better to fail quickly instead of retrying a request that is never going to succeed. Polly includes additional policies such as circuit-breakers to create more advanced approaches.

By default, the transient error-handling middleware will handle connection errors, server errors that return a 5xx error code, and 408 (timeout) errors. You can customize this if you want to handle additional error types, but ensure that you retry only requests that are safe to retry.

You can create a custom HttpMessageHandler to modify each request made through a named or typed client. Custom handlers are good for implementing cross-cutting concerns such as logging, metrics, and authentication.

To create a custom HttpMessageHandler, derive from DelegatingHandler and override the SendAsync() method. Call base.SendAsync() to send the request to the next handler in the pipeline and finally to the HttpClientHandler, which makes the HTTP request.

Register your custom handler in the DI container as either a transient or a singleton. Add it to a named or typed client using AddHttpMessageHandler<T>(). The order in which you register the handler in the IHttpClientBuilder is the order in which the handler will appear in the HttpClient handler pipeline.

  1. Azure Traffic Manager, for example, uses DNS to route requests. You can read more about how it works at http://mng.bz/vnP4.

ASP.NET Core in Action 32 Building custom MVC and Razor Pages components

32 Building custom MVC and Razor Pages components‌

This chapter covers

• Creating custom Razor Tag Helpers
• Using view components to create complex Razor views
• Creating a custom DataAnnotations validation attribute
• Replacing the DataAnnotations validation framework with an alternative

In the previous chapter you learned how to customize and extend some of the core systems in ASP.NET Core: configuration, dependency injection (DI), and your middleware pipeline. These components form the basis of all ASP.NET Core apps. In this chapter we’re focusing on Razor Pages and Model-View-Controller (MVC)/API controllers. You’ll learn how to build custom components that work with Razor views. You’ll also learn how to build components that work with the validation framework used by both Razor Pages and API controllers.

We’ll start by looking at Tag Helpers. In section 32.1 I show how to create two Tag Helpers: one that generates HTML to describe the current machine and one that lets you write if statements in Razor templates without having to use C#.

These will give you the details you need to create your own custom Tag Helpers in your own apps if the need arises.

In section 32.2 you’ll learn about a new Razor concept: view components. View components are a bit like partial views, but they can contain business logic and database access. For example, on an e-commerce site you might have a shopping cart, a dynamically populated menu, and a login widget all on one page. Each of those sections is independent of the main page content and has its own logic and data-access needs. In an ASP.NET Core app using Razor Pages, you’d implement each of those as a view component.

In section 32.3 I’ll show you how to create a custom validation attribute. As you saw in chapter 6, validation is a key responsibility of Razor Page handlers and action methods, and the DataAnnotations attributes provide a clean, declarative way of doing so. We previously looked only at the built-in attributes, but you’ll often find you need to add attributes tailored to your app’s domain. In section 32.3 you’ll see how to create a simple validation attribute and how to extend it to use services registered with the DI container.

Throughout this book I've mentioned that you can easily swap out core parts of the ASP.NET Core framework if you wish. In section 32.4 you'll do that by replacing the built-in attribute-based validation framework with a popular alternative, FluentValidation. This open-source library allows you to separate your binding models from the validation rules, which makes building certain validation logic easier. Many people prefer this approach of separating concerns to the declarative approach of DataAnnotations.

When you’re building pages with Razor Pages, one of the best productivity features is Tag Helpers, and in the next section you’ll see how you can create your own.

32.1 Creating a custom Razor Tag Helper‌

In this section you’ll learn how to create your own Tag Helpers, which allow you to customize your HTML output. You’ll learn how to create Tag Helpers that add new elements to your HTML markup, as well as Tag Helpers that can remove or customize existing markup. You’ll also see that your custom Tag Helpers integrate with the tooling of your integrated development environment (IDE) to provide rich IntelliSense in the same way as the built-in Tag Helpers.

In my opinion, Tag Helpers are one of the best additions to the venerable Razor template language in ASP.NET Core. They allow you to write Razor templates that are easier to read, as they require less switching between C# and HTML, and they augment your HTML tags rather than replace them (as opposed to the HTML Helpers used extensively in the legacy version of ASP.NET).

ASP.NET Core comes with a wide variety of Tag Helpers (see chapter 18), which cover many of your day-to-day requirements, especially when it comes to building forms. For example, you can use the Input Tag Helper by adding an asp-for attribute to an <input> tag and passing a reference to a property on your PageModel, in this case Input.Email:

<input asp-for="Input.Email" />

The Tag Helper is activated by the presence of the attribute and gets a chance to augment the tag when rendering to HTML. The Input Tag Helper uses the name of the property to set the tag’s name and id properties, the value of the model to set the value property, and the presence of attributes such as [Required] or [EmailAddress] to add attributes for validations:‌‌‌

<input type="email" id="Input_Email" name="Input.Email"
    value="test@example.com" data-val="true"
    data-val-email="The Email Address field is not a valid e-mail address."
    data-val-required="The Email Address field is required."
/>

Tag Helpers help reduce the duplication in your code, or they can simplify common patterns. In this section I show how you can create your own custom Tag Helpers.

In section 32.1.1 you’ll create a system information Tag Helper, which prints details about the name and operating system of the server your app is running on. In section 32.1.2 you’ll create a Tag Helper that you can use to conditionally show or hide an element based on a C# Boolean property. In section 32.1.3 you’ll create a Tag Helper that reads the Razor content written inside the Tag Helper and transforms it.

32.1.1 Printing environment information with a custom Tag Helper‌

A common problem you may run into when you start running your web applications in production, especially if you’re using a server-farm setup, is working out which machine rendered the page you’re currently looking at. Similarly, when deploying frequently, it can be useful to know which version of the application is running. When I’m developing and testing, I sometimes like to add a little “info dump” at the bottom of my layouts so I can easily work out which server generated the current page, which environment it’s running in, and so on.

In this section I’m going to show you how to build a custom Tag Helper to output system information to your layout. You’ll be able to toggle the information it displays, but by default it displays the machine name and operating system on which the app is running, as shown in figure 32.1.


Figure 32.1 The SystemInfoTagHelper displays the machine name and operating system on which the application is running. It can be useful for identifying which instance of your app handled the request when running in a web-farm scenario.

You can call this Tag Helper from Razor by creating a <system-info> element in your template:

<footer>
<system-info></system-info>
</footer>

TIP You might not want to expose this sort of information in production, so you could also wrap it in an <environment> Tag Helper, as you saw in chapter 18.

The easiest way to create a custom Tag Helper is to derive from the TagHelper base class and override the Process() or ProcessAsync() function that describes how the class should render itself. The following listing shows your complete custom Tag Helper, SystemInfoTagHelper, which renders the system information to a <div>. You could easily extend this class if you wanted to display additional fields or add options.

Listing 32.1 SystemInfoTagHelper to render system information to a view

public class SystemInfoTagHelper : TagHelper ❶
{
    private readonly HtmlEncoder _htmlEncoder; ❷
    public SystemInfoTagHelper(HtmlEncoder htmlEncoder) ❷
    {
        _htmlEncoder = htmlEncoder;
    }
    [HtmlAttributeName("add-machine")] ❸
    public bool IncludeMachine { get; set; } = true;
    [HtmlAttributeName("add-os")] ❸
    public bool IncludeOS { get; set; } = true;
    public override void Process( ❹
        TagHelperContext context, TagHelperOutput output) ❹
    {
        output.TagName = "div"; ❺
        output.TagMode = TagMode.StartTagAndEndTag; ❻
        var sb = new StringBuilder();
        if (IncludeMachine) ❼
        { ❼
            sb.Append(" <strong>Machine</strong> "); ❼
            sb.Append(_htmlEncoder.Encode(Environment.MachineName)); ❼
        } ❼
        if (IncludeOS) ❽
        { ❽
            sb.Append(" <strong>OS</strong> "); ❽
            sb.Append( ❽
                _htmlEncoder.Encode(RuntimeInformation.OSDescription)); ❽
        } ❽
        output.Content.SetHtmlContent(sb.ToString()); ❾
    }
}

❶ Derives from the TagHelper base class
❷ An HtmlEncoder is necessary when writing HTML content to the page.
❸ Decorating properties with HtmlAttributeName allows you to set their values from Razor markup.
❹ The main function called when an element is rendered.
❺ Replaces the <system-info> element with a <div> element
❻ Renders both the <div> </div> start and end tag
❼ If required, adds a <strong> element and the HTML-encoded machine name
❽ If required, adds a <strong> element and the HTML-encoded OS name
❾ Sets the inner content of the <div> tag with the HTML-encoded value stored in the string builder

There's a lot of new code in this example, so we'll work through it line by line. First, the class name of the Tag Helper defines the name of the element you must create in your Razor template, with the TagHelper suffix removed and the remainder converted to kebab-case. As this Tag Helper is called SystemInfoTagHelper, you must create a <system-info> element.

TIP If you want to customize the name of the element, for example to <env-info>, but you want to keep the same class name, you can apply [HtmlTargetElement] with the desired name, such as [HtmlTargetElement("Env-Info")]. HTML tags are not case-sensitive, so you could use "Env-Info" or "env-info".
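
As a brief sketch of that TIP, renaming the element while keeping the class name looks like this:

[HtmlTargetElement("env-info")] // now triggered by <env-info> instead of <system-info>
public class SystemInfoTagHelper : TagHelper
{
    // implementation unchanged from listing 32.1
}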

Inject an HtmlEncoder into your Tag Helper so you can HTML-encode any data you write to the page. As you saw in chapter 29, you should always HTML-encode data you write to the page to avoid cross-site scripting (XSS) vulnerabilities and to ensure that the data is displayed correctly.

You’ve defined two properties on your Tag Helper, IncludeMachine and IncludeOS, which you’ll use to control which data is written to the page. These are decorated with a corresponding [HtmlAttributeName], which enables setting the properties from the Razor template. In Visual Studio you’ll even get IntelliSense and type-checking for these values, as shown in figure 32.2.‌


Figure 32.2 In Visual Studio, Tag Helpers are shown in a purple font, and you get IntelliSense for properties decorated with [HtmlAttributeName].

Finally, we come to the Process() method. The Razor engine calls this method to execute the Tag Helper when it identifies the target element in a view template. The Process() method defines the type of tag to render (<div>), whether it should render a start and end tag (or a self-closing tag—it depends on the type of tag you’re rendering), and the HTML content of the <div>. You set the HTML content to be rendered inside the tag by calling Content.SetHtmlContent() on the provided instance of TagHelperOutput.

WARNING Always HTML-encode your output before writing to your tag with SetHtmlContent(). Alternatively, pass unencoded input to SetContent(), and the output will be automatically HTML-encoded for you.
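
To make that distinction concrete, here is a minimal sketch of the two approaches inside a Process() method like the one in listing 32.1:

// Option 1: pass unencoded text; it is HTML-encoded automatically when written to the page
output.Content.SetContent($"Machine {Environment.MachineName}");

// Option 2: write HTML yourself, explicitly encoding any dynamic values
output.Content.SetHtmlContent(
    $"<strong>Machine</strong> {_htmlEncoder.Encode(Environment.MachineName)}");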

Before you can use your new Tag Helper in a Razor template, you need to register it. You can do this in the _ViewImports.cshtml file, using the @addTagHelper directive and specifying the fully qualified name of the Tag Helper and the assembly, as in this example:

@addTagHelper CustomTagHelpers.SystemInfoTagHelper, CustomTagHelpers

Alternatively, you can add all the Tag Helpers from a given assembly by using the wildcard syntax, *, and specifying the assembly name:

@addTagHelper *, CustomTagHelpers

With your custom Tag Helper created and registered, you’re now free to use it in any of your Razor views, partial views, or layouts.

TIP If you’re not seeing IntelliSense for your Tag Helper in Visual Studio, and the Tag Helper isn’t rendered in the bold font used by Visual Studio, you probably haven’t registered your Tag Helpers correctly in _ViewImports .cshtml using @addTagHelper.

The SystemInfoTagHelper is an example of a Tag Helper that generates content, but you can also use Tag Helpers to control how existing elements are rendered. In the next section you’ll create a simple Tag Helper that can control whether an element is rendered based on an HTML attribute.

32.1.2 Creating a custom Tag Helper to conditionally hide elements‌

If you want to control whether an element is displayed in a Razor template based on some C# variable, you’d typically wrap the element in a C# if statement:‌

@{
var showContent = true;
}
@if(showContent)
{
<p>The content to show</p>
}

Falling back to C# constructs like this can be useful, as it allows you to generate any markup you like. Unfortunately, it can be mentally disruptive having to switch back and forth between C# and HTML, and it makes it harder to use HTML editors that don’t understand Razor syntax.

In this section you’ll create a simple Tag Helper to avoid the cognitive dissonance problem. You can apply this Tag Helper to existing elements to achieve the same result as shown previously but without having to fall back to C#:

@{
var showContent = true;
}
<p if="showContent" >
The content to show
</p>

When rendered at runtime, this Razor template would return the HTML

<p>
The content to show
</p>

Instead of creating a new element, as you did for SystemInfoTagHelper (<system-info>), you’ll create a Tag Helper that you apply as an attribute to existing HTML elements. This Tag Helper does one thing: controls the visibility of the element it’s attached to. If the value passed in the if attribute is true, the element and its content is rendered as normal. If the value passed is false, the Tag Helper removes the element and its content from the template. The following listing shows how you could achieve this.

Listing 32.2 Creating an IfTagHelper to conditionally render elements

[HtmlTargetElement(Attributes = "if")] ❶
public class IfTagHelper : TagHelper
{
    [HtmlAttributeName("if")] ❷
    public bool RenderContent { get; set; } = true;
    public override void Process( ❸
        TagHelperContext context, TagHelperOutput output) ❸
    {
        if (RenderContent == false) ❹
        {
            output.TagName = null; ❺
            output.SuppressOutput(); ❻
        }
    }
    public override int Order => int.MinValue; ❼
}

❶ Setting the Attributes property ensures that the Tag Helper is triggered by an if attribute.
❷ Binds the value of the if attribute to the RenderContent property
❸ The Razor engine calls Process() to execute the Tag Helper.
❹ If the RenderContent property evaluates to false, removes the element
❺ Sets the element the Tag Helper resides on to null, removing it from the page
❻ Doesn’t render or evaluate the inner content of the element
❼ Ensures that this Tag Helper runs before any others attached to the element

Instead of a standalone <if> element, the Razor engine executes the IfTagHelper whenever it finds an element with an if attribute. This can be applied to any HTML element: <p>, <div>, <input>, whatever you need. You should define a Boolean property specifying whether you should render the content, which is bound to the value in the if attribute.‌‌‌‌

The Process() function is much simpler here. If RenderContent is false, it sets the TagHelperOutput.TagName to null, which removes the element from the page. It also calls SuppressOutput(), which prevents any content inside the attributed element from being rendered. If RenderContent is true, you skip these steps, and the content is rendered as normal.

One other point of note is the overridden Order property. This controls the order in which Tag Helpers run when multiple Tag Helpers are applied to an element. By setting Order to int.MinValue, you ensure that IfTagHelper always runs first, removing the element if required, before other Tag Helpers execute. There’s generally no point running other Tag Helpers if the element is going to be removed from the page anyway.

NOTE Remember to register your custom Tag Helpers in _ViewImports .cshtml with the @addTagHelper directive.

With a simple HTML attribute, you can now conditionally render elements in Razor templates without having to fall back to C#. This Tag Helper can show and hide content without needing to know what the content is. In the next section we’ll create a Tag Helper that does need to know the content.

32.1.3 Creating a Tag Helper to convert Markdown to HTML‌

The two Tag Helpers shown so far are agnostic to the content written inside the Tag Helper, but it can also be useful to create Tag Helpers that inspect, retrieve, and modify this content. In this section you'll see an example of one such Tag Helper that converts Markdown content written inside it to HTML.

DEFINITION Markdown is a commonly used text-based markup language that is easy to read but can also be converted to HTML. It is the common format used by README files on GitHub, and I use it to write blog posts, for example. For an introduction to Markdown, see the GitHub guide at http://mng.bz/o1rp.

We'll use the popular Markdig library (https://github.com/xoofx/markdig) to create the Markdown Tag Helper. This library converts a string containing Markdown to an HTML string. You can install Markdig by running dotnet add package Markdig, by using the NuGet explorer in Visual Studio, or by adding a <PackageReference> to your .csproj file:

<PackageReference Include="Markdig" Version="0.30.4" />

The Markdown Tag Helper that we’ll create shortly can be used by adding elements to your Razor Page, as shown in the following listing.

Listing 32.3 Using a Markdown Tag Helper in a Razor Page

@page
@model IndexModel
@{
var showContent = true;
}
<markdown> ❶
## This is a markdown title ❷
This is a markdown list: ❸
* Item 1 ❸
* Item 2 ❸
<div if="showContent"> ❹
Content is shown when showContent is true ❹
</div> ❹
</markdown>

❶ Adds the Markdown Tag Helper using the <markdown> element
❷ Creates titles in Markdown using # to denote h1, ## to denote h2, and so on
❸ Markdown converts simple lists to HTML <ul> elements.
❹ Razor content can be nested inside other Tag Helpers.

The Markdown Tag Helper renders content with these steps:

  1. Render any Razor content inside the Tag Helper. This includes executing any nested Tag Helpers and C# code inside the Tag Helper. Listing 32.3 uses the IfTagHelper, for example.

  2. Convert the resulting string to HTML using the Markdig library.

  3. Replace the content with the rendered HTML and remove the Tag Helper <markdown> element.

The following listing shows a simple approach to implementing a Markdown Tag Helper using Markdig. Markdig supports many additional extensions and features that you could enable, but the overall pattern of the Tag Helper would be the same.

Listing 32.4 Implementing a Markdown Tag Helper using Markdig

public class MarkdownTagHelper : TagHelper ❶
{
    public override async Task ProcessAsync(
        TagHelperContext context, TagHelperOutput output)
    {
        TagHelperContent markdownRazorContent = await ❷
            output.GetChildContentAsync(); ❷
        string markdown = ❸
            markdownRazorContent.GetContent(); ❸
        string html = Markdig.Markdown.ToHtml(markdown); ❹
        output.Content.SetHtmlContent(html); ❺
        output.TagName = null; ❻
    }
}

❶ The Markdown Tag Helper will use the <markdown> element.
❷ Retrieves the contents of the <markdown> element
❸ Renders the Razor contents to a string
❹ Converts the Markdown string to HTML using Markdig
❺ Writes the HTML content to the output
❻ Removes the <markdown> element from the content

When rendered to HTML, the Markdown content in listing 32.3 becomes

<h2>This is a markdown title</h2>
<p>This is a markdown list:</p>
<ul>
<li>Item 1</li>
<li>Item 2</li>
</ul>
<div>
Content is shown when showContent is true
</div>

NOTE In listing 32.4 we implemented ProcessAsync() instead of Process() because we called the async method GetChildContentAsync(). You should call async methods only from other async methods; blocking on them synchronously can cause problems such as thread-pool starvation. For more details, see Microsoft's "ASP.NET Core Best Practices" at http://mng.bz/KM7X.

The Tag Helpers in this section represent a small sample of possible avenues you could explore, but they cover the two broad categories: Tag Helpers for rendering new content and Tag Helpers for controlling the rendering of other elements.

TIP For further details and examples, see Microsoft’s “Author Tag Helpers in ASP.NET Core” documentation at http://mng.bz/Idb0.

Tag Helpers can be useful for providing small pieces of isolated, reusable functionality like this, but they’re not designed to provide larger, application-specific sections of an app or to make calls to business-logic services. Instead, you should use view components, as you’ll see in the next section.‌

32.2 View components: Adding logic to partial views‌

In this section you’ll learn about view components, which operate independently of the main Razor Page and can be used to encapsulate complex business logic. You can use view components to keep your main Razor Page focused on a single task—rendering the main content—instead of also being responsible for other sections of the page.

If you think about a typical website, you’ll notice that it may have multiple independent dynamic sections in addition to the main content. Consider Stack Overflow, shown in figure 32.3. As well as the main body of the page, which shows questions and answers, there’s a section showing the current logged-in user, a panel for blog posts and related items, and a section for job suggestions.


Figure 32.3 The Stack Overflow website has multiple sections that are independent of the main content but contain business logic and complex rendering logic.

Each of these sections could be rendered as a view component in ASP.NET Core.

Each of these sections is effectively independent of the main content. Each section contains business logic (deciding which posts or ads to show), database access (loading the details of the posts), and rendering logic for how to display the data.

In chapter 7 you saw that you can use layouts and partial views to split the rendering of a view template into similar sections, but partial views aren’t a good fit for this example. Partial views let you encapsulate view rendering logic but not business logic that’s independent of the main page content. Instead, view components provide this functionality, encapsulating both the business logic and rendering logic for displaying a small section of the page. You can use DI to provide access to a database context, and you can test view components independently of the view they generate, much like MVC and API controllers. Think of them as being a bit like mini MVC controllers or mini Razor Pages, but you invoke them directly from a Razor view instead of in response to an HTTP request.

TIP View components are comparable to child actions from the legacy .NET Framework version of ASP.NET, in that they provide similar functionality. Child actions don’t exist in ASP.NET Core.

View components vs. Razor Components and Blazor

In this book I focus on server-side rendered applications using Razor Pages and API applications using minimal APIs and web API controllers. .NET 7 also has a different approach to building ASP.NET Core applications: Blazor. I don’t cover Blazor in this book, so I recommend reading Blazor in Action, by Chris Sainty (Manning, 2021).‌

Blazor has two programming models, client-side and server-side, but both approaches use Blazor components (confusingly, officially called Razor components). Blazor components have a lot of parallels with view components, but they live in a fundamentally different world. Blazor components can interact easily, but you can’t use them with Tag Helpers or view components, and it’s hard to combine them with Razor Page form posts.

Nevertheless, if you need an island of rich client-side interactivity in a single Razor Page, you can embed a Blazor component in a Razor Page, as shown in the “Render components from a page or view” section of the “Prerender and integrate ASP.NET Core Razor components” documentation at http://mng.bz/PPen. You could also use Blazor components as a way to replace Asynchronous JavaScript and XML (AJAX) calls in your Razor Pages, as I show in my blog entry “Replacing AJAX calls in Razor Pages with Razor Components and Blazor” at http://mng.bz/9MJj.

If you don’t need the client-side interactivity of Blazor, view components are still the best option for isolated sections in Razor Pages. They interoperate cleanly with your Razor Pages; have no additional operational overhead; and use familiar concepts like layouts, partial views, and Tag Helpers. For more details on why you should continue to use view components, see my “Don’t replace your View Components with Razor Components” blog entry at http://mng.bz/1rKq.

In this section you’ll see how to create a custom view component for the recipe app you built in previous chapters, as shown in figure 32.4. If the current user is logged in, the view component displays a panel with a list of links to the user’s recently created recipes. For unauthenticated users, the view component displays links to the login and register actions.


Figure 32.4 The view component displays different content based on the currently logged-in user. It includes both business logic (determining which recipes to load from the database) and rendering logic (specifying how to display the data).

This component is a great candidate for a view component, as it contains database access and business logic (choosing which recipes to display) as well as rendering logic (deciding how the panel should be displayed).

TIP Use partial views when you want to encapsulate the rendering of a specific view model or part of a view model. Consider using a view component when you have rendering logic that requires business logic or database access or when the section is logically distinct from the main page content.

You invoke view components directly from Razor views and layouts using a Tag Helper-style syntax with a vc: prefix:

<vc:my-recipes number-of-recipes="3">
</vc:my-recipes>

Custom view components typically derive from the ViewComponent base class and implement an InvokeAsync() method, as shown in listing 32.5. Deriving from this base class allows access to useful helper methods in much the same way that deriving from the ControllerBase class does for API controllers. Unlike with API controllers, the parameters passed to InvokeAsync don’t come from model binding. Instead, you pass the parameters to the view component using properties on the Tag Helper element in your Razor view.‌‌

Listing 32.5 A custom view component to display the current user’s recipes

public class MyRecipesViewComponent : ViewComponent ❶
{
    private readonly RecipeService _recipeService; ❷
    private readonly UserManager<ApplicationUser> _userManager; ❷

    public MyRecipesViewComponent(RecipeService recipeService, ❷
        UserManager<ApplicationUser> userManager) ❷
    { ❷
        _recipeService = recipeService; ❷
        _userManager = userManager; ❷
    } ❷

    public async Task<IViewComponentResult> InvokeAsync( ❸
        int numberOfRecipes) ❹
    {
        if (!User.Identity.IsAuthenticated)
        {
            return View("Unauthenticated"); ❺
        }

        var userId = _userManager.GetUserId(HttpContext.User); ❻
        var recipes = await _recipeService.GetRecipesForUser( ❻
            userId, numberOfRecipes);

        return View(recipes); ❼
    }
}

❶ Deriving from the ViewComponent base class provides useful methods like View().
❷ You can use DI in a view component.
❸ InvokeAsync renders the view component. It should return a Task<IViewComponentResult>.
❹ You can pass parameters to the component from the view.
❺ Calling View() will render a partial view with the provided name.
❻ You can use async external services, allowing you to encapsulate logic in your business domain.
❼ You can pass a view model to the partial view. Default.cshtml is used by default.

This custom view component handles all the logic you need to render a list of recipes when the user is logged in or a different view if the user isn’t authenticated. The name of the view component is derived from the class name, like Tag Helpers. Alternatively, you can apply the [ViewComponent] attribute to the class and set a different name entirely.
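For example, a quick sketch of renaming the component with the attribute (the RecipePanel name here is purely illustrative):

[ViewComponent(Name = "RecipePanel")] // now invoked with the kebab-cased name, <vc:recipe-panel>
public class MyRecipesViewComponent : ViewComponent
{
    // ...
}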

The InvokeAsync method must return a Task<IViewComponentResult>. This is similar to the way you can return IActionResult from an action method or a page handler, but it’s more restrictive; view components must render some sort of content, so you can’t return status codes or redirects. You’ll typically use the View() helper method to render a partial view template (as in the previous listing), though you can also return a string directly using the Content() helper method, which will HTML-encode the content and render it to the page directly.‌‌

You can pass any number of parameters to the InvokeAsync method. The name of the parameters (in this case, numberOfRecipes) is converted to kebab-case and exposed as a property in the view component’s Tag Helper (<number-of-recipes>). You can provide these parameters when you invoke the view component from a view, and you’ll get IntelliSense support, as shown in figure 32.5.


Figure 32.5 Visual Studio provides IntelliSense support for the method parameters of a view component’s InvokeAsync method. The parameter name, in this case numberOfRecipes, is converted to kebab-case for use as an attribute in the Tag Helper.

View components have access to the current request and HttpContext. In listing 32.5 you can see that we’re checking whether the current request was from an authenticated user. You can also see that we’ve used some conditional logic. If the user isn’t authenticated, we render the “Unauthenticated” Razor template; if they’re authenticated, we render the default Razor template and pass in the view models loaded from the database.

NOTE If you don’t specify a specific Razor view template to use in the View() function, view components use the template name Default.cshtml.

The partial views for view components work similarly to other Razor partial views that you learned about in chapter 7, but they’re stored separately from them. You must create partial views for view components at one of these locations:

• Views/Shared/Components/ComponentName/TemplateName

• Pages/Shared/Components/ComponentName/TemplateName

Both locations work, so for Razor Pages apps I typically use the Pages/ folder. For the view component in listing 32.5, for example, you’d create your view templates at

• Pages/Shared/Components/MyRecipes/Default.cshtml
• Pages/Shared/Components/MyRecipes/Unauthenticated.cshtml
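For illustration, a Default.cshtml for this component might look something like the following sketch; the Recipe type with Id and Name properties and the /Recipes/View page are assumptions for this example, not part of the listings above.

@model IEnumerable<Recipe>

<div class="panel">
    <h4>Your recent recipes</h4>
    <ul>
        @foreach (var recipe in Model)
        {
            <li>
                <a asp-page="/Recipes/View" asp-route-id="@recipe.Id">@recipe.Name</a>
            </li>
        }
    </ul>
</div>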

This was a quick introduction to view components, but it should get you a long way. View components are a simple way to embed pockets of isolated, complex logic in your Razor layouts. Having said that, be mindful of these caveats:

• View component classes must be public, non-nested, and nonabstract classes.

• Although they’re similar to MVC controllers, you can’t use filters with view components.

• You can use layouts in your view components’ views to extract rendering logic common to a specific view component. This layout may contain @sections, as you saw in chapter 7, but these sections are independent of the main Razor view’s layout.

• View components are isolated from the Razor Page they’re rendered in, so you can’t, for example, define a @section in a Razor Page layout and then add that content from a view component; the contexts are completely separate.

• When using the <vc:my-recipes> Tag Helper syntax to invoke your view component, you must import it as a custom Tag Helper, as you saw in section 32.1.

• Instead of using the Tag Helper syntax, you can invoke the view component from a view directly through the Component property (an IViewComponentHelper), though I don’t recommend using this syntax, as in this example:

@await Component.InvokeAsync("MyRecipes", new { numberOfRecipes = 3 })

We’ve covered Tag Helpers and view components, which are both features of the Razor engine in ASP.NET Core. In the next section you’ll learn about a different but related topic: how to create a custom DataAnnotations attribute. If you’ve used older versions of ASP.NET, this will be familiar, but ASP.NET Core has a couple of tricks up its sleeve to help you out.‌

32.3 Building a custom validation attribute‌

In this section you’ll learn how to create a custom DataAnnotations validation attribute that restricts the values a string property may take. You’ll then learn how you can expand the functionality to be more generic by delegating to a separate service that is configured in your DI container. This will allow you to create custom domain-specific validations for your apps.

We looked at model binding in chapter 7, where you saw how to use the built-in DataAnnotations attributes in your binding models to validate user input. These provide several built-in validations, such as

• [Required]—The property isn’t optional and must be provided.

• [StringLength(min, max)]—The length of the string value must be between min and max characters.

• [EmailAddress]—The value must have a valid email address format.

But what if these attributes don’t meet your requirements? Consider the following listing, which shows a binding model from a currency converter application. The model contains three properties: the currency to convert from, the currency to convert to, and the quantity.

Listing 32.6 Currency converter initial binding model

public class CurrencyConverterModel
{
    [Required] ❶
    [StringLength(3, MinimumLength = 3)] ❷
    public string CurrencyFrom { get; set; }

    [Required] ❶
    [StringLength(3, MinimumLength = 3)] ❷
    public string CurrencyTo { get; set; }

    [Required] ❶
    [Range(1, 1000)] ❸
    public decimal Quantity { get; set; }
}

❶ All the properties are required.
❷ The strings must be exactly three characters.
❸ The quantity can be between 1 and 1000.

There’s some basic validation on this model, but during testing you identify a problem: users can enter any three-letter string for the CurrencyFrom and CurrencyTo properties. Users should be able to choose only a valid currency code, like "USD" or "GBP", but someone attacking your application could easily send "XXX" or "£$%".

Assuming that you support a limited set of currencies—say, GBP, USD, EUR, and CAD—you could handle the validation in a few ways. One way would be to validate the CurrencyFrom and CurrencyTo values within the Razor Page handler method, after model binding and attribute validation has already occurred.
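To make that first option concrete, here’s a rough sketch of handler-level validation; the ConvertModel page, its Input property, and the Success page are hypothetical names used only for this example.

public class ConvertModel : PageModel
{
    private static readonly string[] _allowedCodes = { "GBP", "USD", "EUR", "CAD" };

    [BindProperty]
    public CurrencyConverterModel Input { get; set; }

    public IActionResult OnPost()
    {
        // Extra checks in the handler, after attribute validation has already run
        if (!_allowedCodes.Contains(Input.CurrencyFrom))
        {
            ModelState.AddModelError("Input.CurrencyFrom", "Not a valid currency code");
        }
        if (!_allowedCodes.Contains(Input.CurrencyTo))
        {
            ModelState.AddModelError("Input.CurrencyTo", "Not a valid currency code");
        }

        if (!ModelState.IsValid)
        {
            return Page();
        }

        // ...perform the conversion...
        return RedirectToPage("Success");
    }
}

This works, but it scatters the rule across handlers, which is part of the reason to push it into an attribute instead.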

Another way would be to use a [RegularExpression] attribute to allow only the expected strings. The approach I’m going to take here is to create a custom ValidationAttribute. The goal is to have a custom validation attribute you can apply to the CurrencyFrom and CurrencyTo properties to restrict the range of valid values. This will look something like the following example.

Listing 32.7 Applying custom validation attributes to the binding model

public class CurrencyConverterModel
{
    [Required]
    [StringLength(3, MinimumLength = 3)]
    [CurrencyCode("GBP", "USD", "CAD", "EUR")] ❶
    public string CurrencyFrom { get; set; }

    [Required]
    [StringLength(3, MinimumLength = 3)]
    [CurrencyCode("GBP", "USD", "CAD", "EUR")] ❶
    public string CurrencyTo { get; set; }

    [Required]
    [Range(1, 1000)]
    public decimal Quantity { get; set; }
}

❶ CurrencyCodeAttribute validates that the property has one of the provided values.

Creating a custom validation attribute is simple; you can start with the ValidationAttribute base class, and you have to override only a single method. The next listing shows how you could implement CurrencyCodeAttribute to ensure that the currency codes provided match the expected values.

Listing 32.8 Custom validation attribute for currency codes

public class CurrencyCodeAttribute : ValidationAttribute ❶
{
    private readonly string[] _allowedCodes; ❷

    public CurrencyCodeAttribute(params string[] allowedCodes) ❷
    { ❷
        _allowedCodes = allowedCodes; ❷
    } ❷

    protected override ValidationResult IsValid( ❸
        object value, ValidationContext context) ❸
    {
        if (value is not string code ❹
            || !_allowedCodes.Contains(code)) ❺
        { ❺
            return new ValidationResult("Not a valid currency code"); ❺
        }

        return ValidationResult.Success; ❻
    }
}

❶ Derives from ValidationAttribute to ensure that your attribute is used during validation
❷ The attribute takes in an array of allowed currency codes.
❸ The IsValid method is passed the value to validate and a context object.
❹ Tries to cast the value to a string and store it in the code variable
❺ If the value provided isn’t a string, is null, or isn’t an allowed code, returns an error...
❻ ...otherwise, returns a success result.

As you know from chapter 16, validation occurs in the filter pipeline after model binding, before the action or Razor Page handler executes. The validation framework calls IsValid() for each instance of ValidationAttribute on the model property being validated. The framework passes in value (the value of the property being validated) and the ValidationContext to each attribute in turn. The context object contains details that you can use to validate the property.

Of particular note is the ObjectInstance property. You can use this to access the top-level model being validated when you validate a subproperty. For example, if the CurrencyFrom property of the CurrencyConverterModel is being validated, you can access the top-level object from the ValidationAttribute as follows:

var model = context.ObjectInstance as CurrencyConverterModel;

This can be useful if the validity of a property depends on the value of another property of the model. For example, you might want a validation rule that says that GBP is a valid value for CurrencyTo except when CurrencyFrom is also GBP. ObjectInstance makes these sorts of comparison validations easy.
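As a rough sketch of that kind of rule (not code from the listings above), an IsValid() implementation using ObjectInstance might look like this:

protected override ValidationResult IsValid(
    object value, ValidationContext context)
{
    // Illustrative cross-property rule: CurrencyTo must differ from CurrencyFrom
    var model = context.ObjectInstance as CurrencyConverterModel;
    if (model is not null
        && value is string currencyTo
        && currencyTo == model.CurrencyFrom)
    {
        return new ValidationResult("Cannot convert a currency to itself");
    }

    return ValidationResult.Success;
}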

NOTE Although using ObjectInstance makes it easy to make model-level comparisons like these, it reduces the portability of your validation attribute. In this case, you would be able to use the attribute only in the application that defines CurrencyConverterModel.

Returning to listing 32.8, within the IsValid() method you can cast the value provided to the required data type (in this case, string) and check it against the list of allowed codes. If the code isn’t allowed, the attribute returns a ValidationResult with an error message indicating that there was a problem. If the code is allowed, ValidationResult.Success is returned, and the validation succeeds.

Putting your attribute to the test, figure 32.6 shows that when CurrencyTo has an invalid value (£$%), validation for the property fails, and an error is added to the ModelState. You could do some tidying-up of this attribute to set a custom message, allow nulls, or display the name of the property that’s invalid, but all the important features are there.


Figure 32.6 The Watch window of Visual Studio showing the result of validation using the custom ValidationAttribute. The user has provided an invalid currencyTo value, £$%. Consequently, ModelState isn’t valid and contains a single error with the message "Not a valid currency code".
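If you did want to do that tidying-up, a possible sketch of a revised IsValid() follows; it reuses the _allowedCodes field from listing 32.8, skips null values (leaving those to [Required]), and reports the property name via the ValidationContext:

protected override ValidationResult IsValid(
    object value, ValidationContext context)
{
    if (value is null)
    {
        return ValidationResult.Success; // let [Required] handle missing values
    }

    if (value is not string code || !_allowedCodes.Contains(code))
    {
        // context.DisplayName is the display name of the property being validated
        return new ValidationResult(
            $"{context.DisplayName} must be one of: {string.Join(", ", _allowedCodes)}",
            new[] { context.MemberName });
    }

    return ValidationResult.Success;
}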

The main feature missing from this custom attribute is client-side validation. You’ve seen that the attribute works well on the server side, but if the user entered an invalid value, they wouldn’t be informed until after the invalid value had been sent to the server. That’s safe, and it’s as much as you need to do for security and data-consistency purposes, but client-side validation can improve the user experience by providing immediate feedback.

You can implement client-side validation in several ways, but it’s heavily dependent on the JavaScript libraries you use to provide the functionality. Currently, ASP.NET Core Razor templates rely on jQuery for client-side validation. See the “Custom client-side validation” section of Microsoft’s “Model validation in ASP.NET Core MVC and Razor Pages” documentation for an example of creating a jQuery Validation adapter for your attributes: http://mng.bz/Wd6g.

TIP Instead of using the official jQuery-based validation libraries, you could use the open source aspnet-client-validation library (https://github.com/haacked/aspnet-client-validation), as I describe on my blog at http://mng.bz/AoXe.

Another improvement to your custom validation attribute would be to load the list of currencies from a DI service, such as an ICurrencyProvider. Unfortunately, you can’t use constructor DI in your CurrencyCodeAttribute, as you can pass only constant values to the constructor of an Attribute in .NET. In chapter 22 we worked around this limitation for filters by using [TypeFilter] or [ServiceFilter], but there’s no such solution for ValidationAttribute.

Instead, for validation attributes you must use the service locator pattern. As I discussed in chapter 9, this antipattern is best avoided where possible, but unfortunately it’s necessary in this case. Instead of declaring an explicit dependency via a constructor, you must ask the DI container directly for an instance of the required service.

Listing 32.9 shows how you could rewrite listing 32.8 to load the allowed currencies from an instance of ICurrencyProvider instead of hardcoding the allowed values in the attribute’s constructor. The attribute calls the GetRequiredService<T>() extension method on the ValidationContext to resolve an instance of ICurrencyProvider from the DI container. Note that ICurrencyProvider is a hypothetical service that would need to be registered with the DI container in Program.cs.

Listing 32.9 Using the service-locator pattern to access services

public class CurrencyCodeAttribute : ValidationAttribute
{
    protected override ValidationResult IsValid(
        object value, ValidationContext context)
    {
        var provider = context ❶
            .GetRequiredService<ICurrencyProvider>(); ❶
        var allowedCodes = provider.GetCurrencies(); ❷

        if (value is not string code ❸
            || !allowedCodes.Contains(code)) ❸
        { ❸
            return new ValidationResult("Not a valid currency code"); ❸
        } ❸

        return ValidationResult.Success; ❸
    }
}

❶ Retrieves an instance of ICurrencyProvider directly from the DI container
❷ Fetches the currency codes using the provider
❸ Validates the property as before

TIP The generic GetRequiredService<T> method is an extension method available in the Microsoft.Extensions.DependencyInjection namespace.‌

The default DataAnnotations validation system can be convenient due to its declarative nature, but this has tradeoffs, as shown by the dependency injection problem above. Luckily, you can replace the validation system your application uses, as shown in the following section.

32.4 Replacing the validation framework with FluentValidation‌

In this section you’ll learn how to replace the DataAnnotations-based validation framework that’s used by default in Razor Pages and MVC Controllers. You’ll see the arguments for why you might want to do this and learn how to use a third-party alternative: FluentValidation. This open-source project allows you to define the validation requirements of your models separately from the models themselves. This separation can make some types of validation easier and ensures that each class in your application has a single responsibility.

Validation is an important part of the model-binding process in ASP.NET Core. In chapter 7 you learned that minimal APIs don’t have any validation built in, so you’re free to choose whichever framework you like. I demonstrated using DataAnnotations, but you could easily choose a different validation framework.
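As a reminder of what that can look like (this is just a sketch of one way to run the attributes by hand, not necessarily the approach from chapter 7), you could validate a bound model in a minimal API endpoint using the DataAnnotations Validator helper:

app.MapPost("/convert", (CurrencyConverterModel model) =>
{
    // Minimal APIs don't validate automatically, so run the attributes manually
    var results = new List<ValidationResult>();
    var isValid = Validator.TryValidateObject(
        model, new ValidationContext(model), results, validateAllProperties: true);

    if (!isValid)
    {
        var errors = results
            .GroupBy(r => r.MemberNames.FirstOrDefault() ?? string.Empty)
            .ToDictionary(
                g => g.Key,
                g => g.Select(r => r.ErrorMessage ?? "Invalid value").ToArray());
        return Results.ValidationProblem(errors);
    }

    return Results.Ok();
});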

In Razor Pages and MVC, however, the DataAnnotations validation framework is built into ASP.NET Core. You can apply DataAnnotations attributes to properties of your binding models to define your requirements, and ASP.NET Core automatically validates them. In section 32.3 we even created a custom validation attribute.

But ASP.NET Core is flexible. You can replace whole chunks of the Razor Pages and MVC frameworks if you like. The validation system is one such area that many people choose to replace.

FluentValidation (https://fluentvalidation.net) is a popular alternative validation framework for ASP.NET Core. It is a mature library, with roots going back well before ASP.NET Core was conceived of. With FluentValidation you write your validation code separately from your binding model code. This gives several advantages:

• You’re not restricted to the limitations of Attributes, such as the dependency injection problem we had to work around in listing 32.9.

• It’s much easier to create validation rules that apply to multiple properties, such as to ensure that an EndDate property contains a later value than a StartDate property. Achieving this with DataAnnotations attributes is possible but difficult.‌

• It’s generally easier to test FluentValidation validators than DataAnnotations attributes (see the sketch after this list).

• The validation is strongly typed, unlike DataAnnotations attributes, where it’s possible to apply attributes in ways that don’t make sense, such as applying an [EmailAddress] attribute to an int property.

• Separating your validation logic from the model itself arguably better conforms to the single-responsibility principle (SRP).
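As an example of the testability point above, FluentValidation ships with TestHelper extensions that let you assert on validation results directly; a minimal sketch, using the CurrencyConverterModelValidator you’ll meet in listing 32.11 and whichever test framework you prefer:

using FluentValidation.TestHelper;

var validator = new CurrencyConverterModelValidator();

var result = validator.TestValidate(new CurrencyConverterModel
{
    CurrencyFrom = "XXX", // not an allowed code
    CurrencyTo = "USD",
    Quantity = 10
});

result.ShouldHaveValidationErrorFor(x => x.CurrencyFrom);
result.ShouldNotHaveValidationErrorFor(x => x.CurrencyTo);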

That final point is sometimes given as a reason not to use FluentValidation: FluentValidation separates a binding model from its validation rules. Some people are happy to accept the limitations of DataAnnotations to keep the model and validation rules together.

Before I show how to add FluentValidation to your application, let’s see what FluentValidation validators look like.

32.4.1 Comparing FluentValidation with DataAnnotations attributes‌

To better understand the difference between the DataAnnotations approach and FluentValidation, we’ll convert the binding models from section 32.3 to use FluentValidation. The following listing shows what the binding model from listing 32.7 would look like when used with FluentValidation. It is structurally identical but has no validation attributes.

Listing 32.10 Currency converter initial binding model for use with FluentValidation

public class CurrencyConverterModel
{
    public string CurrencyFrom { get; set; }
    public string CurrencyTo { get; set; }
    public decimal Quantity { get; set; }
}

In FluentValidation you define your validation rules in a separate class, with a class per model to be validated. Typically, these validator classes derive from the AbstractValidator<> base class, which gives you access to the methods and extension methods used to define your validation rules.

The following listing shows a validator for the CurrencyConverterModel, which matches the validations added using attributes in listing 32.7. You create a set of validation rules for a property by calling RuleFor() and chaining method calls such as NotEmpty() from it. This style of method chaining is called a fluent interface, hence the name.

Listing 32.11 A FluentValidation validator for the currency converter binding model

public class CurrencyConverterModelValidator ❶
    : AbstractValidator<CurrencyConverterModel> ❶
{
    private readonly string[] _allowedValues ❷
        = new[] { "GBP", "USD", "CAD", "EUR" }; ❷

    public CurrencyConverterModelValidator() ❸
    {
        RuleFor(x => x.CurrencyFrom) ❹
            .NotEmpty() ❺
            .Length(3) ❺
            .Must(value => _allowedValues.Contains(value)) ❻
            .WithMessage("Not a valid currency code"); ❻

        RuleFor(x => x.CurrencyTo)
            .NotEmpty()
            .Length(3)
            .Must(value => _allowedValues.Contains(value))
            .WithMessage("Not a valid currency code");

        RuleFor(x => x.Quantity)
            .NotNull()
            .InclusiveBetween(1, 1000); ❼
    }
}

❶ The validator inherits from AbstractValidator.
❷ Defines the static list of currency codes that are supported
❸ You define validation rules in the validator’s constructor.
❹ RuleFor is used to add a new validation rule. The lambda syntax allows for strong typing.
❺ There are equivalent rules for common DataAnnotations validation attributes.
❻ You can easily add custom validation rules without having to create separate classes.
❼ Thanks to strong typing, the rules available depend on the property being validated.

Your first impression of this code might be that it’s quite verbose compared with listing 32.7, but remember that listing 32.7 used a custom validation attribute, [CurrencyCode]. The validation in listing 32.11 doesn’t require anything else. The logic implemented by the [CurrencyCode] attribute is right there in the validator, making it easy to reason about. The Must() method can be used to perform arbitrarily complex validations without having the additional layers of indirection required by custom DataAnnotations attributes.‌

On top of that, you’ll notice that you can define only validation rules that make sense for the property being validated. Previously, there was nothing to stop us from applying the [CurrencyCode] attribute to the Quantity property; that’s not possible with FluentValidation.

Of course, just because you can write the custom [CurrencyCode] logic in-line doesn’t necessarily mean you have to. If a rule is used in multiple parts of your application, it may make sense to extract it into a helper class. The following listing shows how you could extract the currency code logic into an extension method that can be used in multiple validators.

Listing 32.12 An extension method for currency validation

public static class ValidationExtensions
{
    public static IRuleBuilderOptions<T, string> ❶
        MustBeCurrencyCode<T>( ❶
            this IRuleBuilder<T, string> ruleBuilder) ❶
    {
        return ruleBuilder ❷
            .Must(value => _allowedValues.Contains(value)) ❷
            .WithMessage("Not a valid currency code"); ❷
    }

    private static readonly string[] _allowedValues = ❸
        new[] { "GBP", "USD", "CAD", "EUR" }; ❸
}

❶ Creates an extension method that can be chained from RuleFor() for string properties
❷ Applies the same validation logic as before
❸ The currency code values to allow

You can then update your CurrencyConverterModelValidator to use the new extension method, removing the duplication in your validator and ensuring consistency across currency-code fields:

RuleFor(x => x.CurrencyTo)
    .NotEmpty()
    .Length(3)
    .MustBeCurrencyCode();

Another advantage of the FluentValidation approach of using standalone validation classes is that they are created using DI, so you can inject services into them. As an example, consider the [CurrencyCode] validation attribute from listing 32.9, which used a service, ICurrencyProvider, from the DI container. This requires using service location to obtain an instance of ICurrencyProvider using an injected context object.‌

With the FluentValidation library, you can inject the ICurrencyProvider directly into your validator, as shown in the following listing. This requires fewer gymnastics to get the desired functionality and makes your validator’s dependencies explicit.

Listing 32.13 Currency converter validator using dependency injection

public class CurrencyConverterModelValidator
    : AbstractValidator<CurrencyConverterModel>
{
    public CurrencyConverterModelValidator(ICurrencyProvider provider) ❶
    {
        RuleFor(x => x.CurrencyFrom)
            .NotEmpty()
            .Length(3)
            .Must(value => provider ❷
                .GetCurrencies() ❷
                .Contains(value)) ❷
            .WithMessage("Not a valid currency code");

        RuleFor(x => x.CurrencyTo)
            .NotEmpty()
            .Length(3)
            .MustBeCurrencyCode(provider.GetCurrencies()); ❸

        RuleFor(x => x.Quantity)
            .NotNull()
            .InclusiveBetween(1, 1000);
    }
}

❶ Injects the service using standard constructor dependency injection
❷ Uses the injected service in a Must() rule
❸ Uses the injected service with an extension method

The final feature I’ll show demonstrates how much easier it is to write validators that span multiple properties with FluentValidation. For example, imagine we want to validate that the value of CurrencyTo is different from CurrencyFrom. Using FluentValidation, you can implement this with an overload of Must(), which provides both the model and the property being validated, as shown in the following listing.

Listing 32.14 Using Must() to validate that two properties are different

RuleFor(x => x.CurrencyTo) ❶
    .NotEmpty()
    .Length(3)
    .MustBeCurrencyCode()
    .Must((CurrencyConverterModel model, string currencyTo) ❷
        => currencyTo != model.CurrencyFrom) ❸
    .WithMessage("Cannot convert currency to itself"); ❹

❶ The error message will be associated with the CurrencyTo property.
❷ The Must function passes the top-level model being validated and the current property.
❸ Performs the validation. The currencies must be different.
❹ Uses the provided message as the error message

Creating a validator like this is certainly possible with DataAnnotations attributes, but it requires far more ceremony than the FluentValidation equivalent and is generally harder to test. FluentValidation has many more features for making it easier to write and test your validators, too:

• Complex property validations—Validators can be applied to complex types, as well as to primitive types like string and int, as shown in this section.

• Custom property validators—In addition to simple extension methods, you can create your own property validators for complex validation scenarios.

• Collection rules—When types contain collections, such as List<T>, you can apply validation to each item in the list, as well as to the overall collection (see the sketch after this list).

• RuleSets—You can create multiple collections of rules that can be applied to an object in different circumstances. These can be especially useful if you’re using FluentValidation in additional areas of your application.

• Client-side validation—FluentValidation is a server-side framework, but it emits the same attributes as DataAnnotations attributes to enable client-side validation using jQuery.
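To illustrate the collection rules mentioned in the list above, here’s a minimal sketch; ConversionBatch is a hypothetical type that holds a list of CurrencyConverterModel instances, validated with the parameterless validator from listing 32.11:

public class ConversionBatch
{
    public List<CurrencyConverterModel> Conversions { get; set; } = new();
}

public class ConversionBatchValidator : AbstractValidator<ConversionBatch>
{
    public ConversionBatchValidator()
    {
        // Rule for the collection as a whole
        RuleFor(x => x.Conversions).NotEmpty();

        // Rule applied to every item in the collection
        RuleForEach(x => x.Conversions)
            .SetValidator(new CurrencyConverterModelValidator());
    }
}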

There are many more features, so be sure to browse the documentation at https://docs.fluentvalidation.net for details. In the next section you’ll see how to add FluentValidation to your ASP.NET Core application.‌

32.4.2 Adding FluentValidation to your application‌

Replacing the whole validation system of ASP.NET Core sounds like a big step, but the FluentValidation library makes it easy to add to your application. Simply follow these steps:

  1. Install the FluentValidation.AspNetCore NuGet package using Visual Studio’s NuGet package manager, via the command-line interface (CLI) by running dotnet add package FluentValidation.AspNetCore, or by adding a <PackageReference> to your .csproj file:

    <PackageReference Include="FluentValidation.AspNetCore" Version="11.2.2" />
  2. Configure the FluentValidation library for MVC and Razor Pages in Program.cs by calling builder.Services.AddFluentValidationAutoValidation(). You can further configure the library as shown in listing 32.15.

  3. Register your validators (such as the CurrencyConverterModelValidator from listing 32.13) with the DI container. These can be registered manually, using any scope you choose:

    WebApplicationBuilder builder = WebApplication.CreateBuilder(args);
    builder.Services.AddRazorPages();
    builder.Services.AddFluentValidationAutoValidation();
    builder.Services.AddScoped<IValidator<CurrencyConverterModel>,
        CurrencyConverterModelValidator>();

Alternatively, you can allow FluentValidation to automatically register all your validators using the options shown in listing 32.15.

For such a mature library, FluentValidation has relatively few configuration options to decipher. The following listing shows some of the options available; in particular, it shows how to automatically register all the custom validators in your application and disable DataAnnotations validation.

Listing 32.15 Configuring FluentValidation in an ASP.NET Core application

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddRazorPages();

builder.Services.AddValidatorsFromAssemblyContaining<Program>(); ❶
builder.Services.AddFluentValidationAutoValidation( ❷
        x => x.DisableDataAnnotationsValidation = true) ❷
    .AddFluentValidationClientsideAdapters(); ❸

ValidatorOptions.Global.LanguageManager.Enabled = false; ❹

❶ Instead of manually registering validators, FluentValidation can autoregister them for you.
❷ Setting to true disables DataAnnotations validation completely for model binding.
❸ Enables integration with client-side validation via data-* attributes
❹ FluentValidation has full localization support, but you can disable it if you don’t need it.

It’s important to understand that if you don’t set DisableDataAnnotationsValidation to true, ASP.NET Core will run validation with both DataAnnotations and FluentValidation. That may be useful if you’re in the process of migrating from one system to the other, but otherwise, I recommend disabling it. Having your validation split between both places seems like the worst of both worlds!

One final thing to consider is where to put your validators in your solution. There are no technical requirements for this; if you’ve registered your validator with the DI container, it will be used correctly, so the choice is up to you. I prefer to place validators close to the models they’re validating.

For Razor Pages binding-model validators, I create the validator as a nested class of the PageModel, in the same place as I create the InputModel, as described in chapter 16. That gives a class hierarchy in the Razor Page similar to the following:

public class IndexPage : PageModel
{
    public class InputModel { }
    public class InputModelValidator : AbstractValidator<InputModel> { }
}

That’s my preference. Of course, you’re free to adopt another approach if you prefer.

That brings us to the end of this chapter on custom Razor Pages components. When you combine it with the components in the previous chapter, you’ve got a great base for extending your ASP.NET Core applications to meet your needs. It’s a testament to ASP.NET Core’s design that you can swap out whole sections like the Validation framework entirely. If you don’t like how some part of the framework works, see whether someone has written an alternative!‌

Summary

With Tag Helpers, you can bind your data model to HTML elements, making it easier to generate dynamic HTML. Tag Helpers can customize the elements they’re attached to, add attributes, and customize how they’re rendered to HTML. This can greatly reduce the amount of markup you need to write.

The name of a Tag Helper class dictates the name of the element in the Razor templates, so the SystemInfoTagHelper corresponds to the <system-info> element. You can choose a different element name by adding the [HtmlTargetElement] attribute to your Tag Helper.

You can set properties on your Tag Helper object from Razor syntax by decorating the property with an [HtmlAttributeName("name")] attribute and providing a name. You can set these properties from Razor using HTML attributes, as in <system-info name="value">.

The TagHelperOutput parameter passed to the Process or ProcessAsync methods controls the HTML that’s rendered to the page. You can set the element type with the TagName property and set the inner content using Content.SetContent() or Content.SetHtmlContent().

You can prevent inner Tag Helper content from being processed by calling SuppressOutput(), and you can remove the element by setting TagName to null. This is useful if you want to conditionally render elements to the response.

You can retrieve the contents of a Tag Helper by calling GetChildContentAsync() on the TagHelperOutput parameter. You can then render this content to a string by calling GetContent(). This will render any Razor expressions and Tag Helpers to HTML, allowing you to manipulate the contents.

View components are like partial views, but they allow you to use complex business and rendering logic. You can use them for sections of a page, such as the shopping cart, a dynamic navigation menu, or suggested articles.

Create a view component by deriving from the ViewComponent base class and implementing InvokeAsync(). You can pass parameters to this function from the Razor view template using HTML attributes, in a similar way to Tag Helpers.

View components can use DI, access the HttpContext, and render partial views. The partial views should be stored in the Pages/Shared/Components/<Name>/ folder, where Name is the name of the view component. If not specified, view components will look for a default view named Default.cshtml.

You can create a custom DataAnnotations attribute by deriving from ValidationAttribute and overriding the IsValid method. You can use this to decorate your binding model properties and perform arbitrary validation.

You can’t use constructor DI with custom validation attributes. If the validation attribute needs access to services from the DI container, you must use the Service Locator pattern to load them from the validation context, using the GetService<T> method.

FluentValidation is an alternative validation system that can replace the default DataAnnotations validation system. It is not based on attributes, which makes it easier to write custom validation rules and makes those rules easier to test.

To create a validator for a model, create a class derived from AbstractValidator<> and call RuleFor<>() in the constructor to add validation rules. You can chain multiple requirements on RuleFor<>() in the same way that you could add multiple DataAnnotations attributes to a model.

If you need to create a custom validation rule, you can use the Must() method to specify a predicate. If you wish to reuse the validation rule across multiple models, encapsulate the rule as an extension method to reduce duplication.

To add FluentValidation to your application, install the FluentValidation.AspNetCore NuGet package, call AddFluentValidationAutoValidation() in Program.cs, and register your validators with the DI container. This will add FluentValidation validations in addition to the built-in DataAnnotations system.

To remove the DataAnnotations validation system and use FluentValidation only, set the DisableDataAnnotationsValidation option to true in your call to AddFluentValidationAutoValidation(). Favor this approach where possible to avoid running validation methods from two different systems.

You can allow FluentValidation to automatically discover and register all the validators in your application by calling AddValidatorsFromAssemblyContaining<T>(), where T is a type in the assembly to scan. This means you don’t have to register each validator in your application with the DI container individually.