
Chapter 13
Offline Concurrency Design Patterns
Introduction
The design patterns in this category can be divided into the following four main groups:

Optimistic offline lock: Prevents collisions between concurrent business transactions by detecting the conflict and rolling back the transaction.
Pessimistic offline lock: Prevents collisions between concurrent transactions by making data available to only one transaction at a time.
Coarse-grained lock: Defines a single lock on a set of related objects.
Implicit lock: Makes the framework responsible for managing locks.
Note that some of the figures in this chapter are adapted from Martin Fowler's book, Patterns of Enterprise Application Architecture1.

Structure
In this chapter, we will cover the following topics:

Offline concurrency design patterns
Optimistic offline lock
Pessimistic offline lock
Coarse-grained lock
Implicit lock
Objectives
In this chapter, you will deal with different transaction concepts and learn how to manage concurrency and transaction management problems with the help of different design patterns. You will learn how to prevent unwanted problems caused by concurrency by locking different resources and managing requests, giving your software the ability to process requests simultaneously and deliver better performance to users.

Offline concurrency design patterns
One of the most complicated parts of software production is dealing with concurrency-related topics. Whenever several threads or processes have access to the same data, there is a possibility of concurrency problems, so concurrency must be considered when building software. There are solutions at different levels for managing concurrency in enterprise applications; for example, you can use transactions, the built-in features of relational databases, and so on. This does not mean, however, that concurrency management can simply be delegated to these methods and tools.

One of the most frequent concurrency problems in programs is the lost update. With this problem, the update made by one transaction is overwritten by another transaction, and the changes of the first transaction are lost. The next concurrency problem is the inconsistent read. Here, the data changes between two read operations; therefore, the data read at the beginning does not match the data read later.

Both problems endanger the correctness of the data and the ability to trust it, which becomes the root of strange and wrong behavior in the program. If we focus too much on data correctness, we end up with a solution where transactions wait for previous transactions to finish their work. Besides increasing data correctness, this waiting threatens the data's liveness, so it is always necessary to balance correctness against liveness.

There are different methods to manage concurrency. One way is to allow multiple people to read the data but, when saving changes, accept only the changes from someone whose version of the data matches the version in the data source. This is what the optimistic offline lock provides. Another method is to lock the data read by one person and not allow anyone else to read it until the first person finishes their transaction. This is what the pessimistic offline lock provides. The choice between these two methods is a choice between collision detection and collision prevention.

Deadlock is an important hazard when using the pessimistic method. There are different ways to detect and manage deadlocks. One is to sacrifice one party and cancel its operation, so that its locks are also released. Another is to set a lifetime for the locks so that if a lock is not released within a certain period, the transaction is automatically canceled and the lock is released, preventing the deadlock.

When talking about concurrency, the topic of transactions usually comes up as well. A transaction has an important property: either all of its changes are applied to the database or none are. Software transactions have four important characteristics known as Atomicity-Consistency-Isolation-Durability (ACID):

Atomicity: Either the whole work is done, or none of it is. If one of the steps fails along the way, all the changes must be rolled back; otherwise, the changes can be committed at the end of the work. For example, in an application that transfers funds from one account to another, the atomicity property ensures that if a debit is made successfully from one account, the corresponding credit is made to the other.
Consistency: All resources before and after a transaction must be consistent and healthy. For example, in an application that transfers funds from one account to another, the consistency property ensures that the total value of funds in both accounts is the same at the start and end of each transaction.
Isolation: The results obtained during a transaction should not be accessible to other transactions until the completion of that transaction. For example, in an application that transfers funds from one account to another, the isolation property ensures that another transaction sees the transferred funds in one account or the other but not in both.
Durability: In case of an error or problem, the results of successful transaction changes should not be lost. For example, in an application that transfers funds from one account to another, the durability property ensures that the changes made to each account will not be reversed.
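The fund-transfer example maps directly onto a database transaction. The following is a minimal sketch using ADO.NET; the connection string and the account table and column names are assumptions made for illustration:

using Microsoft.Data.SqlClient; //or System.Data.SqlClient

using var connection = new SqlConnection(connectionString);
connection.Open();
using var transaction = connection.BeginTransaction();
try
{
    new SqlCommand("UPDATE account SET Balance = Balance - 100 WHERE Id = 1",
        connection, transaction).ExecuteNonQuery();
    new SqlCommand("UPDATE account SET Balance = Balance + 100 WHERE Id = 2",
        connection, transaction).ExecuteNonQuery();
    transaction.Commit(); //both changes are applied together...
}
catch
{
    transaction.Rollback(); //...or neither is applied (atomicity)
    throw;
}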
There are three different methods for designing transactions:

Long transaction: A transaction that spans several requests.
Request transaction: A transaction that starts with a request and completes when the request completes.
Late transaction: All read operations are performed outside the transaction, and only the data changes are wrapped in it.
When using a transaction, knowing what will be locked during the transaction is very important. Locking a table during a transaction is usually dangerous because it forces the other transactions that need the table to wait until the lock is released before they can do their work. Different isolation levels can also be set when using transactions, and each isolation level provides different guarantees and behavior. The isolation levels include serializable, repeatable read, read committed, and read uncommitted. SQL Server offers other isolation levels that are beyond this book's scope.
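As a quick illustration, the isolation level can be chosen when starting a transaction in ADO.NET. A minimal sketch, assuming an open SqlConnection named connection:

using System.Data;

//Serializable gives the strongest guarantees but the least concurrency;
//read committed is the SQL Server default.
using var transaction = connection.BeginTransaction(IsolationLevel.Serializable);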

When managing concurrency in transactions, the optimistic offline lock is usually preferred because it is easier to implement and delivers better liveness. Its big drawback, however, is that the user only notices an error at the end of their work, when saving, which can lead to dissatisfaction. In such cases, the pessimistic method can be useful, but compared to the optimistic method, it is more complicated to implement and delivers worse liveness.

Another method is to apply the lock not to each object but to a group of objects. This is what the coarse-grained lock delivers. An even better method is the implicit lock, where the layer supertype or the framework manages the concurrency and applies and releases the lock.

Optimistic offline lock
Name:

Optimistic offline lock

Classification:

Offline concurrency design patterns

Also known as:

---

Intent:

Using this design pattern, it is possible to prevent collisions between simultaneous business transactions by identifying the collision and rolling back the transaction.

Motivation, Structure, Implementation, and Sample code:

Suppose a requirement has been raised that different users should be able to update the authors' data. The update operation itself is easy to implement. The problem is that two users may want to update the data of the same author at the same time. In this case, the update made by one of the users will be lost; this is the so-called lost update. To clarify the scenario, consider the following sequence diagram:

Figure 13.1: Update author sequence diagram

As shown in Figure 13.1, User1 first gets the data related to Author1 and starts changing it. Meanwhile, User2 also gets Author1's data, changes it, and stores it in the database. Next, User1 saves their changes in the database. If User1's changes are written to the database, then User2's changes will be lost. To prevent this, you can block User1's save and return an error instead.

The important question is: how can you find out that Author1's data has changed before saving User1's changes? There are different ways to answer this question, one of which is to use a Version field in the database table.

According to the preceding explanation, the proposed requirement can be implemented as follows:

CREATE TABLE author(
    AuthorId INT PRIMARY KEY,
    FirstName VARCHAR(50),
    LastName VARCHAR(50),
    Version INT NOT NULL
)

In this structure, the Version column stores the data's version: every time the record changes, the value in Version increases by one. Whenever an UPDATE or DELETE operation is sent to the table, a condition on Version is sent along with the other conditions. As follows:

SELECT * FROM author WHERE AuthorId = 1

By executing the preceding query, the data related to the author with Id 1 will be delivered to the user. Suppose the data is as follows:

AuthorId: 1
FirstName: Vahid
LastName: Farahmandian
Version: 1

Now that this data has been delivered to User1, this user is busy making changes. Meanwhile, User2 also requests the same data, so the same data is delivered to them. Next, User2 applies their changes and saves them in the database. For this, the following query must be sent to the database:

UPDATE author SET FirstName='Ali', LastName='Rahimi', Version = Version + 1
WHERE AuthorId=1 AND Version = 1

As specified in the WHERE section, the condition related to the Version is sent to the database along with other conditions. After applying these changes, the data in the table will change as follows:

AuthorId: 1
FirstName: Ali
LastName: Rahimi
Version: 2

Then User1 sends their changes to the database. The point here is that the Version value delivered to User1 was equal to 1, so the following query will be sent to the database:

UPDATE author SET FirstName='Vahid', LastName='Hassani', Version = Version + 1
WHERE AuthorId=1 AND Version = 1

No record in the database matches the given conditions, so the database reports that no rows were changed by this query. When no record has been changed, someone else has already modified the desired record, and you can inform the user of this conflict by returning an error. With the preceding explanation, the following code can be considered for the Author:

public async Task<Author> Find(int authorId)
{
    IDataReader reader = await new SqlCommand($"" +
        $"SELECT * " +
        $"FROM author " +
        $"WHERE AuthorId={authorId}", DB.Connection)
        .ExecuteReaderAsync();
    reader.Read();
    return new Author()
    {
        AuthorId = (int)reader["AuthorId"],
        FirstName = (string)reader["FirstName"],
        LastName = (string)reader["LastName"],
        Version = (int)reader["Version"]
    };
}

public async Task<bool> ModifyAuthor(Author author)
{
    var result = await new SqlCommand($"" +
        $"UPDATE author " +
        $"SET FirstName='{author.FirstName}', " +
        $"LastName='{author.LastName}', " +
        $"Version = Version + 1 " +
        $"WHERE AuthorId={author.AuthorId} AND " +
        $"Version={author.Version}", DB.Connection)
        .ExecuteNonQueryAsync();
    if (result == 0)
        throw new DBConcurrencyException();
    else
        return true;
}

As seen in the preceding code, when sending the UPDATE query, the Version condition is sent along with the other conditions. Based on the database's response, a collision is detected, and the user is informed via an exception. The same process applies to the DELETE operation: send the Version condition to the database along with the other conditions of the DELETE.
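For example, a versioned DELETE for the record above would look like this:

DELETE FROM author WHERE AuthorId=1 AND Version = 1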

Notes:

Other methods besides the version can be used to implement this design pattern. For example, columns identifying who changed the record or when it was changed can be used. This method has serious flaws: relying on time can cause errors because the client and server clocks may differ. Another method is to specify all columns in the WHERE clause when modifying the record. The problem here is that the queries may become long, or the database may be unable to use the right index to speed up the work.
Normally, this design pattern cannot detect inconsistent reads. To prevent this problem, you can read the data together with its Version, which requires a suitable isolation level in the database (repeatable read or stronger). This method is costly; a better solution to this problem is the coarse-grained lock design pattern.
One of the famous applications of this design pattern is in the design of Source Control Management (SCM) systems.
To implement this design pattern optimally, automatic merge strategies can be used when a collision occurs.
Using the layer supertype design pattern to reduce the amount of code when implementing this pattern can be useful.
With Entity Framework, implementing this design pattern is very simple (see the sketch after these notes).
The identity map design pattern can be useful for keeping a single copy of each record during a transaction; otherwise, we may encounter inconsistent reads within the transaction.
You can manage transactions better by combining the unit of work design pattern with this one.
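The following is a minimal sketch of how Entity Framework Core handles optimistic concurrency; the entity and context names are illustrative. Marking a property with [Timestamp] maps it to a SQL Server rowversion column that EF Core uses as a concurrency token, so SaveChangesAsync adds the original version to the WHERE clause and throws DbUpdateConcurrencyException when no row matches:

using System.ComponentModel.DataAnnotations;
using Microsoft.EntityFrameworkCore;

public class Author
{
    public int AuthorId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    [Timestamp] //rowversion column used as the concurrency token
    public byte[] Version { get; set; }
}

//Somewhere in the update flow:
try
{
    author.LastName = "Hassani";
    await context.SaveChangesAsync();
}
catch (DbUpdateConcurrencyException)
{
    //Another transaction changed the record first; inform the user.
}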
Consequences: Advantages

The implementation of this design pattern is very simple.
There is no need for record locking overhead when using this design pattern.
Consequences: Disadvantages

When the system load is high, and there are many simultaneous transactions, using this design pattern will cause many transactions to be rolled back, and the user experience will suffer.
Applicability:

When the probability of a collision between two different business transactions is low, using this design pattern can be useful; otherwise, using the pessimistic offline lock design pattern will be a better option.
Related patterns:

Some of the following design patterns are not directly related to the optimistic offline lock design pattern, but reviewing them will be useful when implementing it:

Coarse-grained lock
Layer supertype
Identity map
Unit of work
Pessimistic offline lock
Pessimistic offline lock
Name:

Pessimistic offline lock

Classification:

Offline concurrency design patterns

Also known as:

---

Intent:

By using this design pattern and making data available to only one transaction at a time, it is possible to prevent collisions between simultaneous transactions.

Motivation, Structure, Implementation, and Sample code:

As seen in the previous section, the optimistic offline lock design pattern can be used to manage concurrency and prevent collisions. But its method can only detect the collision at the end of the work, when the changes are sent to the database to be saved. This causes serious problems in scenarios with many concurrent transactions and increases dissatisfaction, because only after doing all the work does the user find out that their operation could not be saved.

You can use the pessimistic offline lock design pattern to prevent this from happening. With this design pattern, once the data is handed to a transaction, its operations can be performed and the changes saved, because when the data is retrieved from the database, a lock is placed on it so that no other transaction can retrieve it.

The implementation of the pessimistic offline lock design pattern includes three steps:

Determining what type of lock is needed: To choose the right type of lock, factors such as providing the maximum possible concurrency, responding appropriately to business needs, and minimizing the complexity of the code should be considered. The different types of locks are:
Exclusive write lock: The data is locked for modification, and only one transaction is allowed to modify it.
Exclusive read lock: The data is locked for reading. Naturally, this method imposes stricter restrictions on data access.
Read/write lock: This type combines the previous two locks. The following rules apply:
Read and write locks are mutually exclusive: if a transaction has a read lock on a record, other transactions cannot obtain a write lock on it; conversely, if a transaction has a write lock on a record, other transactions cannot obtain a read lock.
Read locks can be granted to multiple transactions simultaneously, which increases the program's concurrency.
Building the lock management module: The task of this module is to manage transactions' requests to acquire or release locks. The module must know what is locked at any given moment and who holds each lock. This information can be stored in a database table or in an in-memory object; if an in-memory object is used, it must be a singleton. The database table approach is the reasonable choice if the web server is clustered. When a table is used, managing the concurrency of that table itself becomes very important; for this purpose, you can use the serializable isolation level for access to it. Stored procedures with an appropriate isolation level can also be useful here. The important point is that business transactions must interact only with the lock management module and never access the in-memory objects or tables directly, regardless of where and how the locks are stored.
Defining the business procedures that use the locks: In defining these procedures, several questions must be answered, including:
What should be locked, and when?
The "when" must be answered first. The lock must be acquired before the data is delivered to the program. Whether you lock before or after fetching the data relates directly to the transaction's isolation level, but acquiring the lock and then fetching the data improves reliability.
Once "when" is determined, "what" must be answered. What gets locked is usually the table's primary key value, or whatever value is used to find the record.
When can the lock be released? The lock must be released whenever the transaction completes (commit or rollback).
What should happen when a lock cannot be granted? The simplest response is to cancel the transaction, because the purpose of this design pattern is to inform the user at the beginning of the work if the data cannot be accessed.
If the pessimistic offline lock is used to complement the optimistic offline lock, then in addition to the preceding three steps, it is necessary to determine which records should be locked.

For this design pattern, the following sequence diagram can be considered:

Figure 13.2: Get author info using Pessimistic Offline Lock

As shown in Figure 13.2, User1 reads the data of Author1. At this stage, a record is inserted in the lock table as follows:

Owner: User1
Lockable: 1

Next, User1 is busy changing the data of Author1. At the same time, User2 requests access to the data of Author1, but because User1 has locked this record, User2 cannot read it and receives an error. Then User1 saves their changes to the database and deletes the record from the lock table, which releases the lock.

According to the preceding description, the lock management module can be considered as follows:

public static class LockManager
{
    private static bool HasLock(int lockable, string owner)
    {
        //check if the owner already owns a lock.
        return true;
    }
}

Before a record for an owner is inserted into the lock table, the HasLock method checks whether that owner has already locked the desired record. If the record is already locked by the same owner, there is no need to insert a new record into the lock table. In this example, a record is identified by its primary key value, and it is assumed that all primary keys are of type INT:

public static void AcquireLock(int lockable, string owner)
{
    if (!HasLock(lockable, owner))
    {
        try
        {
            //Insert into lock table/object
        }
        catch (SqlException ex)
        {
            throw new DBConcurrencyException($"unable to lock {lockable}");
        }
    }
}

Using this method, a lock on a record is defined and assigned to the owner. It is assumed here that the lock is placed on an INT column whose values are unique throughout the database; in real implementations, this part will probably need to change. The owner in these methods can be the HTTP session ID or anything else that fits your needs. The important thing about lockable is that its value is unique within the lock table or object. Therefore, if two different owners try to lock the same lockable, the database saves one of them and returns a unique-constraint violation error for the second, as the sketch below illustrates.
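A possible schema for the lock table is sketched below; the table and column names are assumptions. The PRIMARY KEY on Lockable is what produces the unique-constraint violation mentioned above when a second owner tries to lock the same record:

CREATE TABLE app_lock(
    Lockable INT PRIMARY KEY,
    Owner VARCHAR(50) NOT NULL
)

-- AcquireLock inserts a row; a duplicate-key error means the lock is taken:
INSERT INTO app_lock(Lockable, Owner) VALUES(1, 'Session 1')

-- ReleaseLock and ReleaseAllLocks delete rows:
DELETE FROM app_lock WHERE Lockable = 1 AND Owner = 'Session 1'
DELETE FROM app_lock WHERE Owner = 'Session 1'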

public static void ReleaseLock(int lockable, string owner)
{
    try
    {
        //delete from lock table/object
    }
    catch (SqlException ex)
    {
        throw new Exception($"unable to release lock {lockable}");
    }
}

public static void ReleaseAllLocks(string owner)
{
    try
    {
        //delete all locks for the given owner from lock table/object
    }
    catch (SqlException ex)
    {
        throw new Exception($"unable to release locks owned by {owner}");
    }
}

The two methods ReleaseLock and ReleaseAllLocks are used to release locks. If the locks are stored in a database table, these methods are CRUD operations on that table. Now, with the lock management module in place, it can be used as follows:

public class AuthorDomain
{
    public bool Modify(Author author)
    {
        LockManager.ReleaseAllLocks("Session 1");
        LockManager.AcquireLock(author.AuthorId, "Session 1");
        // Implementation of transaction requirements
        LockManager.ReleaseLock(author.AuthorId, "Session 1");
        return true;
    }
}

In the preceding method, before doing anything, we first delete all the locks held by Session 1, then define a lock for the author we want to edit, and release that lock at the end of the work. Real implementations will be more complex than this simple example, which is given only to clarify how the design pattern works; for instance, the lock should also be released when the transaction fails, as sketched below.
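A minimal sketch of a more robust shape, assuming the LockManager above and an illustrative session ID; the finally block guarantees the lock is released whether the transaction commits or rolls back:

public bool Modify(Author author)
{
    LockManager.AcquireLock(author.AuthorId, "Session 1");
    try
    {
        // Implementation of transaction requirements
        return true;
    }
    finally
    {
        //release the lock even if the transaction throws
        LockManager.ReleaseLock(author.AuthorId, "Session 1");
    }
}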

Notes:

Choosing the right locking strategy is a decision that should be made with the help of domain experts, because it is not just a technical problem; it can affect the entire business.
Choosing an inappropriate locking strategy can effectively turn the system into a single-user system.
Identifying and managing abandoned sessions is an important point that should be addressed. For example, a client may shut down its machine after acquiring the lock, in the middle of the operation, leaving an open transaction holding locks on a series of resources. In this case, you can use mechanisms such as a timeout on the web server, or set a lifetime for the records in the lock table so that a lock becomes invalid and is released after that time (see the sketch after these notes).
Locking everything in the system will cause many problems. Therefore, it is better to use this design pattern only when needed, as a complement to the optimistic offline lock design pattern.
The preceding example assumes that a single lock type is fixed for the defined transactions. Otherwise, the lock type can be stored as a column in the lock table.
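A minimal sketch of the lock-lifetime idea, assuming the app_lock table sketched earlier gains an Acquired DATETIME column; the 30-minute lifetime is an illustrative choice:

-- Purge expired locks so abandoned sessions cannot hold resources forever:
DELETE FROM app_lock WHERE Acquired < DATEADD(MINUTE, -30, GETDATE())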
Consequences: Advantages

By using this design pattern, it is possible to prevent the occurrence of inconsistent readings.
Consequences: Disadvantages

Management of locks is a complex operation.
As the number of users or requests increases, the efficiency decreases.
There is a possibility of deadlock when using this design pattern. Therefore, to prevent deadlocks as much as possible, one of the tasks of the lock management module is to return an error instead of waiting when a lock cannot be granted.
If this design pattern is used and there are long transactions in the system, the system's efficiency will suffer.
Applicability:

When the probability of collision between two different business transactions is high, using this design pattern can be useful.
Related patterns:

Some of the following design patterns are not directly related to the pessimistic offline lock design pattern, but reviewing them will be useful when implementing it:

Optimistic offline lock
Singleton
Coarse-grained lock
Name:

Coarse-grained lock

Classification:

Offline concurrency design patterns

Also known as:

---

Intent:

Using this design pattern, a single lock can be defined on a set of related objects.

Motivation, Structure, Implementation, and Sample code:

Programs usually need to change several related objects within a transaction. With the approach used in the pessimistic offline lock and optimistic offline lock design patterns, a lock must be defined on each object or record (resource) to manage the locks. This process can itself be the source of various problems; for example, all resources must be processed in order and locks created on them, which becomes more complicated with a complex graph of objects. On the other hand, with the pessimistic offline lock approach, every node in the graph needs its own record in the lock table, which leaves us with a very large table.

Another approach is to define one lock on a set of related resources and manage concurrency that way. This is what the coarse-grained lock design pattern aims to provide. The most concrete example of this method is the aggregate. By using aggregates, a single change point (the root) can be defined for a set of related objects, and all related objects can be changed only through that point. This feature of aggregates enables the implementation of the coarse-grained lock design pattern: since the members of an aggregate can be accessed from only one point, that same point can be used to manage the lock (root lock). In this case, one lock can be considered per aggregate; in other words, applying the lock on the aggregate applies it to all its members.

With a root lock, a mechanism is always needed to move from a member of the group to the root and apply the lock on the root. There are different ways to do this. The simplest is to navigate from each object up to the root, as sketched after Figure 13.3; in more complex graphs this may cause performance problems, which can be managed using the lazy load design pattern. Figure 13.3 shows a view of the root lock method:

Figure 13.3: Root Lock method
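A minimal sketch of navigating to the root, assuming each object holds a reference to its parent, as in the DomainObject model shown later in this section:

static DomainObject FindRoot(DomainObject obj)
{
    //climb the graph until the aggregate root is reached
    while (obj.Parent != null)
        obj = obj.Parent;
    return obj;
}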

Another way to implement the coarse-grained lock design pattern is to use the shared version mechanism. In this method, every object in a group shares a specific version; therefore, whenever this shared version increases, the lock is effectively applied to the entire group. This method is very similar to the one presented in the optimistic offline lock design pattern. To apply the same idea to the pessimistic offline lock design pattern, each member of a group must share some kind of token. Since the pessimistic method is often used to complement the optimistic one, using the shared version as this token can be useful. Figure 13.4 shows the shared version method:

Figure 13.4: Shared Version method

According to the preceding description, the shared version method can be implemented as a shared optimistic offline lock as follows:

Suppose we have the version table as follows:

CREATE TABLE version
(
    Id INT PRIMARY KEY,
    Value INT NOT NULL,
    ModifiedBy VARCHAR(50) NOT NULL,
    Modified DATETIME
)

As seen in the preceding table, the Value column stores the value of the shared version. To work with the version table, the following Version class can be considered:

public class Version
{
    public int Id { get; set; }
    public int Value { get; set; }
    public string ModifiedBy { get; set; }
    public DateTime Modified { get; set; }

    public Version(int id, int value, string modifiedBy, DateTime modified)
    {
        Id = id;
        Value = value;
        ModifiedBy = modifiedBy;
        Modified = modified;
    }

    public static async Task<Version> FindAsync(int id)
    {
        //Try to get version from cache using Identity Map;
        //IdentityMap.GetVersion(id);
        Version version = null;
        if (version == null)
        {
            version = await LoadAsync(id);
        }
        return version;
    }

    private static async Task<Version> LoadAsync(int id)
    {
        Version version = null;
        var result = await new SqlCommand($"" +
            $"SELECT * " +
            $"FROM version " +
            $"WHERE Id ={id}", DB.Connection)
            .ExecuteReaderAsync();
        if (result.Read())
        {
            version = new(
                (int)result["Id"],
                (int)result["Value"],
                (string)result["ModifiedBy"],
                (DateTime)result["Modified"]);
            //put version in cache IdentityMap.Put(version);
        }
        else
        {
            throw new DBConcurrencyException($"version {id} not found!");
        }
        return version;
    }
}

As the FindAsync method shows, it first tries to load the requested version from the cache. If the version is not available in the cache, the version information is retrieved from the database using the LoadAsync method. After the version is fetched from the database, it is placed in the cache and returned. If no record is found in the database for the provided id, another transaction has changed the version, which signals a possible collision; for this reason, a DBConcurrencyException is thrown.

Naturally, when a new object is created, its corresponding record in the version table must also be created, using the INSERT command. The point here is that after the version record is created in the table, that record must also be placed in the cache:

public async void Insert()
{
    await new SqlCommand($"" +
        $"INSERT INTO " +
        $"version " +
        $"VALUES({Id},{Value},'{ModifiedBy}','{Modified}')",
        DB.Connection).ExecuteNonQueryAsync();
    //put version in cache IdentityMap.Put(version);
}

The Version class also needs a mechanism to increase its value, which can be done with the UPDATE command. The important point is that if no record is updated by the update operation, this can be a sign of a collision and should be reported to the user. Before changing the version value, it must be ensured that the previous version is not in use, that is, the desired records have not already been locked:

public async void Increment()
{
    if (!Locked())
    {
        var affectedRowCount = await new SqlCommand($"" +
            $"UPDATE version " +
            $"SET " +
            $"Value = {Value + 1}," +
            $"ModifiedBy='{ModifiedBy}'," +
            $"Modified='{Modified}' " +
            //the optimistic check: update only if the version is unchanged
            $"WHERE Id = {Id} AND Value = {Value}",
            DB.Connection)
            .ExecuteNonQueryAsync();
        if (affectedRowCount == 0)
        {
            throw new DBConcurrencyException($"version {Id} not found!");
        }
        Value++;
    }
}

Finally, when the aggregate is deleted, the version corresponding to it must also be deleted, using the DELETE operation. Again, if the database reports that no record was deleted, a possible collision should be reported to the user. The following code shows the delete operation:

public async void Delete()
{
    var affectedRowCount = await new SqlCommand($"" +
        $"DELETE FROM version " +
        $"WHERE Id = {Id}", DB.Connection)
        .ExecuteNonQueryAsync();
    if (affectedRowCount == 0)
    {
        throw new DBConcurrencyException($"version {Id} not found!");
    }
}

Now that the necessary mechanism for version management has been prepared, you can use it as follows:

public abstract class BaseEntity
{
    public Version Version { get; set; }
    protected BaseEntity(Version version) => this.Version = version;
}

The BaseEntity class acts as a layer supertype and is used to set the value of the Version property:

public interface IAggregate { }

public class Author : BaseEntity, IAggregate
{
    public string Name { get; set; }
    public List<Address> Addresses { get; set; } = new List<Address>();
    public Author(Version version, string name) : base(version)
    {
        Name = name;
    }
    public Author AddAuthor(string name) => new Author(Version.Create(), name);
    public Address AddAddress(string street)
    {
        Address address = new Address(Version, street);
        Addresses.Add(address);
        return address;
    }
}

The Author class, as an aggregate, inherits from the BaseEntity class. In the AddAuthor method, as soon as an Author object is created, its corresponding version object is also created. The code for creating the version object in the Version class is as follows:

public static Version Create()
{
    Version version = new(NextId(),
        0, //Initial version number
        GetUser().Name,
        DateTime.Now); //modification datetime
    return version;
}

Next, when we want to add an address for the Author, we use the existing Version property. Also, whenever a request to update or delete an object is received, the Increment method in the Version class must be called:

public abstract class AbstractMapper
{
    public void Insert(BaseEntity entity) => entity.Version.Increment();
    public void Update(BaseEntity entity) => entity.Version.Increment();
    public virtual void Delete(BaseEntity entity) => entity.Version.Increment();
}

public class AuthorMapper : AbstractMapper
{
    public override void Delete(BaseEntity entity)
    {
        Author author = (Author)entity;
        //delete addresses
        //delete author
        base.Delete(entity);
        author.Version.Delete();
    }
}

As seen in the preceding code, to delete an author, the addresses belonging to the author are deleted first, and then the author itself. Next, the version is incremented by one; if the version cannot be updated, a collision may have occurred, and an error is raised. The deletion is completed by deleting the record related to the version.

You can also implement the shared version method as a shared pessimistic offline lock. The implementation is the same as the optimistic method; the only difference is that we need a mechanism to verify that the data that has been loaded is the latest version. A simple way to ensure this is to execute the Increment method within the transaction, before the commit. If Increment executes successfully, we have the latest version of the data; otherwise, the DBConcurrencyException error tells us the data is stale, and the transaction is rolled back. A minimal sketch of this idea follows.
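The sketch below assumes the Version class above and the illustrative DB.Connection helper used throughout this chapter:

using var transaction = DB.Connection.BeginTransaction();
try
{
    //...perform the business transaction's reads and writes...
    version.Increment(); //throws DBConcurrencyException if the data is stale
    transaction.Commit();
}
catch (DBConcurrencyException)
{
    transaction.Rollback(); //another transaction changed the group first
    throw;
}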

The mechanism for implementing the root optimistic offline lock method is slightly different because there is no shared version. To implement it, you can use the unit of work design pattern: before saving the changes to the database, navigate from each object to its parent and increment the parent's version with the Increment method. As follows:

public class DomainObject : BaseEntity
{
    public DomainObject(Version version) : base(version) { }
    public int Id { get; set; }
    public DomainObject Parent { get; set; }
}

Suppose there is a model like the preceding one:

public class UnitOfWork
{
    ...
    public void Commit()
    {
        foreach (var item in modifiedObjects)
        {
            if (item.Parent != null)
                item.Parent.Version.Increment();
        }
        foreach (var item in modifiedObjects)
        {
            //save changes to database
        }
    }
}

Therefore, when the changes are saved to the database, the Increment method is called first, and then the changes are saved.

Notes:

Both the shared version and root lock methods have their advantages and disadvantages. For example, if the shared version is used, retrieving data always requires a join with the version table, which can hurt performance. On the other hand, if the root lock is used with the optimistic method, the important challenge is ensuring that the data is up to date.
The identity map design pattern will be crucial in implementing the shared optimistic offline lock method because all group members must always refer to a common version.
To implement this design pattern, you can use the layer supertype design pattern for simplicity of design and implementation.
Consequences: Advantages

Applying and releasing the lock in this design pattern will be simple and low-cost.
Consequences: Disadvantages

If this design pattern is not designed and used in line with business requirements, it will lock objects that should not be locked.
Applicability

This design pattern can be used when it is necessary to put a lock on an object along with the related objects.
Related patterns:

Some of the following design patterns are not directly related to the coarse-grained lock design pattern, but reviewing them will be useful when implementing it:

Pessimistic offline lock
Optimistic offline lock
Lazy load
Layer supertype
Unit of work
Identity map
Implicit lock
Name:

Implicit lock

Classification:

Offline concurrency design patterns

Also known as:

---

Intent:

Using this design pattern, the framework or the layer supertype is responsible for managing locks.

Motivation, Structure, Implementation, and Sample code:

One of the major problems with offline concurrency management methods is that they are difficult to test. A capability should therefore be built once so that programmers can reuse it, instead of re-implementing concurrency management throughout the code. The reason is that if the concurrency management process is implemented incorrectly, it can seriously damage data quality, the correctness of the work, and the program's efficiency.

The implicit lock design pattern uses the layer supertype design pattern, or any other suitable pattern, to implement the concurrency management process in parent classes or framework facilities, so that it is available to programmers transparently.

For example, consider the following code:

public interface IMapper
{
    DomainObject Find(int id);
    void Insert(DomainObject obj);
    void Update(DomainObject obj);
    void Delete(DomainObject obj);
}

public class LockingMapper : IMapper
{
    private readonly IMapper _mapper;
    public LockingMapper(IMapper mapper) => _mapper = mapper;
    public DomainObject Find(int id)
    {
        //Acquire lock
        return _mapper.Find(id);
    }
    public void Delete(DomainObject obj) => _mapper.Delete(obj);
    public void Insert(DomainObject obj) => _mapper.Insert(obj);
    public void Update(DomainObject obj) => _mapper.Update(obj);
}

As seen in the LockingMapper class, when a record is fetched, the lock is acquired first and then the record is fetched. The important thing about this design pattern is that business transactions know nothing about the mechanism for acquiring and releasing locks when applying data changes; all these operations happen behind the scenes.

Figure 13.5: Lock management process using Implicit Lock design pattern

In Figure 13.5, the transaction for editing customer information sends the request to retrieve customer information to LockingMapper. Behind the scenes, this mapper communicates with the lock management module and acquires the lock. After acquiring the lock, it retrieves the data and delivers it to the relevant transaction, as the sketch below illustrates.
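For illustration, the //Acquire lock step in Find could delegate to a lock manager such as the LockManager sketched earlier in this chapter; the session ID used as the owner is an assumption:

public DomainObject Find(int id)
{
    //acquire the lock first, then fetch; the transaction never sees this step
    LockManager.AcquireLock(id, "Session 1");
    return _mapper.Find(id);
}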

Notes:

Using this design pattern, programmers should still consider the consequences of using concurrency management methods and locks.
You can use the data mapper design pattern to implement this design pattern.
To design the mappers in this pattern as cleanly as possible, you can use the decorator design pattern.
Consequences: Advantages

It increases the code's quality and prevents errors related to the lack of proper management of concurrency management processes.
Consequences: Disadvantages

Because the locking is hidden, programmers may ignore the consequences of concurrency management methods and locks, causing the program to encounter various errors.
Applicability:

This design pattern should often be used to implement concurrency management mechanisms.
Related patterns:

Some of the following design patterns are not directly related to the implicit lock design pattern, but reviewing them will be useful when implementing it:

Layer supertype
Data mapper
Decorator
Conclusion
In this chapter, you became acquainted with the pessimistic offline lock, optimistic offline lock, coarse-grained lock, and implicit lock design patterns. You also learned how to manage and solve problems caused by concurrency with the help of these patterns: sometimes you lock readers to solve concurrency problems, and sometimes you manage them by locking only writers.

In the next chapter, you will learn about session state design patterns.

1 https://www.amazon.com/Patterns-Enterprise-Application-Architecture-Martin/dp/0321127420

Join our book's Discord space

Join the book's Discord Workspace for Latest updates, Offers, Tech happenings around the world, New Release and Sessions with the Authors:

https://discord.bpbonline.com
