Category Archives: C#

C#中的Lambda表达式

什么是Lambda?

在C#中,一个Lambda表达式就是一个匿名函数。

Lambda的语法结构如下:

(Input Params) => Expression

• 中间的"=>" 是Lambda的操作符,一般读作"goes to"

• 左边的部分"Input Params"是Lambda表达式的输入参数,当且仅当只有一个参数的时候,括号可以忽略,其他情况哪怕参数个数是0个,也不能忽略.

• 右边的部分"Expression"是一个表达式(Expression)或语句块(Statement Block),当且仅当只有一行代码时,大括号可以忽略,其他情况均不可忽略。

参考以下样例:

() => Console.WriteLine("No Params")  // 0 个参数,左边的圆括号不能省略
x => x * x   // 1个参数,左边的圆括号可加可不加
(x, y) => x - y     // 2个参数,左边的圆括号不能省略
(x, y) => { x += y; Console.WriteLine(x); }   // 大括号不能省略
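上面的 Lambda 可以赋值给 .NET 内置的委托类型 Action/Func,然后像普通方法一样调用,例如:

```csharp
using System;

// 将 Lambda 赋值给内置委托类型 Action/Func 后即可像普通方法一样调用
Action noParams = () => Console.WriteLine("No Params");
Func<int, int> square = x => x * x;
Func<int, int, int> subtract = (x, y) => x - y;

noParams();                          // 输出 No Params
Console.WriteLine(square(5));        // 输出 25
Console.WriteLine(subtract(7, 3));   // 输出 4
```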

阿隆佐·邱奇(Alonzo Church),美国数学家和逻辑学家,对计算机科学领域做出了重要贡献。邱奇在20世纪30年代提出了lambda演算,这是一种形式化的计算理论,用于研究函数的定义、应用和等价性。他在提出lambda演算时,定义了一种匿名函数的表达方式,即lambda表达式。这种表达方式允许函数没有名字,直接通过参数和表达式来描述,因此得名lambda表达式。邱奇的这一贡献为后来的函数式编程语言奠定了基础。

ASP.NET Core Razor Pages in Action 2 构建您的第一个应用程序


本章涵盖

• 创建 Razor Pages 应用程序
• 添加您的第一个页面
• 探索项目文件及其所扮演的角色
• 使用中间件配置应用程序管道

在上一章中,你了解了 Razor Pages Web 开发框架(作为 ASP.NET Core 的一部分)如何适应整个 .NET Framework。您已经发现了可以使用 Razor Pages 构建的应用程序类型,而且重要的是,当它不是最佳解决方案时。您已经了解了使用 Razor Pages 高效工作所需的工具,并希望下载并安装了 Visual Studio 或 VS Code 以及最新版本的 .NET SDK。现在您已经设置了开发环境,是时候开始使用代码了。

在本章中,您将使用 Visual Studio 和 CLI 创建您的第一个 Razor Pages 应用程序,以便您可以在所选的操作系统上进行操作。大多数 Web 开发框架都提供初学者工具包或项目 — 一个简单的应用程序,构成您自己的应用程序的起点。Razor Pages 也不例外。构成初学者工具包的应用程序只有三个页面,但它包括一个基本配置,您可以在此基础上构建以创建自己的更复杂的应用程序。

创建应用程序并设法在浏览器中启动它后,您将向应用程序添加新页面并包含一些动态内容,以便您可以开始了解 Razor 页面的实际含义。测试页面以确保其正常工作后,您将使用网站的主模板文件将页面添加到网站导航中。

然后,我将讨论该工具生成的应用程序文件,以了解每个生成的文件在 Razor Pages 应用程序中所扮演的角色。本演练将帮助您了解所有 ASP.NET Core 应用程序背后的基础知识。

在本演练的最后,我们将仔细研究主要应用程序配置:请求管道。这是应用程序的核心。它定义应用程序如何处理请求以及向客户端提供响应。您将了解如何从中间件组件构建它,以及如何通过添加自己的中间件来扩展它。

在本章结束时,您应该对 Razor Pages 应用程序的工作原理有一个很好的高级了解,从接收请求到最终将 HTML 发送回客户端。然后,您将准备好在第 3 章中深入探讨如何使用 Razor 页面及其配套 PageModel 类。

2.1 创建您的第一个网站

本部分将介绍如何使用可用工具快速生成功能齐全的 Razor Pages 应用程序。您将在 Windows 10 上使用 Visual Studio 2022 Community Edition,并为非 Windows 读者使用 CLI。我将讨论在 Visual Studio Code 中使用 CLI,尽管您可以使用任何终端应用程序来执行 CLI 命令。因此,以下部分假定您已安装并运行环境,以及支持 .NET 6 开发的 SDK 版本。您可以通过打开命令 shell 并执行以下命令来测试您的机器上是否安装了合适的 SDK 版本:

dotnet --list-sdks

您应该会看到列出了一个或多个版本,每个版本都有自己的安装路径。至少有一个版本应以 6 开头。在此阶段,如果您是第一次使用的用户,您还需要信任自签名证书,该证书是在本地系统上通过 HTTPS 轻松浏览站点所需的(第 14 章中有更详细的介绍)。为此,请执行以下命令:

dotnet dev-certs https --trust

证书本身作为 SDK 安装的一部分进行安装。

2.1.1 使用 Visual Studio 创建网站

如第 1 章所述,Visual Studio 是在 Windows 上工作的 .NET 开发人员的主要 IDE。它包括用于执行最常见任务的简单菜单驱动工作流。Razor Pages 应用程序在 Visual Studio 中作为项目创建,因此打开 Visual Studio 后,您的起点是创建新项目。您可以单击启动画面上的 Create a New Project 按钮,或在主菜单栏中转到 File > New Project... 来创建新项目。

在下一个屏幕上,您可以从模板列表中选择要创建的项目类型。在此之前,我建议从右侧窗格顶部的语言选择器中选择 C# 以过滤掉一些干扰。选择 ASP.NET Core Web App 模板,即名称中没有 (Model-View-Controller) 的那个,还要注意避免选择名称非常相似的 ASP.NET Core Web API 模板。正确的模板带有以下说明:"用于创建 ASP.NET Core 应用程序的项目模板,其中包含 ASP.NET Razor Pages 内容。"

为应用程序文件选择合适的位置并移动到下一个屏幕后,请确保您的 Target Framework 选择是 .NET 6,将所有其他选项保留为默认值。Authentication Type 应该设置为 None,应该选中 Configure for HTTPS,并且你应该取消选中 Enable Docker 选项(图 2.1)。对选择感到满意后,单击 Create 按钮。此时,Visual Studio 应该会打开,并在 Solution Explorer 中显示您的新应用程序(图 2.2)。

图 2.1 在点击 Create 按钮之前检查您是否已应用这些设置。

图 2.2 新应用程序将在 Visual Studio 中打开,其中有一个概述页,右侧打开“解决方案资源管理器”窗口,其中显示了 WebApplication1 解决方案及其单个项目(也称为 WebApplication1)的结构和内容。

尽管 Solution Explorer 的内容看起来像文件结构,但并非您看到的所有项实际上都是文件。我们将在本章后面仔细研究这些项。

2.1.2 使用命令行界面创建网站

如果您已经使用 Visual Studio 构建了应用程序,则可能需要跳过此步骤。但是,我建议您也尝试这种方法来创建应用程序,因为该过程会揭示 Visual Studio 中的新项目创建向导隐藏的一两个令人兴奋的事情。

CLI 是一种基于文本的工具,用于对 dotnet.exe 工具执行命令,这两者都是作为 SDK 的一部分安装的。CLI 的入口点是 dotnet 命令,用于执行 .NET SDK 命令和运行 .NET 应用程序。在接下来的部分中,您将把它用于第一个目的。SDK 的默认安装会将 dotnet 工具添加到您的 PATH 变量中,因此您可以从系统上的任何位置对它执行命令。

可以使用您喜欢的任何命令 shell 调用 CLI 工具,包括 Windows 命令提示符、Bash、终端或 PowerShell(有跨平台版本)。从现在开始,我将 shell 称为终端,主要是因为它在 VS Code 中命名。以下步骤并不假定您使用 VS Code 执行命令,但您可以使用 VS Code 提供的集成终端来执行命令。

首先,在系统上的适当位置创建一个名为 WebApplication1 的文件夹,然后使用终端导航到该文件夹,或在 VS Code 中打开该文件夹。如果您选择使用 VS Code,则可以通过按 Ctrl-' 访问终端。在命令提示符下,键入以下命令,并在每个命令后按 Enter 键。

列表 2.1 使用 CLI 创建 Razor Pages 应用程序

dotnet new sln                                           ❶
dotnet new webapp -o WebApplication1                     ❷
dotnet sln add WebApplication1\WebApplication1.csproj    ❸

❶ 创建解决方案文件
❷ 搭建新的 Razor Pages 应用程序基架,并将输出放入名为 WebApplication1 的子文件夹中
❸ 将 Razor Pages 应用程序添加到解决方案

执行最后一个命令后,所有应用程序文件都应该成功创建。您还应该从终端获得一些与某些 “post-creation actions” 相关的反馈。您到 WebApplication1 的路径可能与我的路径大不相同,如下面的清单所示,但其余的反馈应该相似。

列表 2.2 CLI 执行的创建后操作的通知

Processing post-creation actions...
Running 'dotnet restore' on WebApplication1\WebApplication1.csproj...
  Determining projects to restore...
  Restored D:\MyApps\WebApplication1\WebApplication1\WebApplication1.csproj 
(in 80 ms).
Restore succeeded.

CLI 在您的应用程序上执行 dotnet restore 命令,确保您的应用程序所依赖的所有软件包都已获取且是最新的。如果使用 Visual Studio 创建应用程序,将执行相同的命令,但指示它已发生并不那么明显。它显示在 IDE 底部的状态栏中(图 2.3)。

图 2.3 Visual Studio 底部的状态栏显示项目已恢复。

2.1.3 运行应用程序

现在,应用程序已使用您选择的任何方式创建,您可以在浏览器中运行和查看它。要从 Visual Studio 运行应用程序,您只需按 Ctrl-F5 或单击顶部菜单栏中轮廓的绿色三角形(不是实心三角形)。这将负责构建和启动应用程序,以及在浏览器中启动它。如果您使用的是 CLI,请执行以下命令:

dotnet run --project WebApplication1\WebApplication1.csproj

此命令包括 --project 开关,用于指定项目文件的位置。如果从包含 csproj 文件的文件夹中执行命令,则省略 --project 开关。如果您更喜欢在 Visual Studio 中使用 CLI,请按 Ctrl-' 打开集成终端,然后从内部执行命令。

您应该在终端中收到正在构建应用程序的反馈,然后再确认它正在侦听两个 localhost 端口,其中一个使用 HTTP,另一个使用 HTTPS。实际端口号因项目而异:

info: Microsoft.Hosting.Lifetime[0]
      Now listening on: https://localhost:7235
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://localhost:5235

打开浏览器,然后导航到使用 HTTPS 的 URL。在此示例随附的下载中,即 https://localhost:7235。如果您的浏览器警告您该站点不安全,您可能忽略了信任自签名证书所需的命令:dotnet dev-certs https --trust。如果一切顺利,您应该会看到类似于图 2.4 的内容。

图 2.4 首页

该应用程序是初级的。主页包含最少的样式和内容。使用页面顶部的导航或页脚中的链接导航到 Privacy (隐私) 页面。请注意,相同的最小样式也被应用于 Privacy 页面(图 2.5),并且存在导航。

图 2.5 隐私页面包含与主页相同的页眉、页脚和样式。

目前,您可以使用此应用程序执行的操作不多,也没有任何有趣的方式与它交互,因此是时候向应用程序添加一个页面了。

2.1.4 添加新页面

在本节中,您将向应用程序添加新页面。您还将探索添加到 .NET 6 中的新功能,称为热重载。此功能会导致对代码所做的更改反映在正在运行的应用程序中,而无需重新启动它。这是为 Visual Studio 用户自动激活的。VS Code 用户需要使用略有不同的命令来启用热重载。此功能适用于对现有文件的更改。由于您要添加新文件,因此需要先停止应用程序。Visual Studio 用户只需关闭浏览器即可停止应用程序。如果您使用 CLI 命令启动了应用程序,则应在终端窗口中按 Ctrl-C 以关闭应用程序。

Visual Studio 用户应右键单击 Solution Explorer 中的 Pages 文件夹,然后从可用选项中选择 Add > Razor Page(添加 Razor 页面)(图 2.6)。将文件命名为 Welcome.cshtml。

图 2.6 要在 Visual Studio 中添加新页面,请右键单击 Pages 文件夹,然后选择 Add,然后选择 Razor Page。

VS Code 用户应确保其终端位于项目文件夹(包含 csproj 文件的文件夹)中,然后执行以下命令:

dotnet new page -n Welcome -o Pages -na WebApplication1.Pages  

new page 命令将 Razor 页面添加到应用程序。-n(或 --name)选项指定创建页面时应使用的名称。-o(或 --output)选项指定将放置页面的输出位置。-na(或 --namespace)选项指定应应用于生成的 C# 代码文件的命名空间。或者,您可以导航到 Pages 文件夹以创建页面并省略 -o 选项。如果这样做,则必须记住导航回包含 csproj 文件的文件夹,以便在没有其他参数的情况下执行 run 命令。

Visual Studio 用户不需要指定命名空间。应用于使用 Visual Studio 向导创建的代码文件的默认命名空间是通过将项目名称与其在项目中的位置连接起来自动生成的。

现在运行应用程序。请记住,在 Visual Studio 中是 Ctrl-F5,而 CLI 用户(VS Code 或 Visual Studio)这次应该在终端中执行 dotnet watch run(而不是 dotnet run),然后打开浏览器并导航到记录到终端的第一个 URL。导航到 /welcome。页面应该除了页眉和页脚之外没有任何内容(图 2.7)。

图 2.7 新页面除了页眉和页脚之外是空的。

这里有三个有趣的点需要注意。第一点是您导航到 /welcome,而您刚刚添加到应用程序的 Welcome 页面被找到并呈现了出来。您无需执行任何配置即可实现此目的。ASP.NET Core 框架中负责此操作的部分称为路由。它会根据 Razor 页面在项目中的位置自动查找 Razor 页面。第 4 章详细介绍了路由。

需要注意的第二点是,新页面包括您在主页和隐私页面中看到的导航、页脚和样式。您的页面从布局文件(一种主模板)继承了这些内容。同样,这种情况的发生无需您采取任何具体步骤即可实现。您将在下一章中了解 layout 文件以及如何配置它们。

最后要注意的是页面的标题,如浏览器选项卡中所示:WebApplication1。布局页面也提供此值。

现在,可以向页面添加一些代码。更新 Welcome.cshtml 的内容,使其如下所示。

清单 2.3 向 Welcome 页面添加内容

@page
@model WebApplication1.Pages.WelcomeModel
@{
    ViewData["Title"] = "Welcome";
}
<h1>Welcome!</h1>

您甚至不需要刷新浏览器,您应用的更改就会在保存后立即显示。这是热重载功能在起作用。您应该会看到一个一级标题,并且浏览器选项卡中的标题已更改为包含您赋给 ViewData["Title"] 的值(图 2.8)。ViewData 是一种将数据从 Razor 页面传递到其布局的机制。您将在下一章中看到 ViewData 的工作原理。

图 2.8 对 Razor 页面所做的更改可见,无需刷新浏览器。

2.1.5 修改以包含动态内容

到目前为止,您添加的是静态内容。每次运行此页面时,它看起来都一样。使用 Razor Pages 的全部意义在于显示动态内容,因此现在是时候添加一些内容了。假设您需要在输出中包含当天部分的名称(例如,上午、下午或晚上),也许作为送达确认说明的一部分(例如,“您的包裹将在早上送到您身边”)。首先,您需要根据时间计算一天的一部分,然后您需要渲染它。下面的清单显示了如何从当前时间获取一天中的部分并将其呈现给浏览器。

列表 2.4 向 Razor 页面添加动态内容

@page
@model WebApplication1.Pages.WelcomeModel
@{
    ViewData["Title"] = "Welcome!";

    var partOfDay = "morning";                                        ❶
    if(DateTime.Now.Hour > 12){
        partOfDay= "afternoon";                                       ❷
    }
    if(DateTime.Now.Hour > 18){
        partOfDay= "evening";                                         ❸
    }
}
<h1>Welcome</h1>
<p>It is @partOfDay on @DateTime.Now.ToString("dddd, dd MMMM")</p>    ❹

❶ partOfDay 变量被声明并初始化为值 “morning”。
❷ 如果是在中午之后,则使用值 “afternoon” 重新分配变量。
❸ 如果是在下午 6:00 之后,该值将更新为“晚上”。
❹ 变量与当前时间一起呈现到浏览器。

这些更改涉及声明一个名为 partOfDay 的变量,该变量被初始化为值 "morning"。随后,两个 if 语句会根据一天中的时间更改该值:如果是在中午之后,则 partOfDay 将更改为 "afternoon";下午 6:00 之后再次更改为 "evening"。所有这些都是纯 C# 代码,放置在以 @{ 开头、以 } 结尾的代码块中。然后,您在 Welcome 标题下添加了一个 HTML 段落元素,其中包含带有两个 C# 表达式的文本,这两个表达式都以 @ 符号为前缀。您刚刚编写了第一段 Razor 模板语法。@ 前缀指示 Razor 呈现 C# 表达式的值。这一次,根据一天中的时间,您应该会在标题下看到呈现到浏览器的新段落,如图 2.9 所示。
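时段判断的逻辑本身就是普通的 C# 代码,也可以提取为一个独立的方法单独验证(示意代码,方法名 GetPartOfDay 为假设,并非模板生成的内容):

```csharp
using System;

// 与清单 2.4 相同的判断逻辑,提取为普通方法(GetPartOfDay 为假设的方法名)
static string GetPartOfDay(int hour)
{
    var partOfDay = "morning";
    if (hour > 12) partOfDay = "afternoon";
    if (hour > 18) partOfDay = "evening";
    return partOfDay;
}

Console.WriteLine(GetPartOfDay(9));   // 输出 morning
Console.WriteLine(GetPartOfDay(15));  // 输出 afternoon
Console.WriteLine(GetPartOfDay(21));  // 输出 evening
```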

图 2.9 浏览器中修改后的 Welcome 页面

2.1.6 将页面添加到导航

接下来,您将新页面添加到站点导航中,因此您不必在浏览器中键入地址即可找到它。在 Pages/Shared 文件夹中找到 _Layout.cshtml 文件并打开它。使用 navbar-nav flex-grow-1 的 CSS 类标识 ul 元素,并在下面的清单中添加粗体代码行。

清单 2.5 将 Welcome 页面添加到主导航中

<ul class="navbar-nav flex-grow-1">
    <li class="nav-item">
        <a class="nav-link text-dark" asp-area="" asp-page="/Index">Home</a>       
    </li>
    <li class="nav-item">
        <a class="nav-link text-dark" asp-area="" asp-page="/Privacy">Privacy</a>
    </li>
    <li class="nav-item">
        <a class="nav-link text-dark" asp-area="" asp-page="/Welcome">Welcome</a>
    </li>
</ul>

再次刷新浏览器;现在,每个页面顶部的导航菜单将包含指向 Welcome 页面的链接。您刚才所做的更改已应用于应用程序中的每个页面。这是因为您更改了布局文件,该文件由应用程序中的所有页面使用。Razor 页面的内容与布局页面中的内容合并,以生成最终输出。

您可能想知道为什么您添加到布局页面以创建链接的锚元素上没有 href 属性。此元素称为锚点标记帮助程序。标记帮助程序是针对常规 HTML 元素的组件,它使服务器端代码能够通过通常以 asp- 开头的特殊属性来影响它们呈现到浏览器的方式。例如,asp-page 属性采用一个值,该值表示要生成链接的页面的名称。标签帮助程序将在下一章中更详细地介绍。

因此,您已经了解了 C# 和 HTML 在 Razor 页面中协同工作以生成 HTML 的一些方法。通常,最好的建议是将 Razor 页面中的 C# 代码量限制为仅与呈现相关的代码。应用程序逻辑(包括确定时段的算法)应该放在 Razor 页面文件之外。Razor 页面文件和应用程序逻辑之间的第一级分离是 PageModel 类,它是下一章的重点,同时下一章还会介绍我已经提到的其他与视图相关的技术,包括布局、分部视图(partials)和标记帮助程序。

2.2 浏览工程文件

现在,您已经创建了第一个 Razor Pages 应用程序并尝试了一些 Razor 语法,现在是时候更详细地探索构成您刚刚创建的 Web 应用程序的每个文件夹和文件的内容,以了解每个文件夹和文件在应用程序中所扮演的角色。在此过程中,您将更清楚地了解 ASP.NET Core 应用程序的工作原理。您还将了解磁盘上的物理文件与您在 Visual Studio 的“解决方案资源管理器”窗口中看到的内容之间的区别。

2.2.1 WebApplication1.sln 文件

SLN 文件称为解决方案文件。在 Visual Studio 中,解决方案充当管理相关项目的容器,解决方案文件包含每个项目的详细信息,包括项目文件的路径。Visual Studio 在打开解决方案时使用此信息加载所有相关项目。

较大的 Web 应用程序通常由多个项目组成:一个负责 UI 的 Web 应用程序项目和多个类库项目,每个项目负责应用程序中的一个逻辑层,例如数据访问层或业务逻辑层。也可能有一些单元测试项目。然后,您可能会看到其他项目添加了表示其用途的后缀:WebApplication1.Tests、WebApplication1.Data 等。

此应用程序由单个项目组成。因此,它实际上根本不需要放在解决方案中,但 Visual Studio 仍然会创建解决方案文件。如果使用 CLI 创建应用程序,则通过 dotnet new sln 命令创建了解决方案文件。然后,通过 dotnet sln add 命令将 WebApplication1 项目显式添加到解决方案中。您可以跳过这些步骤,仅在需要向应用程序添加其他项目时才创建解决方案文件。

2.2.2 WebApplication1.csproj 文件

CSPROJ 文件是一个基于 XML 的文件,其中包含有关生成系统(称为 MSBuild)的项目的信息,它负责将源代码文件转换为可针对 .NET 运行时执行的格式。首先,项目文件包含与项目目标的 .NET Framework 版本和您正在使用的 SDK 相关的信息。Microsoft.NET.Sdk 是基本 SDK,用于构建控制台和类库项目等。Web 应用程序是针对 Microsoft.NET.Sdk.Web SDK 构建的。

项目文件包括两个附加属性:Nullable 和 ImplicitUsings。它们用于启用或关闭较新的 C# 功能。第一个属性为项目设置可为 null 的注释和警告上下文。简单来说,这控制了代码分析器的反馈级别,这些分析器会在代码中查找 NullReferenceException 的潜在来源。在 .NET 社区中,此异常造成的混淆和问题比其他任何异常都多。该功能称为可为 null 的引用类型,默认处于启用状态。您可以通过将值更改为 disable 来关闭它。

ImplicitUsings 属性用于启用或禁用 C# 10 功能,该功能可减少代码文件中所需的显式 using 指令的数量。相反,它们是在 SDK 中全局设置的。已全局启用的 using 指令的选择包括以下常用 API:

• System
• System.Collections.Generic
• System.Linq
• System.Threading.Tasks

此外,该列表还包括一系列特定于 ASP.NET Core 的 API。默认情况下,此功能也处于启用状态。您可以通过将值设置为 disable 或删除该属性来禁用它。
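结合这两个属性,.NET 6 Razor Pages 模板生成的项目文件大致如下(示意,实际内容可能随模板版本略有差异):

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>

</Project>
```

将 Nullable 或 ImplicitUsings 的值改为 disable 即可关闭对应功能。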

随着时间的推移,项目文件将包含有关项目所依赖的包或外部库的信息。您可以手动将包添加到此文件中,或者更常见的是使用工具添加包(包管理器),该工具将为您更新工程文件的内容。您可以编辑文件的内容以自定义构建的元素。

项目文件在 Visual Studio 中的“解决方案资源管理器”中不可见。您可以通过右键单击 Solution Explorer 中的项目并选择 Edit Project File(编辑项目文件)来访问它。如果您使用的是 VS Code,则该文件在文件资源管理器中可见,您可以像访问任何其他文件一样访问和编辑它。

2.2.3 bin 和 obj 文件夹

bin 和 obj 文件夹在构建过程中使用。这两个文件夹又细分为两个文件夹(Debug 和 Release),它们对应于构建项目时使用的构建配置。最初,bin 和 obj 文件夹仅包含 Debug 文件夹。只有在 Release 模式下构建后,才会创建 Release 文件夹。除非您在上一节中按 Ctrl-F5 时更改了任何配置设置,否则您的应用程序目前仅在 Debug 模式下构建。

obj 文件夹包含构建过程中使用的工件,bin 文件夹包含构建的最终输出。在第 14 章中发布应用程序时,您将更详细地了解此输出。如果删除 bin 或 obj 文件夹,则会在下次生成项目时重新创建它们。

默认情况下,这两个文件夹在解决方案资源管理器中都不可见。但是,如果单击“显示所有文件”选项,则可以看到它们以虚线轮廓表示。此指示符表示文件夹不被视为项目本身的一部分。同样,它们并没有对 VS Code 用户隐藏。

2.2.4 Properties 文件夹

Properties 文件夹包含特定于项目的资源和设置。当前文件夹中的唯一项目是 launchSettings.json 文件,其中包含运行应用程序时要使用的设置的详细信息。

第一组设置与用于在本地运行应用程序的 IIS Express Web 服务器配置相关。IIS Express 是完整 IIS Web 服务器的轻量级版本,与 Visual Studio 一起安装。

第二组设置表示不同的启动配置文件。IIS Express 配置文件指定应用程序应在 IIS Express 上运行。请注意,applicationUrl 包含一个端口号。为 SSL 端口提供了不同的端口号。这些是按项目生成的。如果您愿意,您可以自由更改端口号。

第二个配置文件使用项目名称来标识自身。如果选择此配置文件来启动应用程序,它将完全在其内部或进程内 Web 服务器上运行。默认服务器实现称为 Kestrel。您将在本章后面了解更多信息。最终配置文件 (WSL 2) 与在适用于 Linux 的 Windows 子系统中运行应用程序有关。本书不涉及 WSL,但如果您想了解更多信息,Microsoft 文档提供了一个很好的起点:https://docs.microsoft.com/en-us/windows/wsl/

2.2.5 wwwroot 文件夹

wwwroot 文件夹是 Web 应用程序中的一个特殊文件夹。它在 Solution Explorer 中有一个地球图标。它是 Web 根目录,包含静态文件。由于是 Web 根目录,wwwroot 被配置为允许直接浏览其内容。它是样式表、JavaScript 文件、图像和其他内容的正确位置,这些内容在下载到浏览器之前不需要任何处理。因此,您不应将任何不希望用户能够访问的文件放在 wwwroot 文件夹中。可以将备用位置配置为 Web 根目录,但新位置不会在“解决方案资源管理器”中获得特殊图标。

项目基架在 wwwroot 文件夹中创建了三个文件夹:css、js 和 lib。css 文件夹包含一个 site.css 文件,其中包含模板站点的一些基本样式声明。js 文件夹包含一个名为 site.js 的文件,除了一些注释外,它什么都没有。一般的想法是,您将自己的 JavaScript 文件放在此文件夹中。lib 文件夹包含外部样式和脚本库。模板提供的库是 Bootstrap,一种流行的 CSS 框架;jQuery,一个跨浏览器的 JavaScript 实用程序库;以及两个基于 jQuery 的验证库。它们用于验证表单提交。

wwwroot 中的文件夹结构不是一成不变的。你可以随心所欲地移动东西。

2.2.6 Pages 文件夹

按照约定,Pages 文件夹配置为 Razor 页面文件的主页。这是框架希望找到 Razor 页面的位置。

项目模板从三个页面开始。您已经看到了其中两个 - 索引(或主页)和隐私页面。当然,您的示例包括您创建的 Welcome 页面。项目模板提供的第三个页面是 Error。查看磁盘上的实际文件夹,您会注意到每个页面都包含两个文件:一个扩展名为 .cshtml 的文件(Razor 文件),另一个以 .cshtml.cs 结尾的文件(C# 代码文件)。当您查看 Solution Explorer 时,这可能不是立即显而易见的。默认情况下,文件是嵌套的(图 2.10)。您可以通过在解决方案资源管理器顶部的工具栏中禁用文件嵌套或单击页面旁边的展开器图标来查看它们,这不仅会显示嵌套文件,还会显示一个显示 C# 类大纲(包括属性、字段和方法)的树。

图 2.10 解决方案资源管理器自动嵌套相关文件。您可以使用 menu 命令切换文件嵌套。

顶级文件 (.cshtml 文件) 是 Razor 页面文件。它也称为内容页面文件或视图文件。为了保持一致性,我将其称为 Razor 页面(单数,带有小写 p 以区别于 Razor Pages 框架)。如上一节所示,此文件充当视图模板,包含 Razor 语法,该语法是 C# 和 HTML 的混合体,因此,文件扩展名是 cs 和 html。第二个文件是一个 C# 代码文件,其中包含一个派生自 PageModel 的类。此类充当 Razor 页面的组合控制器和视图模型。您将在下一章中详细介绍这些文件。
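举例来说,模板为 Welcome 页面生成的 PageModel 代码文件(Welcome.cshtml.cs)大致如下(示意):

```csharp
using Microsoft.AspNetCore.Mvc.RazorPages;

namespace WebApplication1.Pages
{
    public class WelcomeModel : PageModel
    {
        // OnGet 在页面收到 GET 请求时执行
        public void OnGet()
        {
        }
    }
}
```

目前该类还是空的,下一章会向其中添加处理逻辑。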

Pages 文件夹中还有两个文件 — 一个名为 _ViewStart.cshtml,另一个名为 _ViewImports.cshtml。以前导下划线命名的 Razor 文件不应直接呈现。这两个文件在应用程序中起着重要作用,不应重命名它们。这些文件的用途将在下一章中解释。

Pages 文件夹还包含一个 Shared 文件夹。其中还有另外两个 Razor 文件,名称中都有前导下划线。_Layout.cshtml 文件充当其他文件的主模板,其中包含常见内容,包括您在上一节中更改的导航。另一个 Razor 文件(_ValidationScriptsPartial.cshtml)是部分文件。部分文件通常用于包含可插入页面或布局的 UI 代码片段。它们支持 HTML 和 Razor 语法。此特定部分文件包含对客户端验证库的一些脚本引用。您将在第 5 章中了解这些内容。最后一个文件是一个名字有点奇怪的 CSS 样式表:_Layout.cshtml.css。它包含应用于 _Layout.cshtml 文件的样式声明。这一命名约定由 .NET 6 中的一项新功能使用,称为 CSS 隔离。您将在第 11 章中了解这是什么以及它是如何工作的。

2.2.7 应用设置文件

应用程序设置文件用作存储应用程序范围的配置信息的地方。项目模板包含两个应用程序设置文件:appsettings.json 和 appsettings.Development.json。第一个文件 appsettings.json 是将与已发布应用程序一起部署的版本。另一个版本是开发应用程序时使用的版本。文件内容的结构为 JSON。

这两个版本都包含用于日志记录的基本配置。开发版本还包含一个名为 DetailedErrors 的配置条目,该条目设置为 true。这样就可以将应用程序中发生的任何错误的完整详细信息呈现到浏览器。主机筛选则是在生产版本中配置的。您几乎可以在应用程序设置文件中存储任何应用程序配置信息。稍后,您将使用它们来存储数据库连接字符串和电子邮件设置。

应用程序设置文件并不是您可以存储配置信息的唯一位置。许多其他位置(包括环境变量)都是开箱即用的,您可以配置自己的位置。您将在第 14 章中了解有关配置的更多信息。

2.2.8 Program.cs

熟悉 C# 编程的读者都知道,Program.cs 提供了控制台应用程序的入口点。按照约定,它包含一个静态 Main 方法,其中包含用于执行应用程序的逻辑。此文件没有什么不同,只是没有可见的 Main 方法。项目模板利用了一些较新的 C# 语言功能,这些功能在 C# 10 中引入,其中之一是顶级语句。此功能允许您省略 Program.cs 中的类声明和 Main 方法,并开始编写可执行代码。编译器将生成 class 和 Main 方法,并在该方法中调用您的可执行代码。

Program.cs 文件中的代码负责配置(或称引导)Web 应用程序并启动它。在 .NET 5 及更早版本中,此代码被拆分为两个单独的文件,大部分应用程序配置被委托给一个名为 Startup 的单独类。随着 .NET 6 的发布,ASP.NET 背后的开发人员试图降低过去存在于基本应用程序配置中的复杂性。他们没有将代码拆分到两个文件中,而是将其合并到一个文件中,利用一些新的 C# 功能进一步减少样板代码,并引入了他们所说的最小托管 API,使启动和运行 Razor Pages 应用程序所需的代码减少到最少 15 行。在以前的版本中,它接近 80 行代码,分布在两个文件中。

第一行代码创建一个 WebApplicationBuilder:

var builder = WebApplication.CreateBuilder(args);

请记住,此代码将在编译器生成的 Main 方法中执行,因此传递给 CreateBuilder 方法的 args 是由调用应用程序的任何进程传递到任何 C# 控制台应用程序的 Main 方法的标准 args。

WebApplicationBuilder 是 .NET 6 中的新增功能,与另一种新类型(WebApplication)一起构成了最小托管 API 的一部分,您稍后将介绍它。WebApplicationBuilder 具有多个属性,每个属性都支持对应用程序的各个方面进行配置:

• Environment - 提供有关应用程序运行的 Web 托管环境的信息
• Services — 表示应用程序的服务容器(请参阅 第 7 章)
• Configuration - 启用配置提供程序的组合(请参阅第 14 章)
• Logging — 通过 ILoggingBuilder 启用日志记录配置
• Host — 支持配置特定于应用程序主机的服务,包括第三方 DI 容器
• WebHost — 启用 Web 服务器配置

应用程序主机负责引导应用程序、启动和关闭应用程序。术语 bootstrapping 是指应用程序本身的初始配置。此配置包括以下内容:

• 设置内容根路径,这是包含应用程序内容文件的目录的绝对路径
• 从传入 args 参数、app-settings 文件和环境变量的任何值加载配置信息
• 配置日志记录提供程序

所有 .NET 应用程序都以这种方式进行配置,无论它们是 Web 应用程序、服务还是控制台应用程序。最重要的是,为 Web 应用程序配置了 Web 服务器。Web 服务器通过 WebHost 属性进行配置,该属性表示 IWebHostBuilder 类型的实现。默认 Web 服务器是名为 Kestrel 的轻量级且速度极快的 Web 服务器。Kestrel 服务器已合并到您的应用程序中。IWebHostBuilder 还配置主机筛选以及与 Internet Information Services (IIS)(即 Windows Web 服务器)的集成。

IWebHostBuilder 对象公开了多个扩展方法,这些方法支持进一步配置应用程序。例如,前面我讨论了将 wwwroot 文件夹的替代路径配置为 Web 根路径。如果有充分的理由,WebHost 属性使您能够执行此操作。在下面的清单中,content 文件夹被配置为 wwwroot 的替代品。

列表 2.6 配置静态文件位置

builder.WebHost.UseWebRoot("content");

Services 属性提供依赖项注入容器的入口点,该容器是应用程序服务的集中位置。您将在第 7 章中更详细地探讨依赖关系注入,但与此同时,只需知道容器负责管理应用程序服务的生命周期并根据需要为应用程序的任何部分提供实例就足够了。默认模板包括以下代码行,这些代码行使 Razor Pages 基础结构所依赖的基本服务可供应用程序使用:

builder.Services.AddRazorPages();

这些服务包括 Razor 视图引擎、模型绑定、请求验证、标记帮助程序、内存缓存和 ViewData。如果这些术语看起来不熟悉,请不要担心。在阅读本书时,您将更详细地了解它们。需要注意的重要一点是,Services 属性为您提供了一个位置,可以根据需要注册和配置其他服务。

有时,这些服务是你选择启用的框架的一部分(如 Razor Pages 示例),有时它们表示你作为单独包安装的服务。通常,它们将是您自己编写的包含应用程序逻辑的服务,例如获取和保存数据。
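注册自定义服务的大致形式如下(示意代码,其中的接口和实现类均为假设,并非模板自带):

```csharp
// 示意:向服务容器注册自己的应用服务(以下类型均为假设)
builder.Services.AddSingleton<IClockService, ClockService>();  // 整个应用共享一个实例
builder.Services.AddScoped<IOrderService, OrderService>();     // 每个请求一个实例
builder.Services.AddTransient<IEmailSender, EmailSender>();    // 每次解析都创建新实例
```

AddSingleton、AddScoped 和 AddTransient 对应三种不同的服务生命周期,第 7 章会详细解释。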

Build 方法将配置的应用程序作为 WebApplication 类型的实例返回。此类型表示其他三种类型的合并:

• IApplicationBuilder — 允许配置应用程序的请求或中间件管道
• IEndpointRouteBuilder - 启用将传入请求映射到特定页面的配置
• IHost - 提供启动和停止应用程序的方法

WebApplication 允许您注册中间件组件来构建和配置应用程序的请求管道。现在,让我们从高级角度看一下以下清单中的默认配置。您将在本书的后面详细了解 pipeline 中更有趣的部分。

列表 2.7 默认请求管道

if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Error");
    app.UseHsts();
}
app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();
app.UseAuthorization();
app.MapRazorPages();
app.Run();

每个中间件都通过 IApplicationBuilder 类型的扩展方法添加到管道中,该接口由 WebApplication 实现。IWebHostEnvironment 可通过 Environment 属性访问,其中包含有关当前环境的信息。您将在第 14 章中了解有关环境的更多信息,但目前只需知道此属性用于确定应用程序当前是否在 Development 模式下运行。如果不是,则调用 UseExceptionHandler 方法,该方法添加的中间件会捕获错误,并使用您在 Pages 文件夹中看到的 Error 页面显示一条平淡无奇的消息,向用户隐藏有关错误细节的任何敏感信息,例如包含用户凭据的数据库连接字符串或有关服务器上文件路径的信息;而在开发模式下,错误的完整详细信息会直接显示在浏览器中。添加 HTTP 严格传输安全标头的中间件也已注册 (app.UseHsts()),但前提是应用程序未在开发模式下运行。此标头告诉浏览器在访问网站时仅使用 HTTPS。我在第 13 章中更详细地介绍了这一点。

UseHttpsRedirection 方法添加了中间件,以确保任何 HTTP 请求都重定向到 HTTPS。然后,在此之后,注册静态文件中间件。默认情况下,ASP.NET Core 应用程序不支持提供静态文件,例如图像、样式表和脚本文件。您必须选择使用此功能,并且可以通过添加静态文件中间件来实现。此中间件将 wwwroot 文件夹配置为允许直接请求静态文件,并将其提供给客户端。

路由中间件负责根据请求中包含的信息选择应执行的端点。我在第 4 章中讨论了路由在 Razor Pages 中的工作原理。然后,注册授权中间件,它负责确定当前用户是否有权访问所请求的资源。授权在第 10 章中介绍。

最后,MapRazorPages 方法将中间件添加到最初将 Razor Pages 配置为终结点的管道。此后,此中间件还负责执行请求。

2.3 理解 middleware

哇,这么多抽象的术语:端点、中间件、管道……但它们究竟意味着什么?它们代表什么?在下一节中,我们将更详细地探讨它们。

注意 ASP.NET Core 中间件是一个相当大的话题。我将只介绍可能在大多数 Razor Pages 应用程序中使用的区域。如果您想探索更高级的中间件概念,例如分支管道,我推荐 Andrew Lock 的 ASP.NET Core in Action, Second Edition(Manning,2021 年)。

首先,鉴于 Razor Pages 应用程序的目的是提供对 HTTP 请求的响应,因此查看和了解 HTTP 请求的性质以及它在 Razor Pages 应用程序中的表示方式是合适的。这将构成您了解管道和终端节点的基础。

2.3.1 HTTP 刷新器

超文本传输协议 (HTTP) 是万维网的基础。它是在客户端-服务器模型中的系统之间传输信息的协议。HTTP 事务可以看作由两个基本元素组成:请求和响应。请求是输入,响应是输出。客户端发起请求,服务器提供响应,如图 2.11 所示。

图 2.11 客户端(浏览器)发起 HTTP 请求,该请求被发送到服务器。服务器负责将请求路由到已配置的应用程序并返回 HTTP 响应。

HTTP 请求包含许多数据。请求消息的第一行 (起始行) 包括以下内容:

• HTTP 方法
• 资源的标识符
• 协议版本(例如 HTTP/1.1)

该方法由动词(例如 GET、POST、PUT、DELETE、TRACE 或 CONNECT)或名词(例如 HEAD 或 OPTIONS)表示。向网站请求最常用的方法是 GET 和 POST,其中 GET 主要用于从服务器请求数据,POST 主要用于将数据传输到服务器,尽管 POST 方法也可能导致数据被发送回客户端。这是本书中将介绍的仅有的两种方法。

该标识符由统一资源标识符 (URI) 表示。此特定数据通常也被称为统一资源定位符 (URL),就好像它们表示同一事物一样。从技术上讲,它们有所不同。就本书而言,知道所有 URL 都是 URI,但并非所有 URI 都是 URL 就足够了。RFC 3986 的 1.1.3 节详细解释了差异:https://www.ietf.org/rfc/rfc3986.txt。在本书的示例中,我使用的 URI 在所有情况下都是 URL。

该请求还包括一组标头 — 名称-值对,可用于向服务器提供可能影响其响应的其他信息。例如,If-Modified-Since 标头指定日期时间值。如果请求的资源在指定时间后未被修改,则服务器应返回 304 Not Modified 状态码;否则,它应该发送修改后的资源。其他标头可能会通知服务器响应的首选语言或请求者可以处理的内容类型。

该请求还可以包括 cookie,即浏览器存储的信息片段,这些信息片段可能特定于网站用户,也可能不特定于网站用户。Cookie 的最常见用途包括:在用户登录到网站后存储用户的身份验证状态,或存储令牌,用于唯一标识访客以进行 Analytics 跟踪。

请求还可以包括 body。通常,这适用于 POST 请求,其中正文包含提交给服务器的表单值。
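综合起来,一个典型的 POST 请求消息大致如下(示意,其中的路径和表单字段均为假设):

```
POST /welcome HTTP/1.1
Host: localhost:7235
Content-Type: application/x-www-form-urlencoded
Content-Length: 24

name=Jane&city=Rotterdam
```

第一行是起始行(方法、资源标识符、协议版本),随后是标头,空行之后是正文。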

服务器返回的响应的结构与此类似。它有一个状态行,该行指定正在使用的协议版本、HTTP 状态代码和一些用于描述结果的文本 - 正式名称为原因短语。状态行示例可能如下所示:

HTTP/1.1 200 OK

响应还可以包含标头,这些标头可以指定所发送数据的内容类型、大小以及用于对响应进行编码的方法(如果已编码),例如 gzip。响应通常包括一个包含已请求数据的正文。

2.3.2 HttpContext

HTTP 事务中的所有信息都需要可供 Razor Pages 应用程序使用。用于封装当前 HTTP 事务(请求和响应)的详细信息的对象是 HttpContext 类。处理请求的进程内 Web 服务器负责使用实际 HTTP 请求中的详细信息创建 HttpContext 的实例。它为您(开发人员)提供了通过正式 API 访问请求数据的权限,而不是强迫您自己解析 HTTP 请求以获取此信息。HttpContext 还封装了此特定请求的响应。Web 服务器创建 HttpContext 后,它就可供请求管道使用。HttpContext 以各种形式在整个应用程序中显示,因此您可以根据需要使用其属性。表 2.1 详细介绍了 HttpContext 的主要属性以及它们所代表的内容。

表 2.1 HttpContext 属性

Property Description
Request Represents the current HTTP request (see table 2.2)
Response Represents the current HTTP response (see table 2.3)
Connection Contains information about the underlying connection for the request, including the port number and the IP address information of the client
Session Provides a mechanism for storing data scoped to a user, while they browse the website
User Represents the current user (see chapters 9 and 10)

Request 属性由 HttpRequest 类表示。表 2.2 详细介绍了此类的主要属性及其用途。

表 2.2 主要 HttpRequest 属性

Property Description
Body A Stream containing the request body.
ContentLength The value of the content-length header detailing the size of the request, measured in bytes.
ContentType The value of the content-type header detailing the media type of the request.
Cookies Provides access to the cookies collection.
Form Represents submitted form data. You won't work with this directly. You are more likely to use model binding to access this data (see chapter 5).
Headers Provides access to all request headers.
IsHttps Indicates whether the current request was made over HTTPS.
Method The HTTP verb used to make the request.
Path The part of the URL after the domain and port.
Query Provides access to query string values as key-value pairs.

Response 属性由 HttpResponse 类表示。表 2.3 详细说明了该类的主要成员及其用途。

表 2.3 主要 HttpResponse 成员

Property Description
ContentLength The size of the response in bytes, which is assigned to the content-length header.
ContentType The media type of the response, which is assigned to the content-type header.
Cookies The cookie collection of the outgoing response.
HasStarted Indicates whether the response headers have been sent to the client. If they have, you should not attempt to alter the response. If you do, the values provided in the content-length and content-type headers may no longer be valid, leading to unpredictable results at the client.
Headers Provides access to the response headers.
StatusCode The HTTP status code for the response (e.g., 200, 302, 404, etc.).
WriteAsync An extension method that writes text to the response body, using UTF-8 encoding.
Redirect Returns a temporary (302) or permanent (301) redirect response to the client, together with the location to redirect to.


上表中详述的方法和属性在直接处理请求和响应时非常有用,例如在创建自己的中间件时就会这样做。
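例如,在一个内联中间件中可以直接使用这些成员读取请求并写出响应(示意代码,/info 路径为假设):

```csharp
app.Use(async (context, next) =>
{
    // 读取 HttpRequest 的属性,并通过 HttpResponse 写出结果
    if (context.Request.Path == "/info")
    {
        context.Response.ContentType = "text/plain; charset=utf-8";
        await context.Response.WriteAsync(
            $"Method: {context.Request.Method}, HTTPS: {context.Request.IsHttps}");
        return; // 短路管道,不再调用后续中间件
    }
    await next();
});
```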

2.3.3 应用程序请求管道

当 Web 服务器将请求路由到您的应用程序时,应用程序必须决定如何处理它。需要考虑许多因素。请求应定向或路由到何处?是否应记录请求的详细信息?应用程序是否应该只返回文件的内容?它应该压缩响应吗?如果在处理请求时遇到异常,会发生什么情况?发出请求的人是否真的被允许访问他们请求的资源?应如何处理 Cookie 或其他与请求相关的数据?

此决策过程称为请求管道。在 ASP.NET Core 应用程序中,请求管道由一系列软件组件组成,每个组件都有自己单独的职责。其中一些组件在请求进入应用程序的途中对请求进行操作,而其他组件则对应用程序返回的响应进行操作。有些组件可能两者兼而有之。执行这些功能的各个组件称为中间件。

图 2.12 说明了这个概念,显示了一个来自 Web 服务器的请求,然后通过多个中间件组件的管道传递,然后到达标记为 Razor Pages 的实际应用程序本身。

图 2.12 请求进入顶部的管道,流经所有中间件,直到到达 Razor Pages,在那里进行处理并作为响应返回。

这就是对示例应用程序主页的请求的流动方式。每个中间件都会检查请求,并确定在将请求传递到管道中的下一个中间件之前是否需要执行任何操作。请求到达 Razor Pages 并得到处理后,响应将沿相反方向流经管道返回服务器。管道本身在 Web 服务器上开始和结束。在图 2.13 中,静态文件中间件做出决策,要么将控制权传递给下一个中间件,要么使管道短路并返回响应。

图 2.13 中间件处理请求,并在请求针对已知文件时返回响应。

静态文件中间件会检查到达它的每个请求,以确定该请求是否针对已知文件,即驻留在 wwwroot 文件夹中的文件。如果是这样,静态文件中间件只会返回文件,从而使管道的其余部分短路。否则,请求将传递到管道中的下一个中间件。

2.3.4 创建 middleware

现在,您已经更好地了解了中间件所扮演的角色,您应该了解它是如何实现的,以便您可以为请求管道提供自己的自定义功能。本节将介绍如何创建您自己的中间件组件并将其注册到管道中。

中间件组件作为 RequestDelegate 实现,即一个将 HttpContext 作为参数并返回 Task 的 .NET 委托。换句话说,它是一个表示对 HttpContext 执行异步操作的方法:

public delegate Task RequestDelegate(HttpContext context);

代表 101:快速复习

.NET 中的委托是表示方法签名和返回类型的类型。下面的示例声明一个名为 MyDelegate 的委托,该委托将 DateTime 作为参数并返回一个整数:

delegate int MyDelegate(DateTime dt);

任何具有相同签名和返回类型的方法都可以分配给 MyDelegate 的实例并调用,包括下面显示的两个方法。

根据匹配的签名和返回类型为委托分配方法

int GetMonth(DateTime dt)                    ❶
{
 return dt.Month;
}
int PointlessAddition(DateTime dt)           ❶
{
    return dt.Year + dt.Month + dt.Day;
}

MyDelegate example1 = GetMonth;              ❷
MyDelegate example2 = PointlessAddition;     ❷
Console.WriteLine(example1(DateTime.Now));   ❸
Console.WriteLine(example2(DateTime.Now));   ❸

❶ 两种方法都采用 DateTime 参数并返回一个整数。
❷ 将两种方法都分配给委托实例。
❸ 通过委托实例调用方法。

你可以将内联匿名方法分配给委托:

MyDelegate example3 = delegate(DateTime dt) {
    return dt.AddYears(-100).Year; };
Console.WriteLine(example3(DateTime.Now));

更常见的是,您将看到以 lambda 表达式形式编写的匿名内联方法,其中推断了方法参数的数据类型:

MyDelegate example4 = (dt) => { return dt.AddYears(-100).Year; };
Console.WriteLine(example4(DateTime.Now));

因此,任何将 HttpContext 作为参数并返回任务的方法都可以用作中间件。

如前所述,中间件是通过 WebApplication 添加到管道中的。通常,中间件创建为单独的类并通过扩展方法注册,但也可以将 RequestDelegate 直接添加到管道。清单 2.8 展示了一个简单的方法,该方法将 HttpContext 作为参数并返回一个 Task,这意味着它满足 RequestDelegate 类型规范。如果您想尝试此示例,可以将方法添加到 Program.cs。您可能还需要在 Program.cs 顶部添加 using 指令,以将 Microsoft.AspNetCore.Http 引入范围。

示例 2.8 RequestDelegate 将 HttpContext 作为参数并返回 Task

async Task TerminalMiddleware(HttpContext context)
{
    await context.Response.WriteAsync("That's all, folks!");
}

此特定中间件将消息写入响应。控制权不会传递给任何其他中间件组件,因此这种类型的中间件称为终端中间件。它会终止管道中的进一步处理。终端中间件通过 WebApplication 对象的 Run 方法注册:

app.Run(TerminalMiddleware);

RequestDelegate 是标准的 .NET 委托,因此也可以使用 lambda 表达式将其内联编写为匿名函数,而不是命名方法。

列表 2.9 使用 lambda 表达式内联指定主体的委托

app.Run(async context => 
     await context.Response.WriteAsync("That's all, folks!")
);

尝试使用任一方法注册此中间件:将 app.Run 调用放在管道的开头,即放在检查当前环境是否为 Development 的条件语句之前。

列表 2.10 将中间件添加到管道的开头

app.Run(async context => 
     await context.Response.WriteAsync("That's all, folks!")
);
if (!app.Environment.IsDevelopment())
{
   ...

然后运行应用程序。您应该看到如图 2.14 所示的输出。

图 2.14 中间件的输出

下一个清单说明了一个中间件,它有条件地将处理传递给管道中的下一个中间件。

列表 2.11 有条件地将控制权传递给下一个中间件的中间件

async Task PassThroughMiddleware(HttpContext context, Func<Task> next)
{
    if (context.Request.Query.ContainsKey("stop"))
    {
        await context.Response.WriteAsync("Stop the world");
    }
    else
    {
         await next();
    }
}

此示例将 HttpContext 作为参数,同时还接受一个返回 Task 的 Func 参数,表示管道中的下一个中间件。如果请求包含名为 stop 的查询字符串参数,则中间件会将管道短路,并将 Stop the world 写入响应,不会调用其他中间件。否则,它将调用传入的 Func<Task>,将控制权传递给下一个中间件。将控制权传递给管道中下一个组件的中间件使用 Use 方法注册:

app.Use(PassThroughMiddleware);

同样,此中间件可以编写为内联 lambda。

清单 2.12 使用 Use 方法内联注册中间件

app.Use(async (context, next) =>
{
    if (context.Request.Query.ContainsKey("stop"))
    {
        await context.Response.WriteAsync("Stop the world");
    }
    else
    {
        await next();
    }
});

你可以将代码放在 await next() 之后,使其在控制权传递给下一个中间件之后运行。假设没有其他中间件使管道短路,则您放置在那里的任何逻辑都将在管道反转方向返回 Web 服务器时执行。例如,您可能希望这样做来记录日志。

清单 2.13 在调用其他中间件后执行函数

app.Use(async (context, next) =>
{
    if (context.Request.Query.ContainsKey("stop"))
    {
        await context.Response.WriteAsync("Stop the world");
    }
    else
    {
        await next();
        app.Logger.LogInformation("The world keeps turning");
    }
});

注册中间件时,位置很关键。如果要将此中间件放在管道的开头,它将针对每个请求执行并记录信息消息,除非找到指定的查询字符串项。假设你要在 static files middleware 之后注册此中间件。在这种情况下,它只会执行和记录对非静态文件资源的请求,因为静态文件中间件在返回静态文件时会使管道短路。
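例如,下面的注册顺序(示意代码)只会记录未被静态文件中间件短路的请求:

```csharp
app.UseStaticFiles();   // 静态文件请求在此返回,后面的中间件不会执行

app.Use(async (context, next) =>
{
    // 只有非静态文件请求才会到达这里
    app.Logger.LogInformation("Handling {Path}", context.Request.Path);
    await next();
});

app.UseRouting();
```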

2.3.5 中间件类

到目前为止,您看到的所有示例中间件都是作为内联 lambda 添加的。这种方法适用于你目前看到的简单中间件,但一旦中间件变得复杂,这种方法很快就会力不从心,可重用性和可测试性都会受到不利影响。此时,您应该在单独的类中编写中间件。

有两种方法可以实现中间件类。第一种选择是使用基于约定的方法,该方法从一开始就是 ASP.NET Core 的一部分。第二个选项涉及实现 IMiddleware 接口,该接口与 Razor Pages 同时引入 ASP.NET Core 2.0。

基于约定的方法

约定是必须应用于某些组件设计的规则,这些组件旨在与框架一起使用,以便它们按预期方式运行。可能必须以特定方式命名类,以便框架可以识别它的意图。例如,MVC 中的 controller 类就是这种情况,其名称必须包括 Controller 作为后缀。或者,可能适用一个约定,指定为特定用例设计的类必须包含以某种方式命名并带有预定义签名的方法。

必须应用于基于约定的中间件类的两个约定是:(1) 声明一个构造函数,该构造函数将 RequestDelegate 作为参数,表示管道中的下一个中间件,以及 (2) 一个名为 Invoke 或 InvokeAsync 的方法,该方法返回一个 Task 并至少具有一个参数,第一个参数是 HttpContext。

要尝试此操作,请将名为 IpAddressMiddleware 的新类添加到应用程序中。为简单起见,以下示例直接添加到项目的根目录中。将类中的代码替换为下一个清单中的代码,该清单展示了一个实现这些约定并记录访客 IP 地址的中间件类。

列表 2.14 基于约定的方法的中间件类

namespace WebApplication1
{
    public class IpAddressMiddleware
    {
        private readonly RequestDelegate _next;
        public IpAddressMiddleware(RequestDelegate next) => _next = next;   ❶

        public async Task InvokeAsync(HttpContext context, 
         ILogger<IpAddressMiddleware> logger)                     ❷
        {
            var ipAddress = context.Connection.RemoteIpAddress;
            logger.LogInformation($"Visitor is from {ipAddress}");  ❸
            await _next(context);                                   ❹
        }
    }
}

❶ 构造函数将 RequestDelegate 作为参数。
❷ InvokeAsync 方法返回一个任务,并将 HttpContext 作为第一个参数。任何其他服务都将注入到 Invoke/InvokeAsync 方法中。
❸ 在 InvokeAsync 方法中执行处理
❹ 将控制权传递给管道中的下一个中间件

接下来,将 using 指令添加到 Program.cs 文件的顶部,以将 WebApplication1 命名空间引入范围:

using WebApplication1;
var builder = WebApplication.CreateBuilder(args);

中间件类通过 WebApplication 上的 UseMiddleware 方法添加到管道中。此方法有两个版本。第一个选项将类型作为参数:

app.UseMiddleware(typeof(IpAddressMiddleware));

第二个版本采用一个泛型参数,表示中间件类。这个版本是你更有可能遇到的版本:

app.UseMiddleware<IpAddressMiddleware>();

或者,建议您在 IApplicationBuilder 上创建自己的扩展方法来注册中间件。以下示例(如下面的清单所示)放置在名为 Extensions 的类中,该类也已添加到项目的根目录中。

清单 2.15 使用扩展方法注册中间件

namespace WebApplication1
{
    public static class Extensions
    {
        public static IApplicationBuilder UseIpAddressMiddleware(this IApplicationBuilder app)
        {
            return app.UseMiddleware<IpAddressMiddleware>();
        }
    }
}

然后,扩展方法的使用方式与注册框架中间件时遇到的所有其他扩展方法相同:

app.UseIpAddressMiddleware();

在这种情况下,您可能希望在 static files 中间件之后注册此中间件,这样它就不会为每个请求的文件记录同一访问者的 IP 地址。

遵循基于约定的方法的中间件在应用程序首次启动时创建为单一实例,这意味着在应用程序的生命周期内只创建一个实例。此实例将重复用于到达它的每个请求。

实现中间件

编写新中间件类的推荐方法涉及实现 IMiddleware 接口,该接口公开一种方法:

Task InvokeAsync(HttpContext context, RequestDelegate next)

下一个清单显示了您使用基于约定的方法创建的相同 IpAddressMiddleware,并进行了重构以实现 IMiddleware。

列表 2.16 重构 IpAddressMiddleware 以实现 IMiddleware

public class IpAddressMiddleware : IMiddleware                             ❶
{
    private ILogger<IpAddressMiddleware> _logger;
    public IpAddressMiddleware(ILogger<IpAddressMiddleware> logger)
        => _logger = logger;                                               ❷

    public async Task InvokeAsync(HttpContext context, RequestDelegate next)❸
    {
        var ipAddress = context.Connection.RemoteIpAddress;
        _logger.LogInformation($"Visitor is from {ipAddress}");
        await next(context);
    }
}

❶ 中间件类实现 IMiddleware 接口。
❷ 依赖项被注入到构造函数中。
❸ InvokeAsync 将 HttpContext 和 RequestDelegate 作为参数。

InvokeAsync 与使用基于约定的方法编写的 InvokeAsync 非常相似,不同之处在于这次的参数是 HttpContext 和 RequestDelegate。该类所依赖的任何服务都是通过中间件类的构造函数注入的,因此需要字段来保存注入的服务的实例。

此中间件的注册方式与基于约定的示例完全相同:通过 UseMiddleware 方法或扩展方法。但是,基于 IMiddleware 的组件还需要执行一个额外的步骤:它们还必须注册到应用程序的服务容器中。在第 7 章中,您将了解有关服务和依赖关系注入的更多信息,但目前,只需将下一个清单中的第二行代码添加到 Program.cs 文件即可。

清单 2.17 将 IMiddleware 注册为服务

builder.Services.AddRazorPages();
builder.Services.AddScoped<IpAddressMiddleware>();

那么,为什么有两种不同的方法可以创建中间件类,您应该使用哪一种呢?嗯,基于约定的方法要求您学习特定的约定并记住它们。没有编译时检查来确保你的中间件正确实现约定。这种方法称为弱类型。通常,只有在运行时你才会发现自己忘记了将方法命名为 Invoke 或 InvokeAsync,或者忘记了第一个参数应该是 HttpContext。如果你和我一样,你会经常发现自己得回头查阅文档,以提醒自己约定的细节。

第二种方法会产生强类型中间件,因为您必须实现 IMiddleware 接口的成员;否则,编译器会抱怨,您的应用程序甚至不会构建。因此,IMiddleware 方法不太容易出错,并且实现起来可能更快,尽管您必须采取额外的步骤来向服务容器注册中间件。

这两种方法之间还有另一个区别。我之前提到过,在首次构建管道时,基于约定的中间件被实例化为单例。而 IMiddleware 组件由实现 IMiddlewareFactory 接口的组件针对每个请求进行实例化,这种差异会影响中间件所能依赖的服务的生命周期选择。我会在第 7 章中更详细地解释服务生命周期。现在,请记住:生命周期(lifetime)不是单例的服务,不应该被注入到单例的构造函数中。这意味着非单例服务不应该被注入到基于约定的中间件的构造函数中;但是,它们可以注入到 IMiddleware 组件的构造函数中。请注意,非单例服务可以安全地注入到基于约定的中间件的 Invoke/InvokeAsync 方法中。
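把上面的规则落到代码上,大致是下面这个示意性的片段(其中的 IVisitorLog 是本文为演示而假设的一个 scoped 服务接口,并非书中示例):基于约定的中间件把非单例服务放在 InvokeAsync 的参数里接收,而不是放进构造函数。

```csharp
public class IpAddressMiddleware
{
    private readonly RequestDelegate _next;

    // 构造函数中只注入对单例安全的依赖(这里是 RequestDelegate 本身)
    public IpAddressMiddleware(RequestDelegate next) => _next = next;

    // 非单例服务(IVisitorLog 为假设的 scoped 服务)通过方法参数注入,
    // 框架会在每次请求时解析一个新实例,因此不会有生命周期问题
    public async Task InvokeAsync(HttpContext context, IVisitorLog log)
    {
        log.Record(context.Connection.RemoteIpAddress?.ToString());
        await _next(context);
    }
}
```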

需要注意的是,大多数框架中间件都是使用基于约定的方法编写的。这主要是因为它大部分是在引入 IMiddleware 之前编写的。虽然没有迹象表明框架设计人员认为有必要将现有组件迁移到 IMiddleware,但他们建议您将 IMiddleware 用于您自己创建的任何中间件。

我们已经详细研究了如何使用中间件来构建请求管道,但尚未真正详细地介绍已添加到默认项目模板中的中间件。这将在接下来的章节中更深入地介绍。具体来说,我们将在第 4 章中介绍路由和端点中间件如何组合,在第 10 章中介绍授权的工作原理,在第 12 章中介绍如何管理自定义错误页面。

总结

Razor Pages 应用程序的起点基于模板。
Razor Pages 应用程序创建为项目。
解决方案是用于管理项目的容器。
Razor 语法可用于向页面添加动态内容。
Razor 语法支持将 C# 代码嵌入到 HTML 中。
Razor 运行时编译通过刷新浏览器使对 Razor 文件的更改可见。
布局页面充当整个网站的主模板。
Razor Pages 应用程序是以 Main 方法作为入口点的控制台应用程序。Main 方法作为 C# 10 中顶级语句功能的一部分隐藏在视图中。
WebApplicationBuilder 用于配置应用程序的服务和请求管道。
请求管道确定应用程序的行为。
请求管道由中间件组件组成。
中间件作为 RequestDelegate 实现,RequestDelegate 是一个将 HttpContext 作为参数并返回 Task 的函数。
中间件通过 WebApplication 对象添加到管道中。中间件可以终止管道或将控制权传递给下一个中间件。
Middleware 将按照其注册顺序进行调用。
可以使用内联 lambda 表达式添加简单的中间件。
复杂中间件可以创建为单独的类,并使用 IApplicationBuilder 类型的扩展方法进行注册。
中间件类应使用约定或实现 IMiddleware 接口。
基于约定的中间件实例化为单一实例,并且应该通过 Invoke/InvokeAsync 方法获取依赖项。
IMiddleware 按请求实例化,并且可以通过其构造函数获取依赖项。

ASP.NET Core Razor Pages in Action 1 Razor Pages 入门

ASP.NET Core Razor Pages in Action 1 Razor Pages 入门
本章涵盖

• 什么是 Razor Pages
• 为什么你应该使用 Web 开发框架
• 您可以使用 Razor Pages 做什么
• 何时以及为何应选择 Razor Pages
• 使用 Razor Pages 所需的工具

感谢您购买此 Razor Pages in Action 副本,无论是实体版还是虚拟版。通过这样做,您将了解什么是 Razor Pages、可以使用 Razor Pages 做什么,以及在决定 Razor Pages 是否是构建下一个 Web 应用程序的不错选择时需要考虑的事项。剧透警告:如果您想开发以页面为中心的交互式 Web 应用程序,那就好了!

本章将探讨 Razor Pages 的技术,并研究 Razor Pages 与其他 Web 开发框架之间的异同。完成本章后,您应该知道 Razor Pages 是否适合您的下一个应用程序,并期待在下一章中使用 Razor Pages 构建您的第一个应用程序。

如果可以的话,我要对你做一些假设。我假设您已经了解 Web 的核心技术(HTTP、HTML、CSS 和 JavaScript)以及它们如何协同工作。我假设您知道 Bootstrap 不只是鞋靴上的系带。我假设您已经了解 C# 或类似的面向对象语言,或者您能够在学习 Razor Pages 的同时学习 C#。最后,我以您了解关系数据库的基础知识为前提。我提到这一切,是因为本书不会详细介绍这些主题中的任何一个,尽管在我认为有助于提供上下文时,偶尔会带你复习一下。

还在我身边?好!我们走吧!

1.1 什么是 Razor Pages?

Razor Pages 是 Microsoft 提供的服务器端、跨平台、开源 Web 开发框架,使您能够将现有的 HTML、CSS 和 JavaScript 知识与 C# 语言结合使用,以构建以页面为中心的新式 Web 应用程序。现在,这有点拗口,所以让我们稍微分解一下。

1.1.1 Web 开发框架

首先,让我们看看什么是 Web 开发框架以及为什么您可能需要它。图 1.1 显示了本书出版商网站的主页 Manning.com。

图 1.1 Manning.com 屏幕截图

看看您可以在此网站上做的一些事情:

• 您可以搜索网站内容。
• 您可以从此站点购买东西。
• 您可以创建一个帐户并登录。
• 您可以注册时事通讯。
• 您可以查看最新的图书发行。
• 您可以查看您之前访问时查看的项目。
• 您可以阅读对作者的采访。

这是很多复杂的功能,而且 Manning 有这么多的书籍和作者,必须有大量的页面需要维护。想想重新设计网站以使其焕然一新所需的工作,将更改应用于所有这些无数的页面!

Web 开发框架通过为常见任务提供预构建的解决方案来减轻这些复杂性,因此您可以继续构建应用程序。以显示所有这些书籍的详细信息的任务为例。不必为每本书创建一个页面,框架(如 Razor Pages)将为您提供创建模板以显示任何书籍的功能。它还包括占位符,因此可以从中央存储(例如数据库)获取特定书籍的详细信息,例如其标题、作者、ISBN 和页数(很像邮件合并文档的工作方式)。现在,您只需管理所有书籍的一页,而不是每本书一页。

管理用户信息的任务怎么样?您将需要某种方法来存储此信息,并将其与用户提供的登录详细信息进行匹配。您还需要提供一种机制来标识当前用户已成功登录,这样他们就不必为随后查看的每个页面再次进行身份验证。您需要安全地完成所有这些操作,并采用可接受级别的加密技术。同样,一个好的框架将为您提供这些功能。您所要做的就是了解这些功能的工作原理并将它们接入您的应用程序,把实现加密和哈希等低级专业任务的难题留给真正懂行的专家。

这些示例涉及 Web 开发框架提供的几个功能。(图 1.2)。但名单并不止于此。想想开发 Web 应用程序可能需要您执行的任何常见重复性任务:处理传入的数据请求、映射不包含文件扩展名的 URL、与数据库通信、处理和验证表单提交、处理文件、发送电子邮件。使用包含这些功能的框架时,所有这些任务都会变得更加容易。当您完成本书时,您将能够使用 Razor Pages 轻松完成所有这些任务。

图 1.2 工作流图显示了涉及使用模板的过程在 Razor Pages 中的工作原理。此工作流从左下角开始,客户端请求 /book/razor-pages-in-action 或类似内容。白色箭头显示通过 Internet 到 Web 服务器的行进方向,该服务器找到正确的应用程序,然后将处理传递给 Razor 页面(其中包含 func())。然后,控制权将传递给应用程序服务层,该层负责从数据库中检索详细信息。数据将发送到服务层(请参阅灰色箭头),然后发送到 Razor 页面,在那里它与视图模板(带有 @ 符号的模板)合并以创建 HTML。生成的 HTML 通过应用程序传递到 Web 服务器,然后返回给客户端。

除了为功能需求提供解决方案外,框架通常还提供构建和部署应用程序的标准方法。它们可能会鼓励您在构建应用程序时采用经过验证的软件设计模式,以使结果更易于测试和维护。

从本质上讲,Web 开发框架可以通过为常见的重复性任务提供预构建和测试的解决方案来加快开发 Web 应用程序的过程。他们可以通过鼓励您按照一组标准工作来帮助您产生一致的结果。

1.1.2 服务器端框架

接下来,我们将了解一下 Razor Pages 是服务器端框架的含义。在开发动态 Web 应用程序时,您必须确定 HTML 的生成位置。您可以选择在用户的设备(客户端)或 Web 服务器上生成 HTML。

在客户端上生成 HTML 的应用程序或单页应用程序 (SPA) 在可以使用的技术方面受到限制。直到最近,你还只能真正使用 JavaScript 来创建这类应用程序。自从 Blazor 推出以来,这种情况发生了变化,它使你能够使用 C# 作为应用程序编程语言。若要详细了解此内容,请参阅 Chris Sainty 的 Blazor in Action(Manning,2021 年)。由于大多数应用程序处理都在用户的设备上进行,因此您必须注意其资源,您无法控制这些资源。在编写代码时,您还必须考虑浏览器功能之间的差异。另一方面,客户端应用程序可以带来丰富的用户体验,甚至可以与桌面应用程序非常相似。主要在客户端上呈现的应用程序的优秀示例包括 Facebook 和 Google Docs。

在服务器上呈现 HTML 的应用程序可以利用服务器支持的任何框架或语言,并拥有服务器可以提供的尽可能多的处理能力。这意味着 HTML 生成是可控且可预测的。此外,所有应用程序逻辑都部署到服务器本身,这意味着它与服务器一样安全。由于处理的输出应该是符合标准的 HTML,因此您不需要太担心浏览器的怪癖。

1.1.3 跨平台功能

可以在各种平台上创建和部署 Razor Pages 应用程序。Windows、Linux、macOS 和 Docker 均受支持。如果您想在超薄且昂贵的 MacBook Air 或 Surface Pro 上创建应用程序,您可以。或者,如果您更喜欢使用运行 Debian 或 Ubuntu 的翻新 ThinkPad,没问题。您仍然可以与使用不同平台的同事共享您的源代码。您的部署选项同样不受限制,这意味着您可以利用您的网络托管公司提供的最优惠价格。

1.1.4 开源

过去,当我第一次获得 Microsoft 最有价值专家(MVP,Microsoft 颁发给通过分享技术专业知识为社区做出重大贡献者的年度奖项)时,该奖项的好处之一是可以直接接触负责 MVP 专业领域的 Microsoft 产品组。就我而言(我确信那是一次张冠李戴),专业领域是 Microsoft 的 Web 开发框架 ASP.NET。

能够访问 ASP.NET 产品组是一个特权地位。请记住,在那个年代,Microsoft 在很大程度上是一家闭源公司。Microsoft MVP 比社区其他成员更早地了解了 Microsoft 在其领域的一些新产品计划。他们甚至可能会被邀请对他们的新产品进行一些 beta 测试或提供改进建议,尽管所有主要设计决策通常是在您获得访问权限时做出的。

几年后,Microsoft 已经转变为一家开源公司。他们开发平台的源代码在 GitHub 上供所有人查看。不仅如此,我们鼓励每个人通过提交可能的错误并提供改进、新功能、错误修复或更好的文档来为源代码做出贡献。与其被告知 Microsoft 将在遥远的将来发布什么,不如参与关于框架应该采取的方向的对话。任何人都可以在 GitHub 上询问有关框架的问题,通常可以从 Microsoft 开发人员那里获得答案。

Microsoft 在这种方法中同样获益,因为公司外部的专家为他们贡献了技术专长,甚至是时间;而框架的用户也从中受益,因为他们得到了受其他真实用户影响而打磨出的更好的产品。在撰写本文时,Razor Pages 所属的 ASP.NET 当前版本 ASP.NET Core 拥有超过 1,000 名活跃贡献者。

1.1.5 使用您现有的知识

Razor Pages 支持的服务器端语言是 C#,而视图模板主要由 Web 语言(HTML、CSS 和 JavaScript)组成。前面讨论的动态内容的占位符是 C# 代码。使用 Razor(一种简单易学的模板语法)在视图模板中嵌入服务器端表达式和代码。您无需学习任何新语言即可使用 Razor Pages。您甚至不需要真正了解 SQL 即可访问数据库,因为 .NET 包含您将用于生成数据库的框架。

1.2 您可以使用 Razor Pages 做什么?

Razor Pages 是一个以页面为中心的框架。它的主要目的是生成 HTML。因此,它适用于创建任何 Web 应用程序或由网页组成的基于 Web 的应用程序的任何部分。事实上,列出你不能用 Razor Pages 做的事情可能更容易!

您之前查看了 Manning 的网站 — 一个在线目录和电子商务网站。我被可靠地告知它不是用 Razor Pages 构建的,但它可能是。我在博客和教程网站上使用了 Razor Pages,其中数据存储在数据库中或作为需要转换为 HTML 的 Markdown 文件。我还在日常工作中使用 Razor Pages 来构建杂志网站,使用基于 Web 的内部工具来管理与业务相关的工作流程和报告,甚至是自定义内容管理系统。将页面作为要求的一部分的任何类型的 Web 应用程序都是 Razor Pages 的候选对象 - 从简单的博客网站到下一个 eBay。

Razor Pages 特别适用于任何类型的基于表单的应用程序。创建(Create)、读取(Read)、更新(Update)和删除(Delete)合称 CRUD,代表与模型持久存储相关的四个基本操作。框架提供了可用于快速搭建管理任何实体的表单集合的工具,您将在本书的后面部分使用它们。

1.3 支撑 Razor Pages 的技术

Razor Pages 位于以 .NET 6 为基础的技术栈的顶层。.NET 6 是 Microsoft 的一个大型框架,支持各种跨平台应用程序的开发,包括桌面、移动、云、游戏,当然还有 Web(图 1.3)。基础层也称为基类库 (BCL),包括大多数开发类型通用的较低级别的库,例如提供数据类型,或支持处理集合、文件、数据、线程、异常、电子邮件等的库。

图 1.3 .NET 堆栈。Razor Pages 是 MVC 框架的一项功能,而 MVC 框架又是 ASP.NET Core 框架的一部分,该框架代表 Web 开发层。

堆栈的 Web 层称为 ASP.NET Core。它包括用于处理 HTTP、路由、身份验证的库,以及用于支持 Razor 语法和 HTML 生成的类。除了我之前提到的 Blazor 之外,ASP.NET Core 还包括 SignalR,这是一个用于将数据从服务器推送到连接的客户端的框架。SignalR 用例的最简单示例是聊天应用程序。

除了 SignalR 和 Blazor 之外,还有 ASP.NET Core 模型-视图-控制器 (MVC) 框架,顶部是 Razor Pages。Razor Pages 是 MVC 框架的一项功能,它支持开发遵循 MVC 设计模式的 Web 应用程序。要理解这意味着什么,有必要了解 ASP.NET Core MVC 框架的性质。

1.3.1 ASP.NET Core MVC 框架

ASP.NET Core MVC 是 Microsoft 最初的跨平台 Web 应用程序框架。它是所谓的固执己见(opinionated)的框架:框架设计者对用户应当采用的架构决策、约定和最佳实践有自己的意见,以期产生最高质量的结果;然后,他们打造出一个引导用户采用这些架构决策、约定和最佳实践的框架。Microsoft 的开发人员将这一过程描述为帮助客户落入"成功之坑"。

1.3.2 模型-视图-控制器

MVC 框架背后的开发人员的主要架构决策是支持实现 MVC 模式的 Web 应用程序的开发,因此,框架的名称也应运而生。之所以做出这一决定,是因为 MVC 是 Web 开发中一种众所周知的表示设计模式,其目的是强制分离关注点 — 具体而言,应用程序模型及其表示的关注点。

MVC 中的 V 是视图(View),即页面。M 是应用程序模型(Model),这是一个宽泛的术语,表示应用程序中既不是视图也不是控制器的所有内容。模型包括数据访问代码、业务或领域对象(即应用程序所关注的事物,以 Manning 为例,就是书籍、作者和客户),以及用于管理它们的编程逻辑(即业务逻辑)。依照其他良好的软件设计实践,应用程序模型还需要进一步分离,但这不是 MVC 的职责:MVC 纯粹是一种表示层设计模式。在 UI 和模型的其余部分之间强制分离的主要原因是提高可维护性和可测试性。如果应用程序逻辑与 HTML 混在一起,就很难对其进行测试。

MVC 的控制器部分是模型和视图之间分离的主要方式。它的作用是接受请求,然后使用请求中的信息对模型执行命令。然后,它将获取该处理的结果并将其传递给视图进行显示。

控制器可以通过不同的方式实现。您可以创建类似前端控制器的东西来处理对整个应用程序或应用程序子集的请求,也可以使用页面控制器模式来处理对单个页面的请求。最初的 ASP.NET MVC 框架实现利用了前端控制器方法,其中单个控制器负责协调与应用程序中的功能或业务区域相关的多个端点(AuthorController、BookController 等)的处理。Razor Pages 实现页面控制器方法,控制器是从 PageModel 派生的类。

ASP.NET MVC 框架中的前端控制器的职责远不止页面控制器那么单一(图 1.4)。它们必须协调与特定业务领域相关的所有操作的处理:创建、更新、删除、获取列表、获取详细信息等。随着时间的推移,前端控制器可能会增长到数百行甚至数千行代码。它们引入的依赖项数量不断增加,这明确表明控制器做得太多了,变得难以管理。另一方面,页面控制器要简单得多,只需要管理其单个页面的处理,其中一些几乎没有任何代码。

图 1.4 MVC 中使用的前端控制器协调多个视图的处理,可能会变得非常繁忙和复杂。在 Razor Pages 中,每个页面都有自己的控制器,使它们保持精简且更易于使用。

1.3.3 Razor Pages 的设计目标

正如您已经了解到的,MVC 框架是一个固执己见的框架。如果您想使用它,就需要遵循框架作者的约定,或者自己想出某种变通办法。ASP.NET MVC 包含许多关于文件命名和存放位置的约定。例如,假设您的客户或老板希望您向现有 MVC 应用程序添加新功能。请记住,前端控制器类按照约定是面向功能的,您必须把表示该功能的新类文件添加到 Models 文件夹,把新的控制器类添加到 Controllers 文件夹,在 Views 文件夹中为新功能添加文件夹,把新的 Razor 视图添加到该文件夹,最后添加 viewmodel 类来表示视图的数据。如果之后要对该功能做任何更改,就必须在整个代码库的文件夹和文件之间来回跳转。

不熟悉 MVC 模式的开发人员可能会发现使用 ASP.NET 实现的复杂性相当令人生畏。如果您不熟悉 ASP.NET MVC 应用程序的结构,并且发现自己对我刚才描述的工作流有点迷茫,欢迎加入我的目标受众!甚至 Microsoft 自己也把这个框架描述为具有 “高概念数”。因此,Razor Pages (https://github.com/aspnet/mvc/issues/494) 的设计目标是在该背景下设定的,并隐式地将使用 Razor Pages 与 MVC 框架进行比较。它们包括(引用的 GitHub 问题)以下内容:

• 使用 ASP.NET Core 使动态 HTML 和表单更加容易,例如,在页面中打印 Hello World 需要多少个文件和概念,构建 CRUD 表单等。
• 减少以页面为中心的 MVC 方案所需的文件数量和文件夹结构的大小
• 简化实现常见的以页面为中心的模式所需的代码,例如动态页面、CRUD 表单等。
• 启用在必要时返回非 HTML 响应的功能,例如 404s
• 尽可能多地使用和公开现有的 MVC 基元(组件)

最终,Razor Pages 的引入使得运用 MVC 模式比使用现有框架更简单。这并不意味着 Razor Pages 只适用于简单场景,远非如此,尽管您可能会在各种网站上看到这种观点。但当被追问时,您会发现持这种观点的人大多承认没有尝试过 Razor Pages。

1.4 什么时候应该使用 Razor Pages?

与我之前的说法一致,列出 Razor Pages 不适合做的事情可能更容易,因此我将从何时不应考虑使用 Razor Pages 的示例开始本节:

• 单页应用程序 - 作为服务器端开发框架,Razor Pages 不是构建单页应用程序的合适工具,在单页应用程序中,应用程序通常用 JavaScript 编写并在浏览器中执行,除非需要服务器呈现 (http://mng.bz/YGWB)。
• 静态内容站点 – 如果站点仅由静态内容组成,则启动 Razor Pages 项目不会有任何好处。您只是不需要一个主要目的是在服务器上动态生成 HTML 的框架。
• Web API - Razor Pages 主要是一个 UI 生成框架。但是,Razor 页面处理程序可以返回任何类型的内容,包括 JSON。不过,如果您的应用程序主要是基于 Web 的服务,则 Razor Pages 不是正确的工具。您应该考虑改用 MVC API 控制器。应该指出的是,如果您的要求是生成 HTML 以及通过 HTTP 提供服务,那么在同一个项目中混合使用 Razor 页面和 API 控制器是完全可能的(并且很容易的)。
• 从旧版本的 MVC 迁移 – 如果您希望将现有 MVC 应用程序从早期版本的 .NET Framework 迁移到 ASP.NET Core,则移植到 ASP.NET Core MVC 可能更有意义,因为您的许多现有代码无需修改即可重复使用。迁移后,您可以将 Razor Pages 用于迁移的应用程序中的所有以页面为中心的新功能,因为 MVC 控制器和 Razor Pages 可以愉快地位于同一应用程序中。

Razor Pages 是在 Visual Studio 中构建基于页面的 Web 应用程序的默认项目类型,因此,在除上述例外情况之外的所有情况下,都应将 Razor Pages 用于以页面为中心的应用程序,无论其复杂程度如何。

ASP.NET Core 的设计将性能作为一流的功能。该框架经常在备受推崇的 TechEmpower Web 框架性能评级 (https://www.techempower.com/benchmarks) 中名列前茅。因此,如果您需要一个提供 HTML 的高性能应用程序,Razor Pages 有一个很好的基础。

ASP.NET Core 应用程序设计为模块化。也就是说,您只包含应用程序所需的功能。如果您不需要某个功能,则不包括在内。这样做的好处是使已发布的应用程序的占用空间尽可能小。如果限制已部署应用程序的整体大小对您很重要,Razor Pages 也可以勾选该框。

最后,ASP.NET Core 背后的团队一定做对了什么,因为根据 Stack Overflow 的 2020 年开发人员调查,ASP.NET Core 是“最受欢迎”的 Web 开发框架(参见 https://insights.stackoverflow.com/survey/2020#technology-most-loved-dreaded-and-wanted-web-frameworks)。

1.5 使用 Razor Pages

此时,您知道什么是 Razor Pages、它的工作原理以及它可以为您做什么。您现在应该知道它是否适合您的应用程序。如果是,您需要知道从何处获取 Razor Pages 以及可以使用哪些工具来使用框架。下一节将提供这些问题的答案。首先,我们将介绍如何获取 Razor Pages;然后,我们将介绍使用该框架开发 Web 应用程序所需的工具。

1.5.1 如何获得 Razor Pages?

要开始开发 Razor Pages 应用程序,您需要 .NET 软件开发工具包 (SDK)。当您首次安装 Visual Studio(Microsoft 的旗舰软件开发环境)时,将自动包含该 SDK。之后,您可能需要手动安装 SDK 的更新版本。如果您使用的编辑器不包含 SDK,也需要手动安装。SDK 可在 https://dotnet.microsoft.com/download 获取。

版本可用于 Windows、Linux、macOS 和 Docker(图 1.5)。当前版本已明确标记并推荐使用,因为它包含最新的错误修复和其他改进。一个版本也将被标记为长期支持 (LTS) 版本;这可能是也可能不是当前版本。LTS 版本会在较长一段时间内继续接收关键错误修复。当前版本 .NET 6 是 LTS 版本,自其发布日期(2021 年 11 月)起,将继续受支持三年。Microsoft 的目标是使从一个 LTS 版本迁移到下一个 LTS 版本成为一种相对轻松的体验。

图 1.5 SDK 下载页面图

下载页提供对每个 .NET/.NET Core 版本的 SDK 和运行时的访问。SDK 包括运行时和一组用于开发应用程序的工具,包括用于 .NET 的命令行界面 (CLI)。CLI 提供对一系列命令的访问,这些命令使您能够开发、构建、运行和发布 .NET 应用程序。

运行时仅包括运行 .NET 应用程序所需的组件,主要用于部署到不进行开发的计算机上。您可以在同一台计算机上安装多个版本的 SDK 和/或运行时,它们可以和平共存。

1.5.2 选择开发环境

从理论上讲,您可以只使用命令行,再配上 Windows 记事本之类的基本文本编辑器来开发 Razor Pages 应用程序,但现实是,您会希望使用专为 .NET Core 开发设计的工具来减轻大部分繁重工作。这些工具中最强大的是集成开发环境 (IDE),它们包括具有语法突出显示、代码补全、静态代码分析等功能的源代码编辑器,以及用于调试、编译和发布应用程序的工具。IDE 通常支持常见的工作流程,例如创建应用程序和基于现有模板添加各种类型的文件,还常常集成数据库和版本控制系统。

用于 .NET 开发的最流行的 IDE 是 Microsoft 的 Visual Studio。要获得 .NET 6 支持,您需要使用 2022 版本。它有三个版本:Community、Professional 和 Enterprise。Community 版是功能完整的 Visual Studio,与 Professional 版的差别仅在于许可证。按照许可条款(https://visualstudio.microsoft.com/vs/community/)的定义,Community 版对个人和小型公司免费,也可用于学术用途或参与开源项目。Enterprise 版面向大型团队,并相应地定价。所有版本都仅适用于 Windows(图 1.6)。

图 1.6 https://visualstudio.microsoft.com/ 截图,读者可以获取目前提到的所有三个 IDE

有一个适用于 Mac 用户的 Visual Studio 版本,但它不是 Windows 版本的直接移植。它是 Xamarin Studios 的改编版本,主要是移动应用程序开发环境。但是,它支持 Razor Pages 开发,并且提供免费的社区版。

Visual Studio Code (VS Code) 是一种流行的免费跨平台代码编辑器(与开发环境相反)。大量且不断增长的扩展可用,使 VS Code 中的 .NET Core 开发变得非常容易,包括 C# 语言集成、调试和版本控制集成。VS Code 不包含 Visual Studio 提供的用于处理 Razor Pages 的相同类型的工具集成,但它确实具有集成终端,可轻松访问 .NET CLI,并且出色的 OmniSharp 扩展为 VS Code 中的 C# 开发提供了出色的支持。本书将讨论如何使用 VS Code 终端执行 CLI 命令;您可以从 https://code.visualstudio.com/ 下载 VS Code。

如果您想在 Mac 或 Linux 系统上进行开发,VS Code 是一个不错的选择。或者,JetBrains 的 Rider 是一个跨平台的 .NET IDE,提供 30 天免费试用。

在本书中,我将向您展示如何使用 Visual Studio Community Edition 和 VS Code 开发 Razor Pages 应用程序,但无论您选择哪个平台,都可以跟随这些示例进行操作。

1.5.3 选择数据库系统

Web 应用程序需要一种方法来持久保存数据。ASP.NET Core 不会对您的选项施加任何技术限制。如果需要,可以将数据存储为一系列文本文件,但最常用的数据存储是某种关系数据库。您还需要一种方法来在应用程序和数据库之间建立连接、执行数据库命令以及访问任何生成的数据。.NET 6 包括一种称为 ADO.NET 的低级数据访问技术。它以类似于内存中数据库表或视图的结构向应用程序公开数据。如果要访问数据片段,则必须使用索引器和转换或强制转换:

var myInt = Convert.ToInt32(dataTable.Rows[1][4]);

这是一种丑陋且容易出错的开发方式。只要有人更改了上述 C# 语句所依赖的 SQL 语句中的列顺序,代码就会出错,因为目标位置上取出的值可能无法再转换为 int。如今,开发人员通常更喜欢把数据作为对象(例如 Book 类或 Author 类)来处理,并使用对象关系映射 (ORM) 工具来管理数据库和应用程序之间的通信。ORM 还负责(除其他事项外)把数据库查询返回的数据映射到指定的对象或对象集合。
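作为对比,下面给出一个示意性的片段(其中的 Book 类和 db 上下文都是假设的示例,具体 API 以 EF Core 官方文档为准),说明 ORM 如何把同样的数据访问变成强类型的对象操作:

```csharp
// 假设的实体类:ORM 把查询结果直接映射为这样的对象
public class Book
{
    public int Id { get; set; }
    public string Title { get; set; } = "";
    public int PageCount { get; set; }
}

// EF Core 风格的查询(db 为假设的 DbContext 实例):
// var book = await db.Books.SingleAsync(b => b.Id == 1);
// int pages = book.PageCount;   // 强类型访问,列顺序的变化不再破坏代码
```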

.NET 开发人员可以使用多种 ORM 工具。他们中的大多数由第三方拥有和管理。我为本书选择的 ORM 是 Entity Framework Core (EF Core)。我将使用这个 ORM,因为它是一种 Microsoft 技术,是 .NET 的一部分。图 1.7 是图 1.3 的更新版本,显示了 EF Core 在 .NET 堆栈中的位置。

图 1.7 Entity Framework Core 是一个可选组件,但它可用于支持在 .NET 6 上构建的各种应用程序类型(包括 ASP.NET、桌面、移动、云和游戏)中的数据访问。

提供程序(provider)是处理 C# 应用程序代码与数据存储本身之间通信的组件。像 EF Core 这样的 ORM 的真正好处之一是,您不需要用数据存储特定的语言编写命令:您可以使用与数据存储无关的 C# 来表达数据命令。每个提供程序负责(除许多其他事项外)生成所选数据存储支持的领域特定语言 (DSL)。在大多数情况下,这种 DSL 就是 SQL。

使用 EF Core 将提高您的工作效率,但也会受提供程序的可用性和/或成本的限制,从而缩小您在数据库系统方面的选择范围。话虽如此,EF Core 支持大量数据库系统,尤其是最流行的那些。要检查是否有适用于您首选数据库系统的提供程序,请参阅官方文档:https://docs.microsoft.com/en-us/ef/core/providers/

当您使用 EF Core 等 ORM 时,数据库系统之间的差异或多或少完全隐藏在应用程序本身之外。您为一个数据库系统编写的数据存储和检索的 C# 代码,在另一个系统上的工作方式完全相同。系统之间唯一真正的区别是初始配置。在本书中,我选择了两个数据库系统:面向仅使用 Windows 的开发人员的 SQL Server 的一个版本,以及面向希望使用其他操作系统的读者的 SQLite。我会在它们之间出现罕见差异时加以强调。

在 Microsoft 世界中工作,您比其他任何选择都更有可能遇到他们的旗舰关系数据库系统 SQL Server。安装 Visual Studio 时,可以很容易地安装 SQL Server 的一个版本 LocalDB。它不是为生产用途而设计的,并且仅包含运行 SQL Server 数据库所需的最小文件集。因此,我选择了 LocalDB 作为想要使用 Windows 的读者使用的版本。

您使用 LocalDB 创建的任何数据库也可以与完整版的 SQL Server 一起使用。Visual Studio 包含一项称为“服务器资源管理器”的功能,该功能使您能够从 IDE 中连接到数据库并执行基本的数据库管理任务,例如修改表和运行查询。或者,您可以免费下载和安装 SQL Server Management Studio (SSMS) (https://learn.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms?view=sql-server-ver16)。SSMS 是一个功能更强大的工具,包括用于管理数据库、分析查询性能和管理 SQL Server 安装的功能。

有大量跨平台数据库可供选择,包括免费且非常流行的 MySQL。但是,出于易用性的考虑,我为希望在非 Windows 环境中开发的读者选择了基于文件的 SQLite 数据库系统。它已经预装在大多数版本的 Linux 和 macOS 上。话虽如此,如果您在 Windows 上开发,也完全可以使用 SQLite。对于较小的网站来说,它是一个相当不错的选择:它与其余应用程序文件一起部署,从而可能简化部署并降低托管成本。在管理 SQLite 数据库方面,我使用免费、跨平台的 DB Browser for SQLite,可在 https://sqlitebrowser.org/ 获取。

无论您选择哪种数据库系统,您现在都应该准备好继续开发 Razor Pages 应用程序了。您了解了 Razor Pages 在 Web 开发生态中的作用,以及使其成为绝佳选择的关键特性。它现代、快速,不会妨碍开发过程。在下一章中,您将立即生成第一个可运行的 Razor Pages 应用程序,并学习构建更复杂应用程序的基础知识。

总结

Razor Pages 是一个以页面为中心的框架,用于开发动态 Web 应用程序。
Razor Pages 是一项 Microsoft 技术。
Razor Pages 是 ASP.NET Core 的一部分,而 ASP.NET Core 又是 .NET 6 的一部分。
Razor Pages 是跨平台的。
Razor Pages 是开源且免费的。
Razor Pages 建立在 ASP.NET Core MVC 的最佳部分之上。
Razor Pages 是使用页面控制器模式的 MVC 实现。
Razor Pages 主要关注在 Web 服务器上生成 HTML。
使用 C# 对 Razor Pages 应用程序进行编程。
HTML 是基于 Razor 语法(HTML 和 C# 的混合)从模板动态生成的。
Razor Pages 适用于数据库。

后端架构演进介绍

后端架构演进介绍

10 年前,你只需要知道 GoF 模式;现在则需要掌握:N 层、DDD、六边形(Hexagon)、洋葱(Onion)、清洁(Clean)架构。

想想过去的美好时光:根本没有所谓的架构,那些日子多么幸福啊,只要了解 GoF 模式,你就能称自己为架构师。然而,计算机变得更加强大,用户的需求不断增加,应用程序的复杂性也随之上升。开发人员首先解决的,是将 UI 与业务逻辑分离。根据 UI 框架的不同,诞生了各种类似 MVC 的模式:

这在一段时间内有所帮助,但效果并不那么明显。如果你来自 C# 社区,可能会错误地认为那些图中名为Model的黄色框只是 DTO。这一切都是因为微软。这张图让我们对他们的 ASP MVC 框架感到困惑。事实上,这里的Model代表领域模型,也称为业务逻辑,这在任何应用程序中都相当关键。

你能猜到上面这三个组件中哪一个造成的问题最多吗?视图只是简单的图像和按钮,控制器充当中间人,而所有复杂性都集中在模型中。

那是一个 GoF 模式根本不够用的时期。因此必须出现新的想法。我们如何处理复杂性?

分而治之。我们已经使用 MVC 做到了,所以让我们再做一次。

2002 — N 层

理想的架构并不是凭空出现的。与所有事情一样,它在尝试和错误中走自己的路。

Jesu……咳,Martin Fowler 是软件开发架构的先驱,并在此后的十年里影响了一代又一代的开发人员:

《企业应用架构模式》描述了 N 层架构。想法很简单:将所有相关代码组合到各个层中,并按顺序逐层调用。

然而,事情不止于此。Martin Fowler 知道不一致的危害很大。因此,为了防止我们朝自己的脚开枪,他试图给我们一些约束作为指导:

• 您可以按照您想要的方式命名图层
• 您可以根据需要拥有任意数量的层
• 你可以在中间添加层
• 同一层中可以有多个组件
• 只需确保各层之间存在清晰的层次结构,并按顺序逐层引用

它不仅帮助开发人员消除代码重复,而且最终帮助他们构建代码。尽管这些规则非常灵活,但实际上,3 层对于大多数项目来说已经足够了。

• 用户界面(UI)——负责与用户交互。
• 业务逻辑层 (BLL) — 表示业务概念。它规定了您的应用程序正在执行的操作,并使其与其他应用程序相比如此独特。
• 数据访问层 (DAL) — 将数据保留在内存中并保持应用程序的状态

对业务逻辑和 UI 进行了明确的分离。事实证明,数据库与业务规则一样重要,因此它值得拥有自己的层。实际上,所有外部技术也可以进入最后一层。

如果您想知道这些彩色矩形和箭头对您意味着什么,请不要担心,这很简单。这些层只是解决方案中的项目,箭头表示这些层之间的依赖关系。

这种分离不一定是项目的物理分离,而可以是文件夹的逻辑分离。您还可以结合使用这两种方法。使用最适合您的。

文件夹和项目之间的区别很大。项目实际上允许您控制依赖项。对于文件夹,您甚至可能不会注意到,当一个层开始使用另一层的组件时。另一方面,项目太多,代码变得更加脆弱且难以维护。

请记住,这里没有严格的规则。亲自尝试一下,看看什么最适合您,这是可靠性和复杂性之间的权衡。我的建议是:除非确实需要,不要创建太多项目,每层一个项目就足够了。每一层通过其 API 调用下面的一层,该 API 通常以接口(interface)的形式定义。每个类上的访问修饰符与层的划分同样重要:

现在这对你来说似乎是显而易见的,但这只是因为你并没有经历过真正艰难的时期。它总是很容易使用,但很难发明。
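上面说的"每一层通过接口调用下一层",可以用一个示意性的 C# 片段来表达(IBookRepository、BookService 均为本文假设的示例类型):

```csharp
// DAL 公开的 API:一个接口,隐藏具体的数据访问细节
public interface IBookRepository
{
    string? FindTitleById(int id);
}

// BLL 只依赖这个接口,而不依赖 DAL 的具体实现类
public class BookService
{
    private readonly IBookRepository _books;

    public BookService(IBookRepository books) => _books = books;

    public string GetTitle(int id) =>
        _books.FindTitleById(id) ?? "(not found)";
}
```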

2003 — 领域驱动设计

《领域驱动设计:解决软件核心的复杂性》,这让世界上至少有一个马丁感到非常难过。

Evans 同意 Fowler 的想法,即项目依赖关系应该指向同一个方向。不过他也提到,只要不违反依赖方向规则,低层模块调用上层模块也是可以的,这可以通过回调、观察者模式等来实现。

他还发现控制器里有太多逻辑,因此把这些逻辑移到了另一个称为应用(Application)的层。用例的雏形开始出现,但尚未完全成形。而埃文斯所做的最重要的事情,是宣称"去它的数据库,业务逻辑更重要"。他这么说了,然后什么也没做。不过,从架构的角度来看,他确实没有做太大的改变。

在他的架构中,定义了下一层:

• 表示层——负责与用户交互。
• 应用层——协调任务并将工作委托给域对象。
• 领域层——代表业务概念。它规定了您的应用程序正在执行的操作,并使其与其他应用程序相比如此独特。
• 基础设施层——将数据保存在内存中并保持应用程序的状态

你可以看到,他做了一些重命名。

用户界面意味着您有用户,但情况并非总是如此。有时是用户的GUI(图形用户界面),有时是开发人员的CLI(命令行界面),更多时候是程序的API(应用程序编程接口)。表示层只是一个更通用和合适的名称。

业务逻辑对于一些开发人员来说是令人困惑的,特别是对于那些根本不做业务的人来说,因此引入了一个新名称——Domain领域层。

数据库不是我们使用的唯一外部工具,因此所有电子邮件发送器、事件总线、SQL 和其他都移至基础设施。

基本上就是这样。这里有一些重命名。在那里加上一个新层。我们为该领域付出了很多努力。但它是具有相同依赖关系的相同架构。如果他知道依赖倒置原则就好了。

2005 年 — Hexagon(端口和适配器)

以前,每个模块必须引用序列中的下一个模块。随着依赖倒置(DIP/IoC)的发现,一切都改变了。这对软件开发人员来说是一个难以置信的机会:我们终于学会了控制依赖的方向,让它们按我们喜欢的方式指向!这意味着业务逻辑不再引用数据访问。

将这一潜力变为现实的人是阿利斯泰尔·科伯恩(Alistair Cockburn)。那家伙很嗨,画了一个六边形,试图召唤撒旦,等等。我不需要告诉你,你自己更清楚摇滚派对是怎么开的。这里没什么特别的:有一天你抽了点什么,第二天早上醒来,宿醉很厉害,却发现自己意外地发明了一个新架构。

阿利斯泰尔厌倦了矩形,所以他画了一个六边形,为所有东西想出了两个名字,试图让它显得神秘。但别害怕,我的开发者朋友。事实上,这种架构并不比 N 层架构复杂多少:

阿利斯泰尔让埃文斯梦想成真。现在,领域已成为系统的核心组成部分,不仅在言语上,而且在行动上。它不引用任何其他项目。

为了强调它确实是心脏,Business Logic 更名为Core。

基础设施模块分为两半——抽象(接口)和实现。抽象成为业务逻辑的一部分,并被重命名为端口Port。实施停留在基础设施层。现在它们被称为适配器Adapter。

实践证明,UI 和数据库同属外部技术这一层,所以也遭遇了同样的命运。

在您的业务逻辑中拥有基础设施的接口,可以使域具有自主性和无依赖性。

因此,业务逻辑可以在任何环境中使用任何工具工作。您想更改数据库吗?只需更改实现,实现所需的适配器,并将其“插入”到可用端口即可。
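"端口与适配器"的这种可替换性可以用下面的示意代码表达(IOrderRepository、Order 等名称为假设示例;最后的注册语法以 ASP.NET Core 的依赖注入为例):

```csharp
// 端口:定义在 Core(业务逻辑)项目中
public record Order(int Id);

public interface IOrderRepository
{
    void Save(Order order);
}

// 适配器:定义在 Infrastructure 项目中,可以互相替换
public class SqlOrderRepository : IOrderRepository
{
    public void Save(Order order) { /* 写入 SQL 数据库 */ }
}

public class InMemoryOrderRepository : IOrderRepository
{
    public void Save(Order order) { /* 保存在内存中,便于测试 */ }
}

// 在组合根中把所需的适配器"插入"端口:
// services.AddScoped<IOrderRepository, SqlOrderRepository>();
```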

任何适配器(数据库、电子邮件发送器、用户界面)的更改都不会影响业务逻辑。接口保持不变。

每个组件都可以单独部署。如果更改数据访问,则只需重建数据访问。如果更改 UI,则仅更改 UI。

由于模块可以单独部署,也就意味着可以单独开发。

调用我们系统的适配器称为主适配器(驱动方,driving)。被我们系统调用的适配器称为次适配器(被驱动方,driven)。

就解决方案结构而言,这些最适合我:

同样,文件夹与项目是您应该自己决定的。

只需遵循参考文献并确保它们不会交叉到不应该交叉的地方:

2008 — 洋葱架构

杰弗里·巴勒莫(Jeffrey Palermo)。这是一个充满悲伤和黑暗的故事:一个男孩天真的童年被对洋葱的残酷凝视所毁。随着他的成长,一团炽热的仇恨在他内心熊熊燃烧,伴随着他有朝一日必将实现的复仇誓言:

相信我,他永远信守了自己的诺言。他的小洋葱让全世界数以百万计的开发者一边哭一边跑到妈妈的怀里。

这种架构是在端口和适配器之上的进一步增强。它仍然依靠依赖倒置,仍然通过抽象和实现来分割代码,端口也仍然是业务逻辑的一部分。只是这一次,巴勒莫把埃文斯模式中的应用层也加了进来,应用层同样可以包含一些端口。

这种架构的最大挑战是模块之间的依赖关系,它导致了如此多的混乱。

然而,规则很简单:任何外层只能且仅依赖于内层。

域位于最中间。它内部没有内层,因此它不应该依赖于任何其他层。

应用程序仅包装域,因此这正是它应该具有的唯一依赖项。

基础设施层和表示层位于同一级别,它们不能相互依赖,但可以依赖于应用程序和域,其中定义了所有需要的接口。

您还可以看到它具有 DDD 架构中的所有模块,但处理方式不同。

这实际上是一件大事!这里的关键是,中间的组件很少修改,而边缘的组件经常更改。应用程序或任何其他层的更改不会影响域,只会影响依赖层。域发生变化的唯一原因是业务逻辑发生变化,而这种情况无论如何都会影响整个系统。

理论看起来就是这样。实际上,您的组合根(composition root,即注册所有依赖项并把各模块组合在一起的 Main() 函数)将是表示层(ASP、WPF、CLI)的一部分,因此该图将呈现以下外观:

是不是看起来很眼熟?它就是 N 层架构,只是组件的顺序不同。

无论它看起来是六边形、端口还是洋葱,您的最终目标都应该是让依赖关系构成一个有向无环图或树。

2012 — 清洁(Clean)架构

有一个叫鲍勃叔叔的人,

他是工作中最干净的程序员,

凭借他敏捷的动作和架构,

他会让你的代码焕然一新,

他看到了围绕架构的所有炒作,并决定破坏这个聚会。马丁知道任何开发人员的主要秘密,所以他甚至不想隐藏它。只是厚颜无耻地窃取别人的想法并称其为自己的。

开个玩笑,现在没什么原创的想法了,大家互相抄袭

我们可怜的领域层再次被重命名,现在叫实体(Entities)。然而不止于此:这意味着您拥有的不再是领域服务加贫血模型,而是兼具数据和行为的充血类。

存储库和其他端口的接口从领域层移至应用层,应用层也得到了一个更合适的名称:用例(Use Cases)。

表示层和基础设施层保持不变。然而,Martin 还在上面添加了一层额外的层,其中包括框架、DLL 和其他外部依赖项。这并不一定意味着您的数据库将引用实体,它只是阻止您从内层引用那些外部工具。

再次强调,没有严格的规则。您可以根据需要在任何级别添加任意数量的层。因此,如果您想为域服务定义一个层,则可以。

马丁还在架构大图附近画了一个小图。

它显示用户通过触发控制器的端点与系统进行通信,该端点调用用例,然后通过演示器返回数据(黑线)。用例可以通过接口(绿线)调用任何类似的端口。而实际实现是外层的一部分(橙色线)。

它试图强调执行流程(虚线)并不总是对应于依赖方向(直线),这就是依赖倒置原则。

基本上,它再次强调了控制反转的用法。当我们讨论端口和适配器时,您已经看到了这一点。

通常在 ASP 中我们没有单独的 Presenter 组件。这也是由控制器完成的。因此整个图可以用如下代码表示:

class OrderController : ControllerBase, IInputPort, IOutputPort
{
    [HttpGet]
    public IActionResult Get(int id)
    {
        _getOrderUseCase.Execute(id);
        return DisplayOutputPortResult();
    }
}

其他形式的隔离

所有这些架构的目标都是通过划分职责来将一个代码与另一个代码隔离。然而,还有其他形式的隔离:垂直切片、有界上下文、模块、微服务等等。这里的目标是按功能拆分代码。

有些人不认为它们是"真正的"架构方法,而有些人则认为是。这由你自己决定。最终,这些模块演化到一定规模后,其内部仍然会使用上面的任何一种架构风格,甚至是这些风格的组合:

结论

在本文中,我们讨论了 N 层、DDD、Hexagon、Onion 和 Clean 架构。这些并不是唯一存在的架构。然而,所描述的是最著名的。您可能还听说过 BCE、DCI 等。

尽管细节上存在细微差别,但这些架构几乎都是相同的。它们都服务于同一个目的:划分职责。它们都通过把代码拆分到不同的层来实现这一点。区别仅在于定义了哪些组件,以及这些层之间存在哪些依赖关系。

https://www.jdon.com/Backend-Architecture.html

Mastering Minimal APIs in ASP.NET Core

Mastering Minimal APIs in ASP.NET Core
Copyright © 2022 Packt Publishing

In memory of my mother and father, Giovanna and Francesco, for their sacrifices and for supporting me in studying and facing new challenges every day.
为了纪念我的父母 Giovanna 和 Francesco,感谢他们的牺牲,以及支持我学习和每天面对新的挑战。
– 安德里亚·托萨托
– Andrea Tosato

To my family, friends, and colleagues, who have always believed in me during this journey.
– Marco Minerva
感谢我的家人、朋友和同事,他们在这段旅程中一直相信我。
– 马可·密涅瓦

In memory of my beloved mom, and to my wife, Francesca, for her sacrifices and understanding.
Last but not least, to my son, Leonardo. The greatest success in my life.
– Emanuele Bartolesi
为了纪念我敬爱的妈妈,以及我的妻子弗朗西斯卡,感谢她的牺牲和理解。
最后但并非最不重要的一点是,感谢我的儿子莱昂纳多。我一生中最大的成功。
– 埃马努埃莱·巴托莱西

Contributors
贡献者

About the authors
作者简介

Andrea Tosato is a full stack software engineer and architect of .NET applications. Andrea has successfully developed .NET applications in various industries, sometimes facing complex technological challenges. He deals with desktop, web, and mobile development but with the arrival of the cloud, Azure has become his passion. In 2017, he co-founded Cloudgen Verona (a .NET community based in Verona, Italy) with his friend, Marco Zamana. In 2019, he was named Microsoft MVP for the first time in the Azure category. Andrea graduated from the University of Pavia with a degree in computer engineering in 2008 and successfully completed his master’s degree, also in computer engineering, in Modena in 2011. Andrea was born in 1986 in Verona, Italy, where he currently works as a remote worker. You can find Andrea on Twitter.
Andrea Tosato 是一名全栈软件工程师和 .NET 应用程序架构师。Andrea 在各个行业成功开发了 .NET 应用程序,有时面临复杂的技术挑战。他处理桌面、Web 和移动开发,但随着云的到来,Azure 已成为他的热情所在。2017 年,他与朋友 Marco Zamana 共同创立了 Cloudgen Verona(一个位于意大利维罗纳的 .NET 社区)。2019 年,他首次被评为 Azure 类别的 Microsoft MVP。Andrea 于 2008 年毕业于帕维亚大学,获得计算机工程学位,并于 2011 年在摩德纳成功完成了计算机工程硕士学位。Andrea 于 1986 年出生于意大利维罗纳,目前在那里担任远程工作者。你可以在 Twitter 上找到 Andrea。

Marco Minerva has been a computer enthusiast since elementary school when he received an old Commodore VIC-20 as a gift. He began developing with GW-BASIC. After some experience with Visual Basic, he has been using .NET since its first introduction. He got his master’s degree in information technology in 2006. Today, he lives in Taggia, Italy, where he works as a freelance consultant and is involved in designing and developing solutions for the Microsoft ecosystem, building applications for desktop, mobile, and web. His expertise is in backend development as a software architect. He runs training courses, is a speaker at technical events, writes articles for magazines, and regularly makes live streams about coding on Twitch. He has been a Microsoft MVP since 2013. You can find Marco on Twitter.
Marco Minerva 从小学开始就是一个计算机爱好者,当时他收到了一台旧的 Commodore VIC-20 作为礼物。他开始使用 GW-BASIC 进行开发。在具备一些 Visual Basic 经验后,他自首次引入 .NET 以来就一直在使用 .NET。他于 2006 年获得信息技术硕士学位。如今,他住在意大利塔吉亚,在那里他是一名自由顾问,参与为 Microsoft 生态系统设计和开发解决方案,构建桌面、移动和 Web 应用程序。他的专长是作为软件架构师进行后端开发。他举办培训课程,在技术活动中发表演讲,为杂志撰写文章,并定期在 Twitch 上制作有关编码的直播。自 2013 年以来,他一直是 Microsoft MVP。您可以在 Twitter 上找到 Marco。

Emanuele Bartolesi is a Microsoft 365 architect who is passionate about frontend technologies and everything related to the cloud, especially Microsoft Azure. He currently lives in Zurich and actively participates in local and international community activities and events. Emanuele shares his love of technology through his blog. He has also become a Twitch affiliate as a live coder, and you can find him as kasuken on Twitch to write some code with him. Emanuele has been a Microsoft MVP in the developer technologies category since 2014, and a GitHub Star since 2022. You can find Emanuele on Twitter.
Emanuele Bartolesi 是一名 Microsoft 365 架构师,他对前端技术以及与云相关的一切(尤其是 Microsoft Azure)充满热情。他目前居住在苏黎世,积极参与当地和国际社区活动。Emanuele 通过他的博客分享了他对技术的热爱。他还作为实时编码员成为 Twitch 的附属机构,您可以在 Twitch 上找到他作为 kasuken 与他一起编写一些代码。Emanuele 自 2014 年以来一直是开发人员技术类别的 Microsoft MVP,自 2022 年以来一直是 GitHub Star。您可以在 Twitter 上找到 Emanuele。

About the reviewers
关于审稿人

Marco Parenzan is a senior solution architect for Smart Factory, IoT, and Azure-based solutions at beanTech, a tech company in Italy. He has been a Microsoft Azure MVP since 2014 and has been playing with the cloud since 2010. He speaks about Azure and .NET development at major community events in Italy. He is a community lead for 1nn0va, a recognized Microsoft-oriented community in Pordenone, Italy, where he organizes local community events. He wrote a book on Azure for Packt Publishing in 2016. He loves playing with his Commodore 64 and trying to write small retro games in .NET or JavaScript.
Marco Parenzan 是意大利科技公司 beanTech 的智能工厂、IoT 和基于 Azure 的解决方案的高级解决方案架构师。自 2014 年以来,他一直是 Microsoft Azure MVP,自 2010 年以来一直在玩云。他在意大利的主要社区活动中谈论 Azure 和 .NET 开发。他是 1nn0va 的社区负责人,这是意大利波代诺内一个公认的面向 Microsoft 的社区,他在那里组织当地社区活动。他在 2016 年为 Packt Publishing 撰写了一本关于 Azure 的书。他喜欢玩他的 Commodore 64,并尝试用 .NET 或 JavaScript 编写小型复古游戏。

Marco Zamana lives in Verona in the magnificent hills of Valpolicella. He has a background as a software developer and architect. He was Microsoft’s Most Valuable Professional for 3 years in the artificial intelligence category. He currently works as a cloud solution architect in engineering at Microsoft. He is the co-founder of Cloudgen Verona, a Veronese association that discusses topics related to the cloud and, above all, Azure.
Marco Zamana 住在维罗纳 Valpolicella 壮丽的山丘上。他拥有软件开发人员和架构师的背景。他在人工智能类别中连续 3 年被评为 Microsoft 最有价值专家。他目前在 Microsoft 担任工程部门的云解决方案架构师。他是 Cloudgen Verona 的联合创始人,这是一个 Veronese 协会,讨论与云相关的主题,尤其是 Azure。

Ashirwad Satapathi works as an associate consultant at Microsoft and has expertise in building scalable applications with ASP.NET Core and Microsoft Azure. He is a published author and an active blogger in the C# Corner developer community. He was awarded the title of C# Corner Most Valuable Professional (MVP) in September 2020 and September 2021 for his contributions to the developer community. He is also a member of the Outreach Committee of the .NET Foundation.
Ashirwad Satapathi 是 Microsoft 的助理顾问,拥有使用 ASP.NET Core 和 Microsoft Azure 构建可缩放应用程序的专业知识。他是 C# Corner 开发人员社区的出版作者和活跃的博客作者。他于 2020 年 9 月和 2021 年 9 月被授予 C# Corner 最有价值专家 (MVP) 称号,以表彰他对开发者社区的贡献。他还是 .NET Foundation 外展委员会的成员。

Table of Contents
目录

Preface
前言

Part 1: Introduction
第 1 部分:简介

1 Introduction to Minimal APIs
最小 API 简介

2 Exploring Minimal APIs and Their Advantages
探索最小 API 及其优势

3 Working with Minimal APIs
使用最少的 API

Part 2: What’s New in .NET 6?
第 2 部分:.NET 6 中的新增功能

4 Dependency Injection in a Minimal API Project
最小 API 项目中的依赖关系注入

5 Using Logging to Identify Errors
使用日志记录识别错误

6 Exploring Validation and Mapping
探索验证和映射

7 Integration with the Data Access Layer
与 Data Access Layer 集成

Part 3: Advanced Development and Microservices Concepts
第 3 部分:高级开发和微服务概念

8 Adding Authentication and Authorization
添加身份验证和授权

9 Leveraging Globalization and Localization
利用全球化和本地化

10 Evaluating and Benchmarking the Performance of Minimal APIs
评估最小 API 的性能并对其进行基准测试

Index
索引

Other Books You May Enjoy
您可能喜欢的其他书籍

Preface

前言

The simplification of code is every developer’s dream. Minimal APIs are a new feature in .NET 6 that aims to simplify code. They are used for building APIs with minimal dependencies in ASP.NET Core. Minimal APIs simplify API development through the use of more compact code syntax.
简化代码是每个开发人员的梦想。最小 API 是 .NET 6 中的一项新功能,旨在简化代码。它们用于在 ASP.NET Core 中构建具有最小依赖项的 API。最少的 API 通过使用更紧凑的代码语法简化了 API 开发。

Developers using minimal APIs will be able to take advantage of this syntax on some occasions to work more quickly with less code and fewer files to maintain. Here, you will be introduced to the main new features of .NET 6 and understand the basic themes of minimal APIs, which weren’t available in .NET 5 and previous versions. You’ll see how to enable Swagger for API documentation, along with CORS, and how to handle application errors. You will learn to structure your code better with Microsoft’s new .NET framework called Dependency Injection. Finally, you will see the performance and benchmarking improvements in .NET 6 that are introduced with minimal APIs.
使用最少 API 的开发人员将能够在某些情况下利用此语法,以更少的代码和更少的文件更快地工作。在这里,将向您介绍 .NET 6 的主要新功能,并了解最小 API 的基本主题,这些主题在 .NET 5 和以前的版本中不可用。您将了解如何为 API 文档以及 CORS 启用 Swagger,以及如何处理应用程序错误。您将学习如何使用 Microsoft 的新 .NET 框架(称为 Dependency Injection)更好地构建代码。最后,您将看到 .NET 6 中的性能和基准测试改进,这些改进是通过最少的 API 引入的。

By the end of this book, you will be able to leverage minimal APIs and understand in what way they are related to the classic development of web APIs.
在本书结束时,您将能够利用最少的 API,并了解它们与 Web API 的经典开发有何关系。

Who this book is for
这本书是给谁的

This book is for .NET developers who want to build .NET and .NET Core APIs and want to study the new features of .NET 6. Basic knowledge of C#, .NET, Visual Studio, and REST APIs is assumed.
本书适用于想要构建 .NET 和 .NET Core API 并希望学习 .NET 6 新功能的 .NET 开发人员。假定您具备 C#、.NET、Visual Studio 和 REST API 的基本知识。

What this book covers
本书涵盖的内容

Chapter 1, Introduction to Minimal APIs, introduces you to the motivations behind introducing minimal APIs within .NET 6. We will explain the main new features of .NET 6 and the work that the .NET team is doing with this latest version. You will come to understand the reasons why we decided to write the book.
第 1 章 最小 API 简介,介绍了在 .NET 6 中引入最小 API 的动机。我们将解释 .NET 6 的主要新功能以及 .NET 团队正在使用此最新版本所做的工作。您将了解我们决定写这本书的原因。

Chapter 2, Exploring Minimal APIs and Their Advantages, introduces you to the basic ways in which minimal APIs differ from .NET 5 and all previous versions. We will explore in detail routing and serialization with System.Text.JSON. Finally, we will end with some concepts related to writing our first REST API.
第 2 章“探索最小 API 及其优势”介绍了最小 API 与 .NET 5 和所有以前版本的基本区别。我们将详细探讨 System.Text.JSON 的路由和序列化。最后,我们将介绍与编写第一个 REST API 相关的一些概念。

Chapter 3, Working with Minimal APIs, introduces you to the advanced ways in which minimal APIs differ from .NET 5 and all previous versions. We will explore in detail how to enable Swagger for API documentation. We will see how to enable CORS and how to handle application errors.
第 3 章 使用最小 API 介绍了最小 API 与 .NET 5 和所有以前版本的不同之处。我们将详细探讨如何为 API 文档启用 Swagger。我们将了解如何启用 CORS 以及如何处理应用程序错误。

Chapter 4, Dependency Injection in a Minimal API Project, introduces you to Dependency Injection and goes over how to use it with a minimal API.
第 4 章 最小 API 项目中的依赖注入 介绍了依赖注入,并介绍了如何将其与最小 API 一起使用。

Chapter 5, Using Logging to Identify Errors, teaches you about the logging tools that .NET provides. A logger is one of the tools that developers have to use to debug an application or understand its failure in production. The logging library has been built into ASP.NET with several features enabled by design.
第 5 章 使用日志记录识别错误,介绍 .NET 提供的日志记录工具。记录器是开发人员用来调试应用程序或了解其在生产中的故障的工具之一。日志记录库已内置于 ASP.NET 中,并通过设计启用了多项功能。

Chapter 6, Exploring Validation and Mapping, will teach you how to validate incoming data to an API and how to return any errors or messages. Once the data is validated, it can be mapped to a model that will then be used to process the request.
第 6 章 探索验证和映射 将教您如何验证 API 的传入数据以及如何返回任何错误或消息。验证数据后,可以将其映射到模型,然后该模型将用于处理请求。

Chapter 7, Integration with the Data Access Layer, helps you understand the best practices for accessing and using data in minimal APIs.
第 7 章 与数据访问层集成 可帮助您了解在最小 API 中访问和使用数据的最佳实践。

Chapter 8, Adding Authentication and Authorization, looks at how to write an authentication and authorization system by leveraging our own database or a cloud service such as Azure Active Directory.
第 8 章 添加身份验证和授权,介绍如何利用我们自己的数据库或云服务(如 Azure Active Directory)编写身份验证和授权系统。

Chapter 9, Leveraging Globalization and Localization, shows you how to leverage the translation system in a minimal API project and provide errors in the same language of the client.
第 9 章 利用全球化和本地化 向您展示如何在最小的 API 项目中利用翻译系统,并以客户端的相同语言提供错误。

Chapter 10, Evaluating and Benchmarking the Performance of Minimal APIs, shows the improvements in .NET 6 and those that will be introduced with the minimal APIs.
第 10 章 评估最小 API 的性能并对其进行基准测试,介绍了 .NET 6 中的改进以及最小 API 将引入的改进。

To get the most out of this book
充分利用本书

You will need Visual Studio 2022 with ASP.NET and a web development workload or Visual Studio Code and K6 installed on your computer.
您的计算机上需要带有 ASP.NET 和 Web 开发工作负载的 Visual Studio 2022 或 Visual Studio Code 和 K6。

All code examples have been tested using Visual Studio 2022 and Visual Studio Code on the Windows OS.
所有代码示例均已在 Windows 操作系统上使用 Visual Studio 2022 和 Visual Studio Code 进行了测试。

If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book’s GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.
如果您使用的是本书的数字版本,我们建议您自己输入代码或从本书的 GitHub 存储库访问代码(下一节中提供了链接)。这样做将帮助您避免与复制和粘贴代码相关的任何潜在错误。

Basic development skills for Microsoft web technology are required to fully understand this book.
要完全理解本书,需要具备 Microsoft Web 技术的基本开发技能。

Download the example code files
下载示例代码文件

You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6. If there’s an update to the code, it will be updated in the GitHub repository.
您可以从 GitHub 下载本书的示例代码文件,网址为 https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6。如果代码有更新,它将在 GitHub 存储库中更新。

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
我们还在 https://github.com/PacktPublishing/ 上提供了来自丰富书籍和视频目录的其他代码包。快去看看吧！

Download the color images
下载彩色图像

We also provide a PDF file that has color images of the screenshots and diagrams used in this book. You can download it here: https://packt.link/GmUNL
我们还提供了一个 PDF 文件,其中包含本书中使用的屏幕截图和图表的彩色图像。您可以在此处下载:https://packt.link/GmUNL

Conventions used
使用的约定

There are a number of text conventions used throughout this book.
本书中使用了许多文本约定。

Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “In minimal APIs, we define the route patterns using the Map methods of the WebApplication object.”
文本中的代码：指示文本中的代码词、数据库表名称、文件夹名称、文件名、文件扩展名、路径名、虚拟 URL、用户输入和 Twitter 句柄。下面是一个示例：“在最小 API 中，我们使用 WebApplication 对象的 Map 方法定义路由模式。”

A block of code is set as follows:
代码块设置如下:

app.MapGet("/hello-get", () => "[GET] Hello World!"); 
app.MapPost("/hello-post", () => "[POST] Hello World!"); 
app.MapPut("/hello-put", () => "[PUT] Hello World!"); 
app.MapDelete("/hello-delete", () => "[DELETE] Hello World!");

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:
当我们希望您注意到代码块的特定部分时,相关行或项目以粗体设置:

if (app.Environment.IsDevelopment()) 
{
    app.UseSwagger(); 
    app.UseSwaggerUI(); 
}

Any command-line input or output is written as follows:
任何命令行输入或输出的编写方式如下:

dotnet new webapi -minimal -o Chapter01

Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: “Open Visual Studio 2022 and from the main screen, click on Create a new project.”
粗体：表示新词、重要字词或您在屏幕上看到的字词。例如，菜单或对话框中的单词以粗体显示。这是一个例子：“打开 Visual Studio 2022，然后在主屏幕上单击创建新项目。”

Tips or important notes
提示或重要说明
Appear like this.
如下所示。

Get in touch
联系我们

Feedback from our readers is always welcome.
我们始终欢迎读者的反馈。

General feedback: If you have questions about any aspect of this book, email us at customercare@packtpub.com and mention the book title in the subject of your message.
一般反馈:如果您对本书的任何方面有任何疑问,请发送电子邮件至 customercare@packtpub.com 并在邮件主题中提及书名。

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.
勘误表: 尽管我们已尽一切努力确保内容的准确性,但错误还是会发生。如果您发现本书中有错误,如果您能向我们报告,我们将不胜感激。请访问 www.packtpub.com/support/errata 并填写表格。

Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at copyright@packt.com with a link to the material.
盗版:如果您在互联网上发现任何形式的非法复制我们的作品,如果您能向我们提供位置地址或网站名称,我们将不胜感激。请通过 copyright@packt.com 与我们联系,并提供材料链接。

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
如果您有兴趣成为作者:如果您擅长某个主题,并且您对写作或为一本书做出贡献感兴趣,请访问 authors.packtpub.com。

Share Your Thoughts
分享您的想法

Once you’ve read Mastering Minimal APIs in ASP.NET Core, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.
阅读了掌握 ASP.NET Core 中的最小 API 后,我们很想听听你的想法!请单击此处直接进入本书的亚马逊评论页面并分享您的反馈。

Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.
您的评论对我们和技术社区都很重要,这将有助于我们确保我们提供卓越的内容质量。

Part 1: Introduction

第 1 部分:简介

In the first part of the book, we want to introduce you to the context of the book. We will explain the basics of minimal APIs and how they work. We want to add, brick by brick, the knowledge needed to take advantage of all the power that minimal APIs can grant us.
在本书的第一部分,我们想向您介绍这本书的背景。我们将解释最小 API 的基础知识及其工作原理。我们希望一砖一瓦地添加所需的知识,以利用最小 API 可以赋予我们的所有功能。

We will cover the following chapters in this part:
我们将在这部分介绍以下章节:

Chapter 1, Introduction to Minimal APIs
第 1 章 最小 API 简介

Chapter 2, Exploring Minimal APIs and Their Advantages
第 2 章 探索最小 API 及其优点

Chapter 3, Working with Minimal APIs
第 3 章 使用最少的 API

1 Introduction to Minimal APIs

1 最小 API 简介

In this chapter of the book, we will introduce some basic themes related to minimal APIs in .NET 6.0, showing how to set up a development environment for .NET 6 and more specifically for developing minimal APIs with ASP.NET Core.
在本书的这一章中,我们将介绍一些与 .NET 6.0 中的最小 API 相关的基本主题,展示如何为 .NET 6 设置开发环境,更具体地说,如何为 ASP.NET Core 开发最小 API。

We will first begin with a brief history of minimal APIs. Then, we will create a new minimal API project with Visual Studio 2022 and Visual Studio Code. At the end, we will take a look at the structure of our project.
首先，我们将从最小 API 的简要历史开始。然后，我们将使用 Visual Studio 2022 和 Visual Studio Code 创建一个新的最小 API 项目。最后，我们将看看我们项目的结构。

By the end of this chapter, you will be able to create a new minimal API project and start to work with this new template for a REST API.
在本章结束时,您将能够创建一个新的最小 API 项目,并开始为 REST API 使用这个新模板。

In this chapter, we will be covering the following topics:
在本章中,我们将介绍以下主题:

• A brief history of the Microsoft Web API
• Creating a new minimal API project
• Looking at the structure of the project

Technical requirements
技术要求

To work with the ASP.NET Core 6 minimal APIs you need to install, first of all, .NET 6 on your development environment.
要使用 ASP.NET Core 6 最小 API,您需要首先在开发环境中安装 .NET 6。

If you have not already installed it, let’s do that now:
如果您还没有安装它,我们现在就安装它:

  1. Navigate to the following link: https://dotnet.microsoft.com.
    导航到以下链接:https://dotnet.microsoft.com

  2. Click on the Download button.
    点击 下载 按钮。

  3. By default, the browser chooses the right operating system for you, but if not, select your operating system at the top of the page.
    默认情况下，浏览器会为您选择合适的操作系统，如果没有，请在页面顶部选择您的操作系统。

  4. Download the LTS version of the .NET 6.0 SDK.
    下载 .NET 6.0 SDK 的 LTS 版本。

  5. Start the installer.
    启动安装程序。

  6. Reboot the machine (this is not mandatory).
    重新启动计算机(这不是强制性的)。

You can see which SDKs are installed on your development machine using the following command in a terminal:
您可以在终端中使用以下命令查看开发计算机上安装了哪些 SDK:

dotnet --list-sdks

Before you start coding, you will need a code editor or an Integrated Development Environment (IDE). You can choose your favorite from the following list:
在开始编码之前,您需要一个代码编辑器或集成开发环境 (IDE)。您可以从以下列表中选择您最喜欢的:

• Visual Studio Code for Windows, Mac, or Linux
• Visual Studio 2022
• Visual Studio 2022 for Mac

In the last few years, Visual Studio Code has become very popular not only in the developer community but also in the Microsoft community. Even if you use Visual Studio 2022 for your day-to-day work, we recommend downloading and installing Visual Studio Code and giving it a try.
在过去的几年里,Visual Studio Code 不仅在开发人员社区中非常流行,而且在 Microsoft 社区中也非常流行。即使您将 Visual Studio 2022 用于日常工作,我们也建议您下载并安装 Visual Studio Code 并试一试。

Let’s download and install Visual Studio Code and some extensions:
让我们下载并安装 Visual Studio Code 和一些扩展:

  1. Navigate to https://code.visualstudio.com.
    导航到 https://code.visualstudio.com

  2. Download the Stable or the Insiders edition.
    下载 Stable 或 Insiders 版本。

  3. Start the installer.
    启动安装程序。

  4. Launch Visual Studio Code.
    启动 Visual Studio Code。

  5. Click on the Extensions icon.
    单击 Extensions 图标。

You will see the C# extension at the top of the list.
您将在列表顶部看到 C# 扩展。

  1. Click on the Install button and wait.
    点击 Install 安装 按钮并等待。

You can install other recommended extensions for developing with C# and ASP.NET Core. If you want to install them, you see our recommendations in the following table:
您可以安装其他推荐的扩展,以便使用 C# 和 ASP.NET Core 进行开发。如果您想安装它们,您可以在下表中看到我们的建议:

Additionally, if you want to proceed with the IDE that’s most widely used by .NET developers, you can download and install Visual Studio 2022.
此外,如果您想继续使用 .NET 开发人员使用最广泛的 IDE,您可以下载并安装 Visual Studio 2022。

If you don’t have a license, check if you can use the Community Edition. There are a few restrictions on getting a license, but you can use it if you are a student, have open source projects, or want to use it as an individual. Here’s how to download and install Visual Studio 2022:
如果您没有许可证,请检查是否可以使用 Community Edition。获得许可证有一些限制,但如果您是学生、拥有开源项目或想以个人身份使用它,则可以使用它。以下是下载和安装 Visual Studio 2022 的方法:

  1. Navigate to https://visualstudio.microsoft.com/downloads/.
    导航到 https://visualstudio.microsoft.com/downloads/

  2. Select Visual Studio 2022 version 17.0 or later and download it.
    选择 Visual Studio 2022 版本 17.0 或更高版本并下载它。

  3. Start the installer.
    启动安装程序。

  4. On the Workloads tab, select the following:
    在 Workloads (工作负载) 选项卡上,选择以下选项:

• ASP.NET and web development
• Azure Development

  1. On the Individual Components tab, select the following:
    在 Individual Components 选项卡上,选择以下选项:

• Git for Windows

All the code samples in this chapter can be found in the GitHub repository for this book at https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter01.
本章中的所有代码示例都可以在本书的 GitHub 存储库中找到,网址为 https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter01

Now, you have an environment in which you can follow and try the code used in this book.
现在,您有一个环境,可以在其中遵循和尝试本书中使用的代码。

A brief history of the Microsoft Web API
Microsoft Web API 简史

Back in 2007, .NET web applications went through an evolution with the introduction of ASP.NET MVC. Since then, .NET has provided native support for the Model-View-Controller pattern that was common in other languages.
早在 2007 年，随着 ASP.NET MVC 的推出，.NET Web 应用程序经历了一场演变。从那时起，.NET 就为其他语言中常见的 Model-View-Controller 模式提供了本机支持。

Five years later, in 2012, RESTful APIs were the new trend on the internet and .NET responded to this with a new approach for developing APIs, called ASP.NET Web API. It was a significant improvement over Windows Communication Foundation (WCF) because it was easier to develop services for the web. Later, in ASP.NET Core these frameworks were unified under the name ASP.NET Core MVC: one single framework with which to develop web applications and APIs.
五年后,即 2012 年,RESTful API 成为 Internet 上的新趋势,.NET 以一种称为 ASP.NET Web API 的 API 开发新方法对此做出了回应。与 Windows Communication Foundation (WCF) 相比,这是一个重大改进,因为它更容易开发 Web 服务。后来,在 ASP.NET Core 中,这些框架统一为 ASP.NET Core MVC:一个用于开发 Web 应用程序和 API 的单一框架。

In ASP.NET Core MVC applications, the controller is responsible for accepting inputs, orchestrating operations, and at the end, returning a response. A developer can extend the entire pipeline with filters, binding, validation, and much more. It’s a fully featured framework for building modern web applications.
在 ASP.NET Core MVC 应用程序中，控制器负责接受输入、编排操作，并在最后返回响应。开发人员可以使用过滤器、绑定、验证等来扩展整个管道。它是一个功能齐全的框架，用于构建现代 Web 应用程序。

But in the real world, there are also scenarios and use cases where you don’t need all the features of the MVC framework or you have to factor in a constraint on performance. ASP.NET Core implements a lot of middleware that you can remove from or add to your applications at will, but there are a lot of common features that you would need to implement by yourself in this scenario.
但在现实世界中,也有一些场景和用例不需要 MVC 框架的所有功能,或者必须考虑性能约束。ASP.NET Core 实现了许多中间件,你可以随意从应用程序中删除或添加到应用程序中,但在这种情况下,有许多常见功能需要你自己实现。

At last, ASP.NET Core 6.0 has filled these gaps with minimal APIs.
最后，ASP.NET Core 6.0 用最小 API 填补了这些空白。

Now that we have covered a brief history of minimal APIs, we will start creating a new minimal API project in the next section.
现在我们已经简要介绍了最小 API 的历史,我们将在下一节中开始创建一个新的最小 API 项目。

Creating a new minimal API project
创建新的最小 API 项目

Let’s start with our first project and try to analyze the new template for the minimal API approach when writing a RESTful API.
让我们从第一个项目开始,尝试在编写 RESTful API 时分析最小 API 方法的新模板。

In this section, we will create our first minimal API project. We will start by using Visual Studio 2022 and then we will show how you can also create the project with Visual Studio Code and the .NET CLI.
在本节中,我们将创建我们的第一个最小 API 项目。我们将从使用 Visual Studio 2022 开始,然后我们将展示如何使用 Visual Studio Code 和 .NET CLI 创建项目。

Creating the project with Visual Studio 2022
使用 Visual Studio 2022 创建项目

Follow these steps to create a new project in Visual Studio 2022:
按照以下步骤在 Visual Studio 2022 中创建新项目:

  1. Open Visual Studio 2022 and on the main screen, click on Create a new project:
    打开 Visual Studio 2022 并在主屏幕上单击 Create a new project:

Figure 1.1 – Visual Studio 2022 splash screen
图 1.1 – Visual Studio 2022 初始屏幕

  1. On the next screen, write API in the textbox at the top of the window and select the template called ASP.NET Core Web API:
    在下一个屏幕上,在窗口顶部的文本框中编写 API,然后选择名为 ASP.NET Core Web API 的模板:

    Figure 1.2 – Create a new project screen
    图 1.2 – Create a new project 屏幕

  2. Next, on the Configure your new project screen, insert a name for the new project and select the root folder for your new solution:
    接下来,在 Configure your new project 屏幕上,插入新项目的名称,然后选择新解决方案的根文件夹:

    Figure 1.3 – Configure your new project screen
    图 1.3 – 配置您的新项目屏幕

For this example we will use the name Chapter01, but you can choose any name that appeals to you.
在此示例中,我们将使用名称 Chapter01,但您可以选择任何吸引您的名称。

  1. On the following Additional information screen, make sure to select .NET 6.0 (Long-term-support) from the Framework dropdown. And most important of all, uncheck the Use controllers (uncheck to use minimal APIs) option.
    在下面的 Additional information 屏幕上,确保从 Framework 下拉列表中选择 .NET 6.0 (Long-term-support)。最重要的是,取消选中 Use controllers (取消选中以使用最少的 API) 选项。

Figure 1.4 – Additional information screen

  1. Click Create and, after a few seconds, you will see the code of your new minimal API project.
    单击 Create(创建),几秒钟后,您将看到新的最小 API 项目的代码。

Now we are going to show how to create the same project using Visual Studio Code and the .NET CLI.
现在,我们将展示如何使用 Visual Studio Code 和 .NET CLI 创建相同的项目。

Creating the project with Visual Studio Code
使用 Visual Studio Code 创建项目

Creating a project with Visual Studio Code is easier and faster than with Visual Studio 2022 because you don’t have to use a UI or wizard, rather just a terminal and the .NET CLI.
使用 Visual Studio Code 创建项目比使用 Visual Studio 2022 更容易、更快捷,因为您不必使用 UI 或向导,而只需使用终端和 .NET CLI。

You don’t need to install anything new for this because the .NET CLI is included with the .NET 6 installation (as in the previous versions of the .NET SDKs). Follow these steps to create a project using Visual Studio Code:
您无需为此安装任何新内容,因为 .NET CLI 包含在 .NET 6 安装中(与以前版本的 .NET SDK 一样)。按照以下步骤使用 Visual Studio Code 创建项目:

  1. Open your console, shell, or Bash terminal, and switch to your working directory.
    打开您的控制台、shell 或 Bash 终端,然后切换到您的工作目录。

  2. Use the following command to create a new Web API application:
    使用以下命令创建新的 Web API 应用程序:

    dotnet new webapi -minimal -o Chapter01

As you can see, we have inserted the -minimal parameter in the preceding command to use the minimal API project template instead of the ASP.NET Core template with the controllers.
如您所见,我们在前面的命令中插入了 -minimal 参数,以使用最小 API 项目模板,而不是控制器的 ASP.NET Core 模板。

  1. Now open the new project with Visual Studio Code using the following commands:
    现在使用以下命令使用 Visual Studio Code 打开新项目:

    cd Chapter01
    code .

Now that we know how to create a new minimal API project, we are going to have a quick look at the structure of this new template.
现在我们知道如何创建一个新的最小 API 项目,我们将快速了解一下这个新模板的结构。

Looking at the structure of the project
查看项目结构

Whether you are using Visual Studio or Visual Studio Code, you should see the following code in the Program.cs file:
无论您使用的是 Visual Studio 还是 Visual Studio Code,您都应该在 Program.cs 文件中看到以下代码:

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

var summaries = new[]
{
    "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm",
    "Balmy", "Hot", "Sweltering", "Scorching"
};

app.MapGet("/weatherforecast", () =>
{
    var forecast = Enumerable.Range(1, 5).Select(index =>
        new WeatherForecast
        (
            DateTime.Now.AddDays(index),
            Random.Shared.Next(-20, 55),
            summaries[Random.Shared.Next(summaries.Length)]
        ))
        .ToArray();
    return forecast;
})
.WithName("GetWeatherForecast");

app.Run();

internal record WeatherForecast(DateTime Date, int TemperatureC, string? Summary)
{
    public int TemperatureF => 32 + (int)(TemperatureC / 0.5556);
}

First of all, with the minimal API approach, all of your code will be inside the Program.cs file. If you are a seasoned .NET developer, it’s easy to understand the preceding code, and you’ll find it similar to some of the things you’ve always used with the controller approach.
首先,使用最小 API 方法,您的所有代码都将位于 Program.cs 文件中。如果您是一位经验丰富的 .NET 开发人员,则很容易理解前面的代码,并且您会发现它类似于您一直使用控制器方法的一些内容。

At the end of the day, it’s another way to write an API, but it’s based on ASP.NET Core.
归根结底,这是编写 API 的另一种方式,但它基于 ASP.NET Core。

However, if you are new to ASP.NET, this single-file approach is easy to follow, and it makes it straightforward to extend the template code and add more features to this API.
但是，如果您不熟悉 ASP.NET，这种单文件方法很容易上手，也让扩展模板中的代码并向此 API 添加更多功能变得简单明了。

Don’t forget that minimal means that it contains the minimum set of components needed to build an HTTP API but it doesn’t mean that the application you are going to build will be simple. It will require a good design like any other .NET application.
不要忘记,minimal 意味着它包含构建 HTTP API 所需的最少组件集,但这并不意味着您要构建的应用程序会很简单。与任何其他 .NET 应用程序一样,它需要良好的设计。

As a final point, the minimal API approach is not a replacement for the MVC approach. It’s just another way to write the same thing.
最后一点,最小 API 方法不能替代 MVC 方法。这只是另一种写同样东西的方法。

Let’s go back to the code.
让我们回到代码。

Even the template of the minimal API uses the new approach of .NET 6 web applications: a top-level statement.
即使是最小 API 的模板也使用 .NET 6 Web 应用程序的新方法:顶级语句。

It means that the project has a Program.cs file only instead of using two files to configure an application.
这意味着项目只有一个 Program.cs 文件,而不是使用两个文件来配置应用程序。

If you don’t like this style of coding, you can convert your application to the old template for ASP.NET Core 3.x/5. This approach still continues to work in .NET as well.
如果您不喜欢这种编码样式,可以将应用程序转换为 ASP.NET Core 3.x/5 的旧模板。此方法在 .NET 中也将继续有效。
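For reference, the same startup configuration could be written in the pre-.NET 6 style, with an explicit Main method and a separate Startup class. The following is only an illustrative sketch of that older pattern, not code generated by the template:

作为参考，同样的启动配置也可以用 .NET 6 之前的风格编写，即显式的 Main 方法加上单独的 Startup 类。以下只是旧模式的示意性草图，并非模板生成的代码：

```csharp
// Illustrative sketch of the older two-file pattern (Program + Startup).
public class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>())
            .Build()
            .Run();
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddEndpointsApiExplorer();
        services.AddSwaggerGen();
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseSwagger();
            app.UseSwaggerUI();
        }
        app.UseHttpsRedirection();
    }
}
```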

Important note : We can find more information about the .NET 6 top-level statements template at https://docs.microsoft.com/dotnet/core/tutorials/top-level-templates.
重要提示 : 我们可以在 https://docs.microsoft.com/dotnet/core/tutorials/top-level-templates 中找到有关 .NET 6 顶级语句模板的更多信息。

By default, the new template includes support for the OpenAPI Specification and more specifically, Swagger.
默认情况下,新模板包括对 OpenAPI 规范的支持,更具体地说,包括对 Swagger 的支持。

Let’s say that we have our documentation and playground for the endpoints working out of the box without any additional configuration needed.
假设我们有现成的端点文档和 Playground,无需任何额外的配置。

You can see the default configuration for Swagger in the following two lines of codes:
您可以在以下两行代码中看到 Swagger 的默认配置:

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
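If the defaults are not enough, AddSwaggerGen also accepts a configuration delegate. In the following sketch, the document title and version are illustrative values chosen for this example:

如果默认值不够用，AddSwaggerGen 还接受一个配置委托。在下面的草图中，文档标题和版本是为本示例选择的说明性值：

```csharp
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen(options =>
{
    // Illustrative title and version; the template itself uses the defaults.
    options.SwaggerDoc("v1", new Microsoft.OpenApi.Models.OpenApiInfo
    {
        Title = "Chapter01 API",
        Version = "v1"
    });
});
```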

Very often, you don’t want to expose Swagger and all the endpoints to the production or staging environments. The default template enables Swagger out of the box only in the development environment with the following lines of code:
通常,您不希望将 Swagger 和所有终端节点公开给生产或暂存环境。默认模板仅在开发环境中启用开箱即用的 Swagger,代码行如下:

if (app.Environment.IsDevelopment())
{
         app.UseSwagger();
         app.UseSwaggerUI();
}

If the application is running in the development environment, the Swagger documentation is included; in any other environment, it is not.
如果应用程序在开发环境中运行，则会包含 Swagger 文档；在其他环境中则不会。

Note : We’ll talk in detail about Swagger in Chapter 3, Working with Minimal APIs.
注意:我们将在第 3 章 使用最小 API 中详细讨论 Swagger。

In these last few lines of code in the template, we are introducing another generic concept for .NET 6 web applications: environments.
在模板的最后几行代码中,我们引入了 .NET 6 Web 应用程序的另一个通用概念:环境。

Typically, when we develop a professional application, there are a lot of phases through which an application is developed, tested, and finally published to the end users.
通常,当我们开发专业应用程序时,应用程序会经历许多开发、测试并最终发布给最终用户的阶段。

By convention, these phases are named development, staging, and production. As developers, we might want to change the behavior of the application based on the current environment.
按照惯例，这些阶段被称为开发（development）、暂存（staging）和生产（production）。作为开发人员，我们可能希望根据当前环境更改应用程序的行为。

There are several ways to access this information but the typical way to retrieve the actual environment in modern .NET 6 applications is to use environment variables. You can access the environment variables directly from the app variable in the Program.cs file.
有多种方法可以访问此信息,但在现代 .NET 6 应用程序中检索实际环境的典型方法是使用环境变量。您可以直接从 Program.cs 文件中的 app 变量访问环境变量。

The following code block shows how to retrieve all the information about the environments directly from the startup point of the application:
以下代码块演示如何直接从应用程序的启动点检索有关环境的所有信息:

if (app.Environment.IsDevelopment())
{
           // your code here
}
if (app.Environment.IsStaging())
{
           // your code here
}
if (app.Environment.IsProduction())
{
           // your code here
}

In many cases, you can define additional environments, and you can check your custom environment with the following code:
在许多情况下,您可以定义其他环境,并且可以使用以下代码检查您的自定义环境:

if (app.Environment.IsEnvironment("TestEnvironment"))
{
           // your code here
}
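The environment name checked by these methods comes from the ASPNETCORE_ENVIRONMENT environment variable (the IDE usually sets it via launchSettings.json). As a sketch, you could select the custom environment above before starting the application:

这些方法检查的环境名称来自 ASPNETCORE_ENVIRONMENT 环境变量（IDE 通常通过 launchSettings.json 设置它）。作为示意，您可以在启动应用程序之前选择上面的自定义环境：

```shell
# Illustrative: select the environment before starting the app.
# Linux/macOS
export ASPNETCORE_ENVIRONMENT=TestEnvironment
dotnet run

# Windows (PowerShell)
# $env:ASPNETCORE_ENVIRONMENT = "TestEnvironment"
# dotnet run
```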

To define routes and handlers in minimal APIs, we use the MapGet, MapPost, MapPut, and MapDelete methods. If you are used to using HTTP verbs, you will have noticed that the verb Patch is not present, but you can define any set of verbs using MapMethods.
要在最小的 API 中定义路由和处理程序,我们使用 MapGet、MapPost、MapPut 和 MapDelete 方法。如果您习惯使用 HTTP 动词,您会注意到动词 Patch 不存在,但您可以使用 MapMethods 定义任何动词集。

For instance, if you want to create a new endpoint to post some data to the API, you can write the following code:

例如,如果要创建一个新的终端节点以将一些数据发布到 API,则可以编写以下代码:

app.MapPost("/weatherforecast", async (WeatherForecast 
    model, IWeatherService repo) =>
{
         // ...
});

As you can see in the short preceding code, it’s very easy to add a new endpoint with the new minimal API template.
正如您在前面的简短代码中所看到的,使用新的最小 API 模板添加新终端节点非常容易。

It was more difficult previously, especially for a new developer, to code a new endpoint with binding parameters and use dependency injection.
以前,使用绑定参数编写新终端节点并使用依赖项注入更加困难,尤其是对于新开发人员而言。
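To make the preceding fragment concrete, here is a self-contained sketch. The IWeatherService interface, its implementation, and its registration are hypothetical, invented for illustration only — they are not part of the template:

为了使前面的片段更具体，下面是一个自成一体的草图。其中的 IWeatherService 接口、它的实现及其注册都是为说明而虚构的假设内容，并非模板的一部分：

```csharp
var builder = WebApplication.CreateBuilder(args);

// Hypothetical service registration: IWeatherService is invented for this example.
builder.Services.AddSingleton<IWeatherService, WeatherService>();

var app = builder.Build();

// The WeatherForecast body is bound from JSON;
// the service is resolved from the DI container.
app.MapPost("/weatherforecast", async (WeatherForecast model, IWeatherService repo) =>
{
    await repo.SaveAsync(model);
    return Results.Created("/weatherforecast", model);
});

app.Run();

record WeatherForecast(DateTime Date, int TemperatureC, string? Summary);

interface IWeatherService
{
    Task SaveAsync(WeatherForecast forecast);
}

class WeatherService : IWeatherService
{
    private readonly List<WeatherForecast> _store = new();

    public Task SaveAsync(WeatherForecast forecast)
    {
        _store.Add(forecast);
        return Task.CompletedTask;
    }
}
```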

Important note : We’ll talk in detail about routing in Chapter 2, Exploring Minimal APIs and Their Advantages, and about dependency injection in Chapter 4, Dependency Injection in a Minimal API Project.
重要提示 : 我们将在第 2 章 探索最小 API 及其优势 中详细讨论路由，并在第 4 章 最小 API 项目中的依赖注入 中详细讨论依赖注入。

Summary
总结

In this chapter, we first started with a brief history of minimal APIs. Next, we saw how to create a project with Visual Studio 2022 as well as Visual Studio Code and the .NET CLI. After that, we examined the structure of the new template, how to access different environments, and how to start interacting with REST endpoints.
在本章中,我们首先从最小 API 的简要历史开始。接下来,我们了解了如何使用 Visual Studio 2022 以及 Visual Studio Code 和 .NET CLI 创建项目。之后,我们检查了新模板的结构、如何访问不同的环境以及如何开始与 REST 端点交互。

In the next chapter, we will see how to bind parameters, the new routing configuration, and how to customize a response.
在下一章中,我们将了解如何绑定参数、新的路由配置以及如何自定义响应。

2 Exploring Minimal APIs and Their Advantages

2 探索最小 API 及其优势

In this chapter of the book, we will introduce some of the basic themes related to minimal APIs in .NET 6.0, showing how they differ from the controller-based web APIs that we have written in the previous version of .NET. We will also try to underline both the pros and the cons of this new approach of writing APIs.
在本书的这一章中,我们将介绍与 .NET 6.0 中的最小 API 相关的一些基本主题,展示它们与我们在早期版本的 .NET 中编写的基于控制器的 Web API 有何不同。我们还将尝试强调这种编写 API 的新方法的优缺点。

In this chapter, we will be covering the following topics:
在本章中,我们将介绍以下主题:

• Routing
• Parameter binding
• Exploring responses
• Controlling serialization
• Architecting a minimal API project

Technical requirements
技术要求

To follow the descriptions in this chapter, you will need to create an ASP.NET Core 6.0 Web API application. You can either use one of the following options:
要按照本章中的描述进行操作，您需要创建一个 ASP.NET Core 6.0 Web API 应用程序。您可以使用以下选项之一：

• Option 1: Click on the New | Project command in the File menu of Visual Studio 2022 – then, choose the ASP.NET Core Web API template. Select a name and the working directory in the wizard and be sure to uncheck the Use controllers (uncheck to use minimal APIs) option in the next step.
• 选项 1：在 Visual Studio 2022 的 File（文件）菜单中单击 New | Project（新建 | 项目）命令，然后选择 ASP.NET Core Web API 模板。在向导中选择名称和工作目录，并确保在下一步中取消选中 Use controllers (uncheck to use minimal APIs) 选项。

• Option 2: Open your console, shell, or Bash terminal, and change to your working directory. Use the following command to create a new Web API application:
选项 2:打开您的控制台、shell 或 Bash 终端,然后切换到您的工作目录。使用以下命令创建新的 Web API 应用程序:

dotnet new webapi -minimal -o Chapter02

Now, open the project in Visual Studio by double-clicking the project file, or in Visual Studio Code, by typing the following command in the already open console:
现在,通过在 Visual Studio 中双击项目文件或在 Visual Studio Code 中通过在已打开的控制台中键入以下命令来打开项目:

cd Chapter02
code .

Finally, you can safely remove all the code related to the WeatherForecast sample, as we don’t need it for this chapter.
最后,您可以安全地删除与 WeatherForecast 示例相关的所有代码,因为本章不需要它。

All the code samples in this chapter can be found in the GitHub repository for this book at https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter02.
本章中的所有代码示例都可以在本书的 GitHub 存储库中找到,网址为 https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter02

Routing
路由

According to the official Microsoft documentation available at https://docs.microsoft.com/aspnet/core/fundamentals/routing, the following definition is given for routing:
根据 https://docs.microsoft.com/aspnet/core/fundamentals/routing 上提供的官方 Microsoft 文档,路由给出了以下定义:

Routing is responsible for matching incoming HTTP requests and dispatching those requests to the app’s executable endpoints. Endpoints are the app’s units of executable request-handling code. Endpoints are defined in the app and configured when the app starts. The endpoint matching process can extract values from the request’s URL and provide those values for request processing. Using endpoint information from the app, routing is also able to generate URLs that map to endpoints.
路由负责匹配传入的 HTTP 请求并将这些请求分派到应用程序的可执行端点。端点是应用程序的可执行请求处理代码单元。终端节点在应用程序中定义,并在应用程序启动时进行配置。终端节点匹配过程可以从请求的 URL 中提取值,并提供这些值以供请求处理。使用应用程序中的终端节点信息,路由还能够生成映射到终端节点的 URL。

In controller-based web APIs, routing is defined via the UseEndpoints() method in Startup.cs or using data annotations such as Route, HttpGet, HttpPost, HttpPut, HttpPatch, and HttpDelete right over the action methods.
在基于控制器的 Web API 中，路由是通过 Startup.cs 中的 UseEndpoints() 方法定义的，或者直接在操作方法上使用数据注释（如 Route、HttpGet、HttpPost、HttpPut、HttpPatch 和 HttpDelete）来定义。

As mentioned in Chapter 1, Introduction to Minimal APIs in minimal APIs, we define the route patterns using the Map methods of the WebApplication object. Here’s an example:
如第 1 章 最小 API 简介中所述，在最小 API 中，我们使用 WebApplication 对象的 Map 方法定义路由模式。下面是一个示例：

app.MapGet("/hello-get", () => "[GET] Hello World!");
app.MapPost("/hello-post", () => "[POST] Hello World!");
app.MapPut("/hello-put", () => "[PUT] Hello World!");
app.MapDelete("/hello-delete", () => "[DELETE] Hello World!");

In this code, we have defined four endpoints, each with a different routing and method. Of course, we can use the same route pattern with different HTTP verbs.
在此代码中,我们定义了四个终端节点,每个终端节点都有不同的路由和方法。当然,我们可以对不同的 HTTP 动词使用相同的路由模式。
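For example, a single /products route (a hypothetical path chosen for illustration) can respond to several verbs, each mapped to its own handler:

例如，单个 /products 路由（为说明而选择的假设路径）可以响应多个动词，每个动词都映射到自己的处理程序：

```csharp
// The same route pattern, handled differently per HTTP verb.
app.MapGet("/products", () => "Read all products");
app.MapPost("/products", () => "Create a product");
app.MapPut("/products", () => "Replace a product");
app.MapDelete("/products", () => "Delete a product");
```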

Note : As soon as we add an endpoint to our application (for example, using MapGet()), UseRouting() is automatically added at the start of the middleware pipeline and UseEndpoints() at the end of the pipeline.
注意 : 一旦我们将端点添加到应用程序(例如,使用 MapGet()),UseRouting() 就会自动添加到中间件管道的开头,UseEndpoints() 会自动添加到管道的末尾。

As shown here, ASP.NET Core 6.0 provides Map methods for the most common HTTP verbs. If we need to use other verbs, we can use the generic MapMethods:
如此处所示，ASP.NET Core 6.0 为最常见的 HTTP 动词提供了 Map 方法。如果我们需要使用其他动词，我们可以使用通用的 MapMethods：

app.MapMethods("/hello-patch", new[] { HttpMethods.Patch },
    () => "[PATCH] Hello World!");
app.MapMethods("/hello-head", new[] { HttpMethods.Head },
    () => "[HEAD] Hello World!");
app.MapMethods("/hello-options", new[] { HttpMethods.Options },
    () => "[OPTIONS] Hello World!");

In the following sections, we will show in detail how routing works effectively and how we can control its behavior.
在以下部分中,我们将详细展示路由如何有效工作以及如何控制其行为。

Route handlers
路由处理程序

Methods that execute when a route URL matches (according to parameters and constraints, as described in the following sections) are called route handlers. Route handlers can be a lambda expression, a local function, an instance method, or a static method, whether synchronous or asynchronous:
当路由 URL 匹配时执行的方法(根据参数和约束,如以下部分所述)称为路由处理程序。路由处理程序可以是 lambda 表达式、本地函数、实例方法或静态方法,无论是同步方法还是异步方法:

• Here’s an example of a lambda expression (inline or using a variable):
以下是 lambda 表达式的示例(内联或使用变量):

app.MapGet("/hello-inline", () => "[INLINE LAMBDA] Hello World!");

var handler = () => "[LAMBDA VARIABLE] Hello World!";
app.MapGet("/hello", handler);

• Here’s an example of a local function:
下面是一个本地函数的示例:

string Hello() => "[LOCAL FUNCTION] Hello World!";

app.MapGet("/hello", Hello);

• The following is an example of an instance method:
以下是实例方法的示例:

var handler = new HelloHandler();
app.MapGet("/hello", handler.Hello);

class HelloHandler
{
    public string Hello()
        => "[INSTANCE METHOD] Hello World!";
}

• Here, we can see an example of a static method:
在这里,我们可以看到一个静态方法的示例:

app.MapGet("/hello", HelloHandler.Hello);

class HelloHandler
{
    public static string Hello()
        => "[STATIC METHOD] Hello World!";
}

Route parameters
路由参数

As with the previous versions of .NET, we can create route patterns with parameters that will be automatically captured by the handler:
与以前版本的 .NET 一样,我们可以创建路由模式,其中包含处理程序将自动捕获的参数:

app.MapGet("/users/{username}/products/{productId}",
    (string username, int productId)
        => $"The Username is {username} and the product Id is {productId}");

A route can contain an arbitrary number of parameters. When a request is made to this route, the parameters will be captured, parsed, and passed as arguments to the corresponding handler. In this way, the handler will always receive typed arguments (in the preceding sample, we are sure that the username is string and the product ID is int).
路由可以包含任意数量的参数。当向此路由发出请求时,参数将被捕获、解析并作为参数传递给相应的处理程序。这样,处理程序将始终接收类型化参数(在前面的示例中,我们确保 username 是 string,产品 ID 是 int)。

If the route values cannot be casted to the specified types, then an exception of the BadHttpRequestException type will be thrown, and the API will respond with a 400 Bad Request message.
如果无法将路由值强制转换为指定类型,则将引发 BadHttpRequestException 类型的异常,并且 API 将以 400 Bad Request 消息进行响应。

Route constraints
路由约束

Route constraints are used to restrict valid types for route parameters. Typical constraints allow us to specify that a parameter must be a number, a string, or a GUID. To specify a route constraint, we simply need to add a colon after the parameter name, then specify the constraint name:
路由约束用于限制路由参数的有效类型。典型约束允许我们指定参数必须是数字、字符串或 GUID。要指定路由约束,我们只需要在参数名称后添加一个冒号,然后指定约束名称:

app.MapGet("/users/{id:int}", (int id) => $"The user Id is {id}");
app.MapGet("/users/{id:guid}", (Guid id) => $"The user Guid is {id}");

Minimal APIs support all the route constraints that were already available in the previous versions of ASP.NET Core. You can find the full list of route constraints at the following link: https://docs.microsoft.com/aspnet/core/fundamentals/routing#route-constraint-reference.
最小 API 支持以前版本的 ASP.NET Core 中已经提供的所有路由约束。您可以在以下链接中找到路由约束的完整列表:https://docs.microsoft.com/aspnet/core/fundamentals/routing#route-constraint-reference

If, according to the constraints, no route matches the specified path, we don’t get an exception. Instead we obtain a 404 Not Found message, because, in fact, if the constraints do not fit, the route itself isn’t reachable. So, for example, in the following cases we get 404 responses:
如果根据约束，没有路由与指定的路径匹配，我们不会收到异常，而是会收到 404 Not Found 消息，因为事实上，如果约束不满足，路由本身就无法访问。因此，例如，在以下情况下，我们都会收到 404 响应：

Table 2.1 – Examples of an invalid path according to the route constraints
表 2.1 – 根据路由约束的无效路径示例

Every other argument in the handler that is not declared as a route constraint is expected, by default, in the query string. For example, see the following:
默认情况下,处理程序中未声明为路由约束的所有其他参数都应在查询字符串中。例如,请参阅以下内容:

// Matches hello?name=Marco
app.MapGet("/hello", (string name) => $"Hello, {name}!"); 

In the next section, Parameter binding, we’ll go deeper into how to use binding to further customize routing by specifying, for example, where to search for routing arguments, how to change their names, and how to have optional route parameters.
在下一节 参数绑定 中,我们将更深入地介绍如何使用 binding 进一步自定义路由,例如,指定在何处搜索路由参数、如何更改其名称以及如何拥有可选的路由参数。

Parameter binding
参数绑定

Parameter binding is the process that converts request data (i.e., URL paths, query strings, or the body) into strongly typed parameters that can be consumed by route handlers. ASP.NET Core minimal APIs support the following binding sources:
参数绑定是将请求数据(即 URL 路径、查询字符串或正文)转换为路由处理程序可以使用的强类型参数的过程。ASP.NET Core 最小 API 支持以下绑定源:

• Route values
• Query strings
• Headers
• The body (as JSON, the only format supported by default)
• A service provider (dependency injection)

We’ll talk in detail about dependency injection in Chapter 4, Implementing Dependency Injection.
我们将在 第 4 章 实现依赖注入 中详细讨论依赖注入。

As we’ll see later in this chapter, if necessary, we can customize the way in which binding is performed for a particular input. Unfortunately, in the current version, binding from Form is not natively supported in minimal APIs. This means that, for example, IFormFile is not supported either.
正如我们在本章后面看到的那样,如有必要,我们可以自定义对特定 input 执行绑定的方式。遗憾的是,在当前版本中,最小的 API 本身并不支持从 Form 进行绑定。这意味着,例如,IFormFile 也不受支持。

To better understand how parameter binding works, let’s take a look at the following API:
为了更好地理解参数绑定的工作原理,我们来看一下以下 API:

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddScoped<PeopleService>();
var app = builder.Build();

app.MapPut("/people/{id:int}",
    (int id, bool notify, Person person, PeopleService peopleService) => { });

app.Run();

public class PeopleService { }
public record class Person(string FirstName, string LastName);

Parameters that are passed to the handler are resolved in the following ways:
传递给处理程序的参数通过以下方式解析:

Table 2.2 – Parameter binding sources
表 2.2 – 参数绑定源

As we can see, ASP.NET Core is able to automatically understand where to search for parameters for binding, based on the route pattern and the types of the parameters themselves. For example, a complex type such as the Person class is expected in the request body.
正如我们所看到的,ASP.NET Core 能够根据路由模式和参数本身的类型,自动理解在何处搜索要绑定的参数。例如,请求正文中应包含复杂类型(如 Person 类)。

If needed, as in the previous versions of ASP.NET Core, we can use attributes to explicitly specify where parameters are bound from and, optionally, use different names for them. See the following endpoint:
如果需要,就像在早期版本的 ASP.NET Core 中一样,我们可以使用属性来显式指定参数的绑定位置,并可选择为它们使用不同的名称。请参阅以下终端节点:

app.MapGet("/search", (string q) => { });

The API can be invoked with /search?q=text. However, using q as the name of the argument isn’t a good idea, because its meaning is not self-explanatory. So, we can modify the handler using FromQueryAttribute:
可以使用 /search?q=text 调用该 API。但是，使用 q 作为参数名称并不是一个好主意，因为它的含义并非不言自明。因此，我们可以使用 FromQueryAttribute 修改处理程序：

app.MapGet("/search",
    ([FromQuery(Name = "q")] string searchText) => { });

In this way, the API still expects a query string parameter named q, but in the handler its value is now bound to the searchText argument.
这样,API 仍然需要名为 q 的查询字符串参数,但在处理程序中,其值现在绑定到 searchText 参数。

Note : According to the standard, the GET, DELETE, HEAD, and OPTIONS HTTP verbs should never have a body. If, nevertheless, you want to use it, you need to explicitly add the [FromBody] attribute to the handler argument; otherwise, you’ll get an InvalidOperationException error. However, keep in mind that this is a bad practice.
注意 : 根据标准，GET、DELETE、HEAD 和 OPTIONS 这些 HTTP 方法不应有正文。但是，如果仍要使用正文，则需要将 [FromBody] 属性显式添加到处理程序参数上；否则，您将收到 InvalidOperationException 错误。不过请记住，这是一种不好的做法。

By default, all the parameters in route handlers are required. So, if, according to routing, ASP.NET Core finds a valid route, but not all the required parameters are provided, we will get an error. For example, let’s look at the following method:
默认情况下,路由处理程序中的所有参数都是必需的。因此,如果根据路由,ASP.NET Core 找到了一个有效的路由,但未提供所有必需的参数,我们将收到错误。例如,让我们看看下面的方法:

app.MapGet("/people", (int pageIndex, int itemsPerPage) => { });

If we call the endpoint without the pageIndex or itemsPerPage query string values, we will obtain a BadHttpRequestException error, and the response will be 400 Bad Request.
如果我们在没有 pageIndex 或 itemsPerPage 查询字符串值的情况下调用终端节点,我们将获得 BadHttpRequestException 错误,并且响应将为 400 Bad Request。

To make the parameters optional, we just need to declare them as nullable or provide a default value. The latter case is the most common. However, if we adopt this solution, we cannot use a lambda expression for the handler. We need another approach, for example, a local function:
要使参数成为可选的,我们只需要将它们声明为 nullable 或提供默认值。后一种情况是最常见的。但是,如果我们采用此解决方案,则不能对处理程序使用 lambda 表达式。我们需要另一种方法,例如本地函数:

// This won't compile
//app.MapGet("/people", (int pageIndex = 0, int itemsPerPage = 50) => { });

string SearchMethod(int pageIndex = 0, int itemsPerPage = 50)
    => $"Sample result for page {pageIndex} getting {itemsPerPage} elements";

app.MapGet("/people", SearchMethod);

In this case, we are dealing with a query string, but the same rules apply to all the binding sources.
在本例中,我们正在处理查询字符串,但相同的规则适用于所有绑定源。

Keep in mind that if we use nullable reference types (which are enabled by default in .NET 6.0 projects) and we have, for example, a string parameter that could be null, we need to declare it as nullable – otherwise, we’ll get a BadHttpRequestException error again. The following example correctly defines the orderBy query string parameter as optional:
请记住,如果我们使用可为 null 的引用类型(在 .NET 6.0 项目中默认启用),并且我们有一个可能为 null 的字符串参数,则需要将其声明为可为 null,否则,我们将再次收到 BadHttpRequestException 错误。以下示例正确地将 orderBy 查询字符串参数定义为可选:

app.MapGet("/people", (string? orderBy) => $"Results ordered by {orderBy}");

Special bindings
特殊绑定

In controller-based web APIs, a controller that inherits from Microsoft.AspNetCore.Mvc.ControllerBase has access to some properties that allows it to get the context of the request and response: HttpContext, Request, Response, and User. In minimal APIs, we don’t have a base class, but we can still access this information because it is treated as a special binding that is always available to any handler:
在基于控制器的 Web API 中,从 Microsoft.AspNetCore.Mvc.ControllerBase 继承的控制器有权访问一些属性,这些属性允许它获取请求和响应的上下文:HttpContext、Request、Response 和 User。在最小的 API 中,我们没有基类,但我们仍然可以访问此信息,因为它被视为任何处理程序始终可用的特殊绑定:

app.MapGet("/products", (HttpContext context, HttpRequest req, HttpResponse res, ClaimsPrincipal user) => { });

Tip : We can also access all these objects using the IHttpContextAccessor interface, as we did in the previous ASP.NET Core versions.
提示 : 我们还可以使用 IHttpContextAccessor 接口访问所有这些对象,就像我们在以前的 ASP.NET Core 版本中所做的那样。

Custom binding
自定义绑定

In some cases, the default way in which parameter binding works isn’t enough for our purpose. In minimal APIs, we don’t have support for the IModelBinderProvider and IModelBinder interfaces, but we have two alternatives to implement custom model binding.
在某些情况下,参数绑定的默认工作方式不足以满足我们的目的。在最小的 API 中,我们不支持 IModelBinderProvider 和 IModelBinder 接口,但我们有两种实现自定义模型绑定的方法。

Important note : The IModelBinderProvider and IModelBinder interfaces in controller-based projects allow us to define the mapping between the request data and the application model. The default model binder provided by ASP.NET Core supports most of the common data types, but, if necessary, we can extend the system by creating our own providers. We can find more information at the following link: https://docs.microsoft.com/aspnet/core/mvc/advanced/custom-model-binding.
重要提示 : 基于控制器的项目中的 IModelBinderProvider 和 IModelBinder 接口允许我们定义请求数据和应用程序模型之间的映射。ASP.NET Core 提供的默认模型 Binder 支持大多数常见数据类型,但如有必要,我们可以通过创建自己的提供程序来扩展系统。我们可以在以下链接中找到更多信息:https://docs.microsoft.com/aspnet/core/mvc/advanced/custom-model-binding

If we want to bind a parameter that comes from a route, query string, or header to a custom type, we can add a static TryParse method to the type:
如果我们想将来自路由、查询字符串或标头的参数绑定到自定义类型,我们可以向该类型添加静态 TryParse 方法:

// GET /navigate?location=43.8427,7.8527
app.MapGet("/navigate", (Location location) =>
    $"Location: {location.Latitude}, {location.Longitude}");
public class Location
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }

    public static bool TryParse(string? value,
        IFormatProvider? provider, out Location? location)
    {
        if (!string.IsNullOrWhiteSpace(value))
        {
            var values = value.Split(',',
                StringSplitOptions.RemoveEmptyEntries);
            if (values.Length == 2
                && double.TryParse(values[0],
                    NumberStyles.AllowDecimalPoint,
                    CultureInfo.InvariantCulture, out var latitude)
                && double.TryParse(values[1],
                    NumberStyles.AllowDecimalPoint,
                    CultureInfo.InvariantCulture, out var longitude))
            {
                location = new Location
                {
                    Latitude = latitude,
                    Longitude = longitude
                };
                return true;
            }
        }

        location = null;
        return false;
    }
}

In the TryParse method, we can try to split the input parameter and check whether it contains two decimal values: in this case, we parse the numbers to build the Location object and we return true. Otherwise, we return false because the Location object cannot be initialized.
在 TryParse 方法中,我们可以尝试拆分输入参数并检查它是否包含两个十进制值:在本例中,我们解析数字以构建 Location 对象并返回 true。否则,我们将返回 false,因为无法初始化 Location 对象。
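Outside the ASP.NET pipeline, the same parsing logic can be exercised directly. The following condensed sketch repeats the Location type from the listing above (slightly shortened) so it is self-contained; the sample coordinates are arbitrary:
在 ASP.NET 管道之外，也可以直接验证相同的解析逻辑。以下精简示例为保持自包含而重复了上面清单中的 Location 类型（略有缩短）；示例坐标是任意选取的：

```csharp
using System;
using System.Globalization;

public class Location
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }

    // Condensed version of the TryParse shown in the listing above:
    // expects "lat,lon" with invariant-culture decimal points.
    public static bool TryParse(string? value, IFormatProvider? provider, out Location? location)
    {
        location = null;
        var parts = value?.Split(',', StringSplitOptions.RemoveEmptyEntries);
        if (parts?.Length == 2
            && double.TryParse(parts[0], NumberStyles.AllowDecimalPoint, CultureInfo.InvariantCulture, out var lat)
            && double.TryParse(parts[1], NumberStyles.AllowDecimalPoint, CultureInfo.InvariantCulture, out var lon))
        {
            location = new Location { Latitude = lat, Longitude = lon };
            return true;
        }
        return false;
    }
}
```

Given "43.8427,7.8527", the method yields a populated Location and returns true; for any malformed string it returns false, which is what makes minimal APIs answer 400 Bad Request.
对于 "43.8427,7.8527"，该方法会生成已填充的 Location 并返回 true；对于任何格式错误的字符串则返回 false，这正是最小 API 返回 400 Bad Request 的原因。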

Important note : When the minimal API finds that a type contains a static TryParse method, even if it is a complex type, it assumes that it is passed in the route or the query string, based on the routing template. We can use the [FromHeader] attributes to change the binding source. In any case, TryParse will never be invoked for the body of the request.
重要提示 : 当最小 API 发现某个类型包含静态 TryParse 方法时,即使它是一个复杂类型,它也会根据路由模板假定它是在路由或查询字符串中传递的。我们可以使用 [FromHeader] 属性来更改绑定源。在任何情况下,都不会为请求正文调用 TryParse。

If we need to completely control how binding is performed, we can implement a static BindAsync method on the type. This isn’t a very common solution, but in some cases, it can be useful:
如果我们需要完全控制绑定的执行方式,我们可以在类型上实现静态 BindAsync 方法。这不是一个非常常见的解决方案,但在某些情况下,它可能很有用:

// POST /navigate?lat=43.8427&lon=7.8527
app.MapPost("/navigate", (Location location) => 
   $"Location: {location.Latitude}, {location.Longitude}");
public class Location
{
    // ...
    public static ValueTask<Location?> BindAsync(
        HttpContext context, ParameterInfo parameter)
    {
        if (double.TryParse(context.Request.Query["lat"],
                NumberStyles.AllowDecimalPoint,
                CultureInfo.InvariantCulture, out var latitude)
            && double.TryParse(context.Request.Query["lon"],
                NumberStyles.AllowDecimalPoint,
                CultureInfo.InvariantCulture, out var longitude))
        {
            var location = new Location
            {
                Latitude = latitude,
                Longitude = longitude
            };
            return ValueTask.FromResult<Location?>(location);
        }

        return ValueTask.FromResult<Location?>(null);
    }
}

As we can see, the BindAsync method takes the whole HttpContext as an argument, so we can read all the information we need to create the actual Location object that is passed to the route handler. In this example, we read two query string parameters (lat and lon), but (in the case of POST, PUT, or PATCH methods) we can also read the entire body of the request and manually parse its content. This can be useful, for instance, if we need to handle requests that have a format other than JSON (which, as said before, is the only one supported by default).
正如我们所看到的,BindAsync 方法将整个 HttpContext 作为参数,因此我们可以读取创建传递给路由处理程序的实际 Location 对象所需的所有信息。在此示例中,我们读取两个查询字符串参数(lat 和 lon),但(在 POST、PUT 或 PATCH 方法的情况下)我们还可以读取请求的整个正文并手动解析其内容。例如,如果我们需要处理格式不是 JSON 的请求(如前所述,JSON 是默认支持的唯一格式),这可能很有用。

If the BindAsync method returns null while the corresponding route handler parameter cannot assume this value (as in the previous example), we will get a BadHttpRequestException error, which, as usual, will be wrapped in a 400 Bad Request response.
如果 BindAsync 方法返回 null，而相应的路由处理程序参数不能接受此值（如前面的示例所示），我们将收到 BadHttpRequestException 错误，并且像往常一样，它会被包装在 400 Bad Request 响应中。

Important note : We shouldn’t define both the TryParse and BindAsync methods in the same type; if both are present, BindAsync always takes precedence (that is, TryParse will never be invoked).
重要提示 : 我们不应该在同一个类型上同时定义 TryParse 和 BindAsync 方法；如果两者都存在，则 BindAsync 始终优先（即，永远不会调用 TryParse）。

Now that we have looked at parameter binding and understood how to use it and customize its behavior, let’s see how to work with responses in minimal APIs.
现在我们已经了解了参数绑定并了解了如何使用它并自定义其行为,让我们看看如何在最小的 API 中使用响应。

Exploring responses
探索响应

As with controller-based projects, with route handlers of minimal APIs as well, we can directly return a string or a class (either synchronously or asynchronously):
与基于控制器的项目一样,使用最小 API 的路由处理程序,我们可以直接返回字符串或类(同步或异步):

• If we return a string (as in the examples of the previous section), the framework writes the string directly to the response, setting its content type to text/plain and the status code to 200 OK
如果我们返回一个字符串(如上一节的示例所示),框架会将该字符串直接写入响应,将其内容类型设置为 text/plain,并将状态代码设置为 200 OK

• If we use a class, the object is serialized into the JSON format and sent to the response with the application/json content type and a 200 OK status code
如果我们使用类,则对象将序列化为 JSON 格式,并使用 application/json 内容类型和 200 OK 状态代码发送到响应

However, in a real application, we typically need to control the response type and the status code. In this case, we can use the static Results class, which allows us to return an instance of the IResult interface, which in minimal APIs acts as IActionResult does for controllers. For instance, we can use it to return a 201 Created response rather than a 400 Bad Request or a 404 Not Found message. Let’s look at some examples:
但是,在实际应用程序中,我们通常需要控制响应类型和状态代码。在这种情况下,我们可以使用静态 Results 类,该类允许我们返回 IResult 接口的实例,该实例在最小的 API 中的作用类似于 IActionResult 对控制器的作用。例如,我们可以使用它来返回 201 Created 响应,而不是 400 Bad Request 或 404 Not Found 消息。我们来看看一些例子:

app.MapGet("/ok", () => Results.Ok(new Person("Donald", "Duck")));
app.MapGet("/notfound", () => Results.NotFound());
app.MapPost("/badrequest", () =>
{
    // Creates a 400 response with a JSON body.
    return Results.BadRequest(new { ErrorMessage = "Unable to complete the request" });
});
app.MapGet("/download", (string fileName) => Results.File(fileName));

record class Person(string FirstName, string LastName);

Each method of the Results class is responsible for setting the response type and status code that correspond to the meaning of the method itself (e.g., the Results.NotFound() method returns a 404 Not Found response). Note that even if we typically need to return an object in the case of a 200 OK response (with Results.Ok()), it isn’t the only method that allows this. Many other methods allow us to include a custom response; in all these cases, the response type will be set to application/json and the object will automatically be JSON-serialized.
Results 类的每个方法都负责设置与方法本身的含义相对应的响应类型和状态代码(例如,Results.NotFound() 方法返回 404 Not Found 响应)。请注意,即使我们通常需要在 200 OK 响应的情况下返回一个对象(使用 Results.Ok()),它也不是唯一允许这样做的方法。许多其他方法允许我们包含自定义响应;在所有这些情况下,响应类型都将设置为 application/json,并且对象将自动进行 JSON 序列化。

The current version of minimal APIs does not support content negotiation. We only have a few methods that allow us to explicitly set the content type, when getting a file with Results.Bytes(), Results.Stream(), and Results.File(), or when using Results.Text() and Results.Content(). In all other cases, when we’re dealing with complex objects, the response will be in JSON format. This is a precise design choice since most developers rarely need to support other media types. By supporting only JSON without performing content negotiation, minimal APIs can be very efficient.
当前版本的 minimal API 不支持内容协商。只有少数方法允许我们显式设置内容类型,当使用 Results.Bytes()、Results.Stream() 和 Results.File() 获取文件时,或者使用 Results.Text() 和 Results.Content() 时。在所有其他情况下,当我们处理复杂对象时,响应将采用 JSON 格式。这是一个精确的设计选择,因为大多数开发人员很少需要支持其他媒体类型。通过仅支持 JSON 而不执行内容协商,最少的 API 可以非常高效。

However, this approach isn’t enough in all scenarios. In some cases, we may need to create a custom response type, for example, if we want to return an HTML or XML response instead of the standard JSON. We can manually use the Results.Content() method (which allows us to specify the content as a simple string with a particular content type), but, if we have this requirement, it is better to implement a custom IResult type, so that the solution can be reused.
但是,这种方法并非在所有情况下都足够。在某些情况下,我们可能需要创建自定义响应类型,例如,如果我们要返回 HTML 或 XML 响应而不是标准 JSON。我们可以手动使用 Results.Content() 方法(它允许我们将内容指定为具有特定内容类型的简单字符串),但是,如果我们有此要求,最好实现自定义 IResult 类型,以便可以重用解决方案。

For example, let’s suppose that we want to serialize objects in XML instead of JSON. We can then define an XmlResult class that implements the IResult interface:
例如,假设我们想用 XML 而不是 JSON 来序列化对象。然后,我们可以定义一个实现 IResult 接口的 XmlResult 类:

public class XmlResult : IResult
{
    private readonly object value;

    public XmlResult(object value)
    {
        this.value = value;
    }

    public Task ExecuteAsync(HttpContext httpContext)
    {
        using var writer = new StringWriter();
        var serializer = new XmlSerializer(value.GetType());
        serializer.Serialize(writer, value);
        var xml = writer.ToString();

        httpContext.Response.ContentType = MediaTypeNames.Application.Xml;
        httpContext.Response.ContentLength = Encoding.UTF8.GetByteCount(xml);
        return httpContext.Response.WriteAsync(xml);
    }
}

The IResult interface requires us to implement the ExecuteAsync method, which receives the current HttpContext as an argument. We serialize the value using the XmlSerializer class and then write it to the response, specifying the correct response type.
IResult 接口要求我们实现 ExecuteAsync 方法,该方法接收当前 HttpContext 作为参数。我们使用 XmlSerializer 类序列化该值,然后将其写入响应,并指定正确的响应类型。
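The serialization core of ExecuteAsync can be tried in isolation, without an HttpContext. This sketch extracts just the XmlSerializer logic into a helper (XmlDemo and the City shape here are ours, for illustration):
ExecuteAsync 的序列化核心可以脱离 HttpContext 单独验证。以下示例仅将 XmlSerializer 逻辑提取到一个辅助方法中（这里的 XmlDemo 和 City 类型是为演示而设的）：

```csharp
using System.IO;
using System.Xml.Serialization;

public static class XmlDemo
{
    // Serializes any public type to an XML string, exactly as
    // XmlResult.ExecuteAsync does before writing to the response.
    public static string ToXml(object value)
    {
        using var writer = new StringWriter();
        var serializer = new XmlSerializer(value.GetType());
        serializer.Serialize(writer, value);
        return writer.ToString();
    }
}

public class City
{
    public string? Name { get; set; }
}
```

Note that XmlSerializer requires a public type with a parameterless constructor, which is why this sketch uses a plain class with settable properties.
请注意，XmlSerializer 要求类型是公共的且具有无参构造函数，因此此示例使用带可写属性的普通类。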

Now, we can directly use the new XmlResult type in our route handlers. However, best practices suggest that we create an extension method for the IResultExtensions interface, as with the following one:
现在,我们可以直接在路由处理程序中使用新的 XmlResult 类型。但是,最佳实践建议我们为 IResultExtensions 接口创建一个扩展方法,如下所示:

public static class ResultExtensions
{
    public static IResult Xml(this IResultExtensions resultExtensions, object value)
        => new XmlResult(value);
}

In this way, we have a new Xml method available on the Results.Extensions property:
这样,我们在 Results.Extensions 属性上就有了一个新的 Xml 方法:

app.MapGet("/xml", () => Results.Extensions.Xml(new City { Name = "Taggia" }));
public record class City
{
    public string? Name { get; init; }
}

The benefit of this approach is that we can reuse it everywhere we need to deal with XML without having to manually handle the serialization and the response type (as we should have done using the Result.Content() method instead).
这种方法的好处是,我们可以在需要处理 XML 的任何地方重用它,而不必手动处理序列化和响应类型(就像我们应该使用 Result.Content() 方法所做的那样)。

Tip : If we want to perform content negotiation, we need to manually check the Accept header of the HttpRequest object, which we can pass to our handlers, and then create the correct response accordingly.
提示 : 如果我们想执行内容协商，我们需要手动检查 HttpRequest 对象的 Accept 标头（可以将其传递给我们的处理程序），然后相应地创建正确的响应。

After analyzing how to properly handle responses in minimal APIs, we’ll see how to control the way our data is serialized and deserialized in the next section.
在分析了如何在最小 API 中正确处理响应之后,我们将在下一节中了解如何控制数据的序列化和反序列化方式。

Controlling serialization
控制序列化

As described in the previous sections, minimal APIs only provide built-in support for the JSON format. In particular, the framework uses System.Text.Json for serialization and deserialization. In controller-based APIs, we can change this default and use JSON.NET instead. This is not possible when working with minimal APIs: we can’t replace the serializer at all.
如前几节所述,最小 API 仅提供对 JSON 格式的内置支持。具体而言,框架使用 System.Text.Json 进行序列化和反序列化。在基于控制器的 API 中,我们可以更改此默认值并改用 JSON.NET。当使用最少的 API 时,这是不可能的:我们根本无法替换序列化器。

The built-in serializer uses the following options:
内置序列化程序使用以下选项:

• Case-insensitive property names during deserialization
反序列化期间不区分大小写的属性名称

• Camel case property naming policy
驼峰式大小写属性命名策略

• Support for quoted numbers (JSON strings for number properties)
支持带引号的数字(数字属性的 JSON 字符串)

Note : We can find more information about the System.Text.Json namespace and all the APIs it provides at the following link: https://docs.microsoft.com/dotnet/api/system.text.json.
注意 : 我们可以在以下链接中找到有关 System.Text.Json 命名空间及其提供的所有 API 的更多信息:https://docs.microsoft.com/dotnet/api/system.text.json

In controller-based APIs, we can customize these settings by calling AddJsonOptions() fluently after AddControllers(). In minimal APIs, we can’t use this approach since we don’t have controllers at all, so we need to explicitly call the Configure method for JsonOptions. So, let’s consider this handler:
在基于控制器的 API 中,我们可以通过在 AddControllers() 之后流畅地调用 AddJsonOptions() 来自定义这些设置。在最小的 API 中,我们不能使用这种方法,因为我们根本没有控制器,因此我们需要显式调用 JsonOptions 的 Configure 方法。那么,让我们考虑一下这个处理程序:

app.MapGet("/product", () =>
{
    var product = new Product("Apple", null, 0.42, 6);
    return Results.Ok(product); 
});
public record class Product(string Name, string? Description, double UnitPrice, int Quantity)
{
    public double TotalPrice => UnitPrice * Quantity;
}

Using the default JSON options, we get this result:
使用默认的 JSON 选项,我们得到以下结果:

{
    "name": "Apple",
    "description": null,
    "unitPrice": 0.42,
    "quantity": 6,
    "totalPrice": 2.52
}

Now, let’s configure JsonOptions:
现在,让我们配置 JsonOptions:

var builder = WebApplication.CreateBuilder(args);
builder.Services.Configure<Microsoft.AspNetCore.Http.Json.JsonOptions>(options =>
{
    options.SerializerOptions.DefaultIgnoreCondition =
        JsonIgnoreCondition.WhenWritingNull;
    options.SerializerOptions.IgnoreReadOnlyProperties = true;
});

Calling the /product endpoint again, we’ll now get the following:
再次调用 /product 端点,我们现在将获得以下内容:

{
    "name": "Apple",
    "unitPrice": 0.42,
    "quantity": 6
}

As expected, the Description property hasn’t been serialized because it is null, as well as TotalPrice, which isn’t included in the response because it is read-only.
正如预期的那样,Description 属性尚未序列化,因为它为 null,以及 TotalPrice,由于它是只读的,因此未包含在响应中。
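The effect of these two options can also be checked with a plain System.Text.Json call, outside ASP.NET. This sketch reuses the Product record from the listing; the JsonDemo helper is ours, and it mirrors the configured JsonOptions (camelCase names are a minimal API default, set here explicitly):
这两个选项的效果也可以在 ASP.NET 之外通过直接调用 System.Text.Json 来验证。以下示例复用了清单中的 Product 记录；JsonDemo 辅助类是我们自行添加的，它模拟了上面配置的 JsonOptions（camelCase 命名是最小 API 的默认行为，此处显式设置）：

```csharp
using System.Text.Json;
using System.Text.Json.Serialization;

public record class Product(string Name, string? Description, double UnitPrice, int Quantity)
{
    public double TotalPrice => UnitPrice * Quantity;
}

public static class JsonDemo
{
    // Mirrors the JsonOptions configured above: camelCase names,
    // skip null values, skip read-only properties.
    public static string Serialize(Product product) =>
        JsonSerializer.Serialize(product, new JsonSerializerOptions
        {
            PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
            DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull,
            IgnoreReadOnlyProperties = true
        });
}
```

Serializing new Product("Apple", null, 0.42, 6) with this helper yields a JSON object that contains name, unitPrice, and quantity but omits description and totalPrice, matching the response shown above.
使用此辅助方法序列化 new Product("Apple", null, 0.42, 6) 会生成包含 name、unitPrice 和 quantity 但省略 description 和 totalPrice 的 JSON 对象，与上面显示的响应一致。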

Another typical use case for JsonOptions is when we want to add converters that will be automatically applied for each serialization or deserialization, for example, JsonStringEnumConverter to convert enumeration values into or from strings.
JsonOptions 的另一个典型用例是添加会在每次序列化或反序列化时自动应用的转换器，例如，用于在枚举值和字符串之间相互转换的 JsonStringEnumConverter。
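As a sketch of that converter in action (the OrderStatus enum and Order record here are invented for illustration), an enum serializes as a number by default, but as its name once JsonStringEnumConverter is registered:
作为该转换器实际效果的示例（这里的 OrderStatus 枚举和 Order 记录是为演示而设的），枚举默认序列化为数字，而注册 JsonStringEnumConverter 后则序列化为其名称：

```csharp
using System.Text.Json;
using System.Text.Json.Serialization;

public enum OrderStatus { Pending, Shipped, Delivered }

public record class Order(int Id, OrderStatus Status);

public static class EnumJsonDemo
{
    // With the converter registered, OrderStatus.Shipped is written
    // as the string "Shipped" instead of the number 1.
    public static string Serialize(Order order) =>
        JsonSerializer.Serialize(order, new JsonSerializerOptions
        {
            Converters = { new JsonStringEnumConverter() }
        });
}
```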

Important note : Be aware that the JsonOptions class used by minimal APIs is the one available in the Microsoft.AspNetCore.Http.Json namespace. Do not confuse it with the one that is defined in the Microsoft.AspNetCore.Mvc namespace; the name of the object is the same, but the latter is valid only for controllers, so it has no effect if set in a minimal API project.
重要提示 : 请注意,最小 API 使用的 JsonOptions 类是 Microsoft.AspNetCore.Http.Json 命名空间中可用的类。不要将其与 Microsoft.AspNetCore.Mvc 命名空间中定义的名称混淆;对象的名称相同,但后者仅对控制器有效,因此如果在最小 API 项目中设置,则无效。

Because of the JSON-only support, if we do not explicitly add support for other formats, as described in the previous sections (using, for example, the BindAsync method on a custom type), minimal APIs will automatically perform some validations on the body binding source and handle the following scenarios:
由于仅支持 JSON,如果我们没有显式添加对其他格式的支持,如前面部分所述(例如,在自定义类型上使用 BindAsync 方法),则最小 API 将在正文绑定源上自动执行一些验证并处理以下情况:

Table 2.3 – The response status codes for body binding problems
表 2.3 – 正文绑定问题的响应状态代码

In these cases, because body validation fails, our route handlers will never be invoked, and we will get the response status codes shown in the preceding table directly.
在这些情况下,由于主体验证失败,我们的路由处理程序将永远不会被调用,我们将直接获取上表中显示的响应状态代码。

Now, we have covered all the pillars that we need to start developing minimal APIs. However, there is another important thing to talk about: the correct way to design a real project to avoid common mistakes within the architecture.
现在,我们已经涵盖了开始开发最小 API 所需的所有支柱。但是,还有一件重要的事情要谈:设计真实项目的正确方法,以避免架构中的常见错误。

Architecting a minimal API project
构建一个最小的 API 项目

Up to now, we have written route handlers directly in the Program.cs file. This is a perfectly supported scenario: with minimal APIs, we can write all our code inside this single file. In fact, almost all the samples show this solution. However, while this is allowed, we can easily imagine how this approach can lead to unstructured and therefore unmaintainable projects. If we have fewer endpoints, it is fine – otherwise, it is better to organize our handlers in separate files.
到目前为止,我们已经直接在 Program.cs 文件中编写了路由处理程序。这是一个完全支持的场景:使用最少的 API,我们可以在这个文件中编写所有代码。事实上,几乎所有样本都显示了这种解决方案。然而,虽然这是允许的,但我们可以很容易地想象这种方法如何导致非结构化的、因此无法维护的项目。如果端点较少,那很好 —— 否则,最好将我们的处理程序组织在单独的文件中。

Let’s suppose that we have the following code right in the Program.cs file because we have to handle CRUD operations:
假设 Program.cs 文件中有以下代码,因为我们必须处理 CRUD作:

app.MapGet("/api/people", (PeopleService peopleService) => { });
app.MapGet("/api/people/{id:guid}", (Guid id, PeopleService peopleService) => { });
app.MapPost("/api/people", (Person person, PeopleService peopleService) => { });
app.MapPut("/api/people/{id:guid}", (Guid id, Person person, PeopleService peopleService) => { });
app.MapDelete("/api/people/{id:guid}", (Guid id, PeopleService peopleService) => { });

It’s easy to imagine that, if we have all the implementation here (even if we’re using PeopleService to extract the business logic), this file can easily explode. So, in real scenarios, the inline lambda approach isn’t the best practice. We should use the other methods that we have covered in the Routing section to define the handlers instead. So, it is a good idea to create an external class to hold all the route handlers:
很容易想象,如果我们在这里拥有所有实现(即使我们使用 PeopleService 来提取业务逻辑),此文件很容易爆炸。因此,在实际场景中,内联 lambda 方法并不是最佳实践。我们应该使用 路由 部分介绍的其他方法来定义处理程序。因此,创建一个外部类来保存所有路由处理程序是一个好主意:

public class PeopleHandler
{
    public static void MapEndpoints(IEndpointRouteBuilder app)
    {
        app.MapGet("/api/people", GetList);
        app.MapGet("/api/people/{id:guid}", Get);
        app.MapPost("/api/people", Insert);
        app.MapPut("/api/people/{id:guid}", Update);
        app.MapDelete("/api/people/{id:guid}", Delete);
    }

    private static IResult GetList(PeopleService peopleService) { /* ... */ }
    private static IResult Get(Guid id, PeopleService peopleService) { /* ... */ }
    private static IResult Insert(Person person, PeopleService peopleService) { /* ... */ }
    private static IResult Update(Guid id, Person person, PeopleService peopleService) { /* ... */ }
    private static IResult Delete(Guid id) { /* ... */ }
}

We have grouped all the endpoint definitions inside the PeopleHandler.MapEndpoints static method, which takes the IEndpointRouteBuilder interface as an argument, which in turn is implemented by the WebApplication class. Then, instead of using lambda expressions, we have created separate methods for each handler, so that the code is much cleaner. In this way, to register all these handlers in our minimal API, we just need the following code in Program.cs:
我们已将所有端点定义分组到 PeopleHandler.MapEndpoints 静态方法中,该方法将 IEndpointRouteBuilder 接口作为参数,而该接口又由 WebApplication 类实现。然后,我们没有使用 lambda 表达式,而是为每个处理程序创建了单独的方法,以便代码更加简洁。这样,要在我们的最小 API 中注册所有这些处理程序,我们只需要在 Program.cs 中编写以下代码:

var builder = WebApplication.CreateBuilder(args);
// ..
var app = builder.Build();
// ..
PeopleHandler.MapEndpoints(app);
app.Run();

Going forward
展望未来

The approach just shown allows us to better organize a minimal API project, but still requires that we explicitly add a line to Program.cs for every handler we want to define. Using an interface and a bit of reflection, we can create a straightforward and reusable solution to simplify our work with minimal APIs.
刚才展示的方法使我们能够更好地组织一个最小 API 项目，但仍然需要我们为每个要定义的处理程序在 Program.cs 中显式添加一行代码。使用接口和一些反射，我们可以创建一个简单且可重用的解决方案，以简化我们使用最小 API 的工作。

So, let’s start by defining the following interface:
因此,让我们从定义以下接口开始:

public interface IEndpointRouteHandler
{
   public void MapEndpoints(IEndpointRouteBuilder app);
}

As the name implies, we need to make all our handlers (as with PeopleHandler previously) implement it:
顾名思义,我们需要让所有的处理程序(就像之前的 PeopleHandler 一样)实现它:

public class PeopleHandler : IEndpointRouteHandler
{
    public void MapEndpoints(IEndpointRouteBuilder app)
    {
        // ...
    }

    // ...
}

Note : The MapEndpoints method isn’t static anymore, because now it is the implementation of the IEndpointRouteHandler interface.
注意 : MapEndpoints 方法不再是静态的,因为它现在是 IEndpointRouteHandler 接口的实现。

Now we need a new extension method that, using reflection, scans an assembly for all the classes that implement this interface and automatically calls their MapEndpoints methods:
现在,我们需要一个新的扩展方法,该方法使用反射扫描程序集中实现此接口的所有类,并自动调用其 MapEndpoints 方法:

public static class IEndpointRouteBuilderExtensions
{
    public static void MapEndpoints(this IEndpointRouteBuilder app, Assembly assembly)
    {
        var endpointRouteHandlerInterfaceType = typeof(IEndpointRouteHandler);

        var endpointRouteHandlerTypes = assembly.GetTypes().Where(t =>
            t.IsClass && !t.IsAbstract && !t.IsGenericType
            && t.GetConstructor(Type.EmptyTypes) != null
            && endpointRouteHandlerInterfaceType.IsAssignableFrom(t));

        foreach (var endpointRouteHandlerType in endpointRouteHandlerTypes)
        {
            var instantiatedType = (IEndpointRouteHandler)
                Activator.CreateInstance(endpointRouteHandlerType)!;
            instantiatedType.MapEndpoints(app);
        }
    }
}

Tip : If you want to go into further detail about reflection and how it works in .NET, you can start by browsing the following page: https://docs.microsoft.com/dotnet/csharp/programming-guide/concepts/reflection.
提示 : 如果您想更详细地了解反射及其在 .NET 中的工作原理,可以先浏览以下页面:https://docs.microsoft.com/dotnet/csharp/programming-guide/concepts/reflection

With all these pieces in place, the last thing to do is to call the extension method in the Program.cs file, before the Run() method:
完成所有这些部分后,最后要做的是在 Run() 方法之前调用 Program.cs 文件中的扩展方法:

app.MapEndpoints(Assembly.GetExecutingAssembly());
app.Run();

In this way, when we add new handlers, we should only need to create a new class that implements the IEndpointRouteHandler interface. No other changes will be required in Program.cs to add the new endpoints to the routing engine.
这样,当我们添加新的处理程序时,我们只需要创建一个实现 IEndpointRouteHandler 接口的新类。无需在 Program.cs 中进行其他更改,即可将新终端节点添加到路由引擎。
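For example, adding a hypothetical ProductsHandler is now just a matter of creating the class; the reflection-based extension method will pick it up automatically (the route and payload here are placeholders):
例如,添加一个假设的 ProductsHandler 现在只需创建该类即可;基于反射的扩展方法会自动发现它(此处的路由和返回内容只是占位示例):

```csharp
public class ProductsHandler : IEndpointRouteHandler
{
    public void MapEndpoints(IEndpointRouteBuilder app)
    {
        // Registered automatically by app.MapEndpoints(...); no Program.cs change needed.
        app.MapGet("/api/products", () => Results.Ok(Array.Empty<string>()));
    }
}
```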

Writing route handlers in external files and thinking about a way to automate endpoint registrations so that Program.cs won’t grow for each feature addition is the right way to architect a minimal API project.
在外部文件中编写路由处理程序,并设计一种自动注册终端节点的方法,使 Program.cs 不会随着每个功能的添加而增长,这是构建最小 API 项目的正确方式。

Summary
总结

ASP.NET Core minimal APIs represent a new way of writing HTTP APIs in the .NET world. In this chapter, we covered all the pillars that we need to start developing minimal APIs, how to effectively approach them, and the best practices to take into consideration when deciding to follow this architecture.
ASP.NET Core 最小 API 代表了在 .NET 环境中编写 HTTP API 的一种新方法。在本章中,我们介绍了开始开发最小 API 所需的所有支柱、如何有效地处理它们,以及在决定遵循此架构时要考虑的最佳实践。

In the next chapter, we’ll focus on some advanced concepts such as documenting APIs with Swagger, defining a correct error handling system, and integrating a minimal API with a single-page application.
在下一章中,我们将重点介绍一些高级概念,例如使用 Swagger 记录 API、定义正确的错误处理系统以及将最小 API 与单页应用程序集成。

3 Working with Minimal APIs

使用最小 API

In this chapter, we will try to apply some advanced development techniques available in earlier versions of .NET. We will touch on four common topics that are disjointed from each other.
在本章中,我们将尝试应用早期版本的 .NET 中提供的一些高级开发技术。我们将讨论四个彼此脱节的常见主题。

We’ll cover productivity topics and best practices for frontend interfacing and configuration management.
我们将介绍前端接口和配置管理的生产力主题和最佳实践。

Every developer, sooner or later, will encounter the issues that we describe in this chapter. A programmer will have to write documentation for APIs, will have to make the API talk to a JavaScript frontend, will have to handle errors and try to fix them, and will have to configure the application according to parameters.
每个开发人员迟早都会遇到我们在本章中描述的问题。程序员必须为 API 编写文档,必须使 API 与 JavaScript 前端通信,必须处理错误并尝试修复它们,并且必须根据参数配置应用程序。

The themes we will touch on in this chapter are as follows:
我们将在本章中讨论的主题如下:

• Exploring Swagger
• Supporting CORS
• Working with global API settings
• Error handling

Technical requirements
技术要求

As reported in the previous chapters, it will be necessary to have the .NET 6 development framework available; you will also need to use .NET tools to run an in-memory web server.
如前几章所述,有必要提供 .NET 6 开发框架;您还需要使用 .NET 工具来运行内存中的 Web 服务器。

To validate the functionality of cross-origin resource sharing (CORS), we should exploit a frontend application residing on a different HTTP address from the one where we will host the API.
为了验证跨域资源共享 (CORS) 的功能,我们应该利用驻留在与我们将托管 API 的 HTTP 地址不同的 HTTP 地址上的前端应用程序。

To test the CORS example that we will propose within the chapter, we will take advantage of a web server in memory, which will allow us to host a simple static HTML page.
为了测试我们将在本章中提出的 CORS 示例,我们将利用内存中的 Web 服务器,这将允许我们托管一个简单的静态 HTML 页面。

To host the web page (HTML and JavaScript), we will therefore use LiveReloadServer, which you can install as a .NET tool with the following command:
因此,为了托管网页(HTML 和 JavaScript),我们将使用 LiveReloadServer,您可以使用以下命令将其作为 .NET 工具安装:

dotnet tool install -g LiveReloadServer

All the code samples in this chapter can be found in the GitHub repository for this book at https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter03.
本章中的所有代码示例都可以在本书的 GitHub 存储库中找到,网址为 https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter03

Exploring Swagger
探索 Swagger

Swagger has entered the life of .NET developers in a big way; it’s been present on the project shelves for several versions of Visual Studio.
Swagger 已经在很大程度上进入了 .NET 开发人员的生活;它已出现在多个版本的 Visual Studio 的项目架上。

Swagger is a tool based on the OpenAPI specification and allows you to document APIs with a web application. According to the official documentation available at https://oai.github.io/Documentation/introduction.xhtml:
Swagger 是基于 OpenAPI 规范的工具,允许您使用 Web 应用程序记录 API。根据 https://oai.github.io/Documentation/introduction.xhtml 上提供的官方文档:

“The OpenAPI Specification allows the description of a remote API accessible through HTTP or HTTP-like protocols.

An API defines the allowed interactions between two pieces of software, just like a user interface defines the ways in which a user can interact with a program.
“OpenAPI 规范允许描述可通过 HTTP 或类似 HTTP 的协议访问的远程 API。API 定义两个软件之间允许的交互,就像用户界面定义用户与程序交互的方式一样。

An API is composed of the list of possible methods to call (requests to make), their parameters, return values and any data format they require (among other things). This is equivalent to how a user’s interactions with a mobile phone app are limited to the buttons, sliders and text boxes in the app’s user interface.”
API 由可能调用的方法列表(发出的请求)、它们的参数、返回值和它们需要的任何数据格式(以及其他内容)组成。这相当于用户与手机应用程序的交互仅限于应用程序用户界面中的按钮、滑块和文本框。”

Swagger in the Visual Studio scaffold
Visual Studio 基架中的 Swagger

We understand then that Swagger, as we know it in the .NET world, is nothing but a set of specifications defined for all applications that expose web-based APIs:
然后我们明白,正如我们在 .NET 世界中所知道的那样,Swagger 只不过是为公开基于 Web 的 API 的所有应用程序定义的一组规范:

Figure 3.1 – Visual Studio scaffold

By selecting Enable OpenAPI support, Visual Studio adds a NuGet package called Swashbuckle.AspNetCore and automatically configures it in the Program.cs file.
通过选择“启用 OpenAPI 支持”,Visual Studio 会添加一个名为 Swashbuckle.AspNetCore 的 NuGet 包,并自动在 Program.cs 文件中对其进行配置。

We show the few lines that are added with a new project. With these few pieces of information, a web application is enabled only for the development environment, which allows the developer to test the API without generating a client or using tools external to the application:
我们显示了随新项目添加的几行。有了这几条信息,Web 应用程序仅针对开发环境启用,这允许开发人员在不生成客户端或使用应用程序外部工具的情况下测试 API:

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
var app = builder.Build();
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

The graphical part generated by Swagger greatly increases productivity and allows the developer to share information with those who will interface with the application, be it a frontend application or a machine application.
Swagger 生成的图形部分大大提高了生产力,并允许开发人员与将与应用程序交互的人员共享信息,无论是前端应用程序还是机器应用程序。

Note : We remind you that enabling Swagger in a production environment is strongly discouraged because sensitive information could be publicly exposed on the web or on the network where the application resides.
注意 : 我们提醒您,强烈建议不要在生产环境中启用 Swagger,因为敏感信息可能会在 Web 或应用程序所在的网络上公开暴露。

We have seen how to introduce Swagger into our API applications; this functionality allows us to document our API, as well as allow users to generate a client to call our application. Let’s see the options we have to quickly interface an application with APIs described with OpenAPI.
我们已经了解了如何将 Swagger 引入我们的 API 应用程序;此功能允许我们记录我们的 API,并允许用户生成客户端来调用我们的应用程序。让我们看看我们必须选择哪些选项来快速将应用程序与 OpenAPI 中描述的 API 连接起来。

OpenAPI Generator
OpenAPI 生成器

With Swagger, and especially with the OpenAPI standard, you can automatically generate clients to connect to the web application. Clients can be generated for many languages but also for development tools. We know how tedious and repetitive it is to write clients to access a web API. OpenAPI Generator helps us automate code generation: it inspects the API documentation produced by Swagger and OpenAPI and automatically generates the code to interface with the API. Simple, easy, and above all, fast.
使用 Swagger,尤其是 OpenAPI 标准,您可以自动生成客户端以连接到 Web 应用程序。可以为多种语言生成客户端,也可以为开发工具生成客户端。我们知道编写访问 Web API 的客户端是多么乏味和重复。OpenAPI Generator 帮助我们自动化代码生成:它检查由 Swagger 和 OpenAPI 生成的 API 文档,并自动生成与 API 交互的代码。简单、轻松,最重要的是,快速。

The @openapitools/openapi-generator-cli npm package is a very well-known package wrapper for OpenAPI Generator, which you can find at https://openapi-generator.tech/.
@openapitools/openapi-generator-cli npm 包是 OpenAPI 生成器的一个非常知名的包包装器,您可以在 https://openapi-generator.tech/ 中找到它。

With this tool, you can generate clients for programming languages as well as load testing tools such as JMeter and K6.
使用此工具,您可以为编程语言生成客户端以及 JMeter 和 K6 等负载测试工具。
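As a hypothetical example (the npm package name is real, but the localhost URL and output folder below are placeholders for your own environment), generating a C# client from a running minimal API might look like this:
举一个假设的例子(npm 包名是真实的,但下面的 localhost URL 和输出文件夹只是您自己环境的占位符),从正在运行的最小 API 生成 C# 客户端可能如下所示:

```shell
# Install the CLI wrapper globally (requires Node.js)
npm install -g @openapitools/openapi-generator-cli

# Generate a C# client from the Swagger JSON exposed by the API
openapi-generator-cli generate \
    -i http://localhost:5000/swagger/v1/swagger.json \
    -g csharp \
    -o ./generated-client
```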

It is not necessary to install the tool on your machine, but if the URL of the application is accessible from the machine, you can use a Docker image, as described by the following command:
无需在计算机上安装该工具,但如果可以从计算机访问应用程序的 URL,则可以使用 Docker 映像,如以下命令所述:

docker run --rm \
    -v ${PWD}:/local openapitools/openapi-generator-cli generate \
    -i /local/petstore.yaml \
    -g go \
    -o /local/out/go

The command allows you to generate a Go client using the OpenAPI definition found in the petstore.yaml file that is mounted on the Docker volume.
该命令允许您使用挂载在 Docker 卷上的 petstore.yaml 文件中找到的 OpenAPI 定义生成 Go 客户端。

Now, let’s go into detail to understand how you can leverage Swagger in .NET 6 projects and with minimal APIs.
现在,让我们详细了解如何在 .NET 6 项目和最小 API 中利用 Swagger。

Swagger in minimal APIs
在最小 API 中使用 Swagger

In ASP.NET Web API, as in the following code excerpt, we see a method documented with C# language annotations with the triple slash (///).
在 ASP.NET Web API 中,如以下代码摘录所示,我们看到一个使用带有三斜杠 (///) 的 C# 语言注释记录的方法。

The documentation section is leveraged to add more information to the API description. In addition, the ProducesResponseType annotations help Swagger identify the possible codes that the client must handle as a result of the method call:
利用 documentation 部分向 API 描述添加更多信息。此外,ProducesResponseType 注释可帮助 Swagger 识别客户端在方法调用后必须处理的可能代码:

/// <summary>
/// Creates a Contact.
/// </summary>
/// <param name="contact"></param>
/// <returns>A newly created Contact</returns>
/// <response code="201">Returns the newly created contact</response>
/// <response code="400">If the contact is null</response>
[HttpPost]
[ProducesResponseType(StatusCodes.Status201Created)]
[ProducesResponseType(StatusCodes.Status400BadRequest)]
public async Task<IActionResult> Create(Contact contactItem)
{
     _context.Contacts.Add(contactItem);
     await _context.SaveChangesAsync();
     return CreatedAtAction(nameof(Get), new { id = 
     contactItem.Id }, contactItem);
}

Swagger, in addition to the annotations on single methods, is also instructed by the documentation of the language to give further information to those who will then have to use the API application. A description of the methods of the parameters is always welcome by those who will have to interface; unfortunately, it is not possible to exploit this functionality in the minimal API.
除了单个方法的注释外,该语言的文档还指示 Swagger 为那些随后必须使用 API 应用程序的人提供更多信息。对参数方法的描述总是受到那些必须进行接口的人的欢迎;遗憾的是,无法在最小 API 中利用此功能。

Let’s go in order and see how to start using Swagger on a single method:
让我们按顺序来看看如何在单个方法上开始使用 Swagger:

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new()
    {
        Title = builder.Environment.ApplicationName,
        Version = "v1",
        Contact = new()
        {
            Name = "PacktAuthor",
            Email = "authors@packtpub.com",
            Url = new Uri("https://www.packtpub.com/")
        },
        Description = "PacktPub Minimal API - Swagger",
        License = new Microsoft.OpenApi.Models.OpenApiLicense(),
        TermsOfService = new("https://www.packtpub.com/")
    });
});
var app = builder.Build();
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

With this first example, we have configured Swagger and general Swagger information. We have included additional information that enriches Swagger’s UI. The only mandatory information is the title, while the version, contact, description, license, and terms of service are optional.
在第一个示例中,我们配置了 Swagger 和常规 Swagger 信息。我们添加了丰富 Swagger UI 的其他信息。唯一的必填信息是标题,而版本、联系人、描述、许可证和服务条款是可选的。

The UseSwaggerUI() method automatically configures where to put the UI and the JSON file describing the API with the OpenAPI format.
UseSwaggerUI() 方法自动配置放置 UI 和描述 OpenAPI 格式 API 的 JSON 文件的位置。

Here is the result at the graphical level:
这是图形级别的结果:

Figure 3.2 – The Swagger UI

We can immediately see that the OpenAPI contract information has been placed in the /swagger/v1/swagger.json path.
我们可以立即看到 OpenAPI 合约信息已经放在 /swagger/v1/swagger.json 路径下。

The contact information is populated, but no operations are reported as we haven’t entered any yet. Should the API have versioning? In the top-right section, we can select the available operations for each version.
联系信息已填充,但由于我们尚未输入任何操作,因此未报告任何操作。API 应该有版本控制吗?在右上角,我们可以为每个版本选择可用的操作。

We can customize the Swagger URL and insert the documentation on a new path; the important thing is to redefine SwaggerEndpoint, as follows:
我们可以自定义 Swagger URL 并将文档插入到新路径上;重要的是重新定义 SwaggerEndpoint,如下所示:

app.UseSwaggerUI(c => c.SwaggerEndpoint("/swagger/v1/swagger.json", $"{builder.Environment.ApplicationName} v1"));
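If you also want to serve the UI itself from a custom path — an assumption beyond what is shown here — Swashbuckle's RoutePrefix option can be set in the same callback:
如果您还想从自定义路径提供 UI 本身(这是超出此处所示内容的一个假设),可以在同一个回调中设置 Swashbuckle 的 RoutePrefix 选项:

```csharp
app.UseSwaggerUI(c =>
{
    c.SwaggerEndpoint("/swagger/v1/swagger.json",
        $"{builder.Environment.ApplicationName} v1");
    c.RoutePrefix = "docs"; // UI served at /docs instead of the default /swagger
});
```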

Let’s now go on to add the endpoints that describe the business logic.
现在,我们继续添加描述业务逻辑的终端节点。

It is very important to define RouteHandlerBuilder because it allows us to describe all the properties of the endpoint that we have written in code.
定义 RouteHandlerBuilder 非常重要,因为它允许我们描述我们在代码中编写的端点的所有属性。

The UI of Swagger must be enriched as much as possible; we must describe at best what the minimal APIs allow us to specify. Unfortunately, not all the functionalities are available, as in ASP.NET Web API.
必须尽可能丰富 Swagger 的 UI;我们应尽力描述最小 API 允许我们指定的内容。遗憾的是,并非所有 ASP.NET Web API 中的功能在这里都可用。

Versioning in minimal APIs
在最小 API 中进行版本控制

Versioning in minimal APIs is not handled in the framework functionality; as a result, even Swagger cannot handle UI-side API versioning. So, we observe that when we go to the Select a definition section shown in Figure 3.2, only one entry for the current version of the API is visible.
最小 API 中的版本控制不在框架功能中处理;因此,即使是 Swagger 也无法处理 UI 端 API 版本控制。因此,我们观察到,当我们转到图 3.2 所示的 Select a definition 部分时,只有当前版本 API 的一个条目可见。

Swagger features
Swagger 功能

We just realized that not all features are available in Swagger; let’s now explore what is available instead. To describe the possible output values of an endpoint, we can call functions that can be called after the handler, such as the Produces or WithTags functions, which we are now going to explore.
我们刚刚意识到并非所有功能在 Swagger 中都可用;现在让我们来探索一下可用的内容。为了描述终端节点的可能输出值,我们可以在处理程序之后链式调用诸如 Produces 或 WithTags 之类的函数,下面我们将对此进行探讨。

The Produces function decorates the endpoint with all the possible responses that the client should be able to manage. We can add the name of the operation ID; this information will not appear in the Swagger screen, but it will be the name with which the client will create the method to call the endpoint. OperationId is the unique name of the operation made available by the handler.
Produces 函数使用客户端应该能够管理的所有可能的响应来装饰终端节点。我们可以添加操作 ID 的名称;此信息不会显示在 Swagger 屏幕中,但它将是客户端创建调用终结点的方法时使用的名称。OperationId 是处理程序所提供操作的唯一名称。

To exclude an endpoint from the API description, you need to call ExcludeFromDescription(). This function is rarely used, but it is very useful in cases where you don’t want to expose endpoints to programmers who are developing the frontend because that particular endpoint is used by a machine application.
要从 API 描述中排除终端节点,您需要调用 ExcludeFromDescription()。此函数很少使用,但在您不想将端点公开给正在开发前端的程序员的情况下,它非常有用,因为该特定端点由机器应用程序使用。

Finally, we can add and tag the various endpoints and segment them for better client management:
最后,我们可以添加和标记各种终端节点,并对其进行细分以更好地管理客户端:

app.MapGet("/sampleresponse", () =>
    {
        return Results.Ok(new ResponseData("My Response"));
    })
    .Produces<ResponseData>(StatusCodes.Status200OK)
    .WithTags("Sample")
    .WithName("SampleResponseOperation"); // operation ID for OpenAPI

app.MapGet("/sampleresponseskipped", () =>
    {
        return Results.Ok(new ResponseData("My Response Skipped"));
    })
    .ExcludeFromDescription();

app.MapGet("/{id}", (int id) => Results.Ok(id));

app.MapPost("/", (ResponseData data) => Results.Ok(data))
   .Accepts<ResponseData>(MediaTypeNames.Application.Json);

This is the graphical result of Swagger; as I anticipated earlier, the tags and operation IDs are not shown by the web client:
这是 Swagger 的图形结果;正如我之前所预料的,Web 客户端不会显示标签和操作 ID:

Figure 3.3 – Swagger UI methods
图 3.3 – Swagger UI 方法

The endpoint description, on the other hand, is very useful to include. It’s very easy to implement: just insert C# comments in the method (just insert three slashes, ///, in the method). Minimal APIs don’t have methods like we are used to in web-based controllers, so they are not natively supported.
另一方面,终端节点描述是非常值得包含的内容。它本来很容易实现:只需在方法上插入 C# 文档注释(即三个斜杠 ///)。但 Minimal API 没有我们在基于 Web 的控制器中习惯的那种方法,因此本身并不支持此功能。

Swagger isn’t just the GUI we’re used to seeing. Above all, Swagger is the JSON file that supports the OpenAPI specification, of which the latest version is 3.1.0.
Swagger 不仅仅是我们习惯看到的 GUI。首先,Swagger 是支持 OpenAPI 规范的 JSON 文件,最新版本为 3.1.0。

In the following snippet, we show the section containing the description of the first endpoint that we inserted in the API. We can infer both the tag and the operation ID; this information will be used by those who will interface with the API:
在以下代码段中,我们显示了包含我们在 API 中插入的第一个终端节点的描述的部分。我们可以从中推断出标签和操作 ID;此信息将由与 API 交互的人员使用:

"paths": {
         "/sampleresponse": {
              "get": {
                   "tags": [
                        "Sample"
                   ],
                   "operationId": "SampleResponseOperation",
                   "responses": {
                        "200": {
                             "description": "Success",
                             "content": {
                                  "application/json": {
                                       "schema": {
                                            "$ref": "#/components/schemas/ResponseData"
                                       }
                                  }
                             }
                        }
                   }
              }
         },

In this section, we have seen how to configure Swagger and what is currently not yet supported.
在本节中,我们了解了如何配置 Swagger 以及当前尚不支持的内容。

In the following chapters, we will also see how to configure OpenAPI, both for the OpenID Connect standard and authentication via the API key.
在接下来的章节中,我们还将了解如何配置 OpenAPI,包括 OpenID Connect 标准和通过 API 密钥进行身份验证。

In the preceding code snippet of the Swagger UI, Swagger makes the schematics of the objects involved available, both inbound to the various endpoints and outbound from them.
在 Swagger UI 的前面的代码片段中,Swagger 使所涉及对象的示意图可用,包括入站到各个端点和从它们出站的示意图。

Figure 3.4 – Input and output data schema
图 3.4 – 输入和输出数据架构

We will learn how to deal with these objects and how to validate and define them in Chapter 6, Exploring Validation and Mapping.
我们将在第 6 章 探索验证和映射 中学习如何处理这些对象以及如何验证和定义它们。

Swagger OperationFilter
Swagger OperationFilter

The operation filter allows you to add behavior to all operations shown by Swagger. In the following example, we’ll show you how to add an HTTP header to a particular call, filtering it by OperationId.
操作筛选器允许您向 Swagger 显示的所有操作添加行为。在以下示例中,我们将向您展示如何向特定调用添加 HTTP 标头,并按 OperationId 对其进行筛选。

When you go to define an operation filter, you can also set filters based on routes, tags, and operation IDs:
在定义操作筛选器时,您还可以根据路由、标签和操作 ID 设置筛选条件:

public class CorrelationIdOperationFilter : IOperationFilter
{
    private readonly IWebHostEnvironment environment;

    public CorrelationIdOperationFilter(IWebHostEnvironment environment)
    {
        this.environment = environment;
    }

    /// <summary>
    /// Apply a header parameter in Swagger.
    /// We add a default value to the parameter for the
    /// development environment.
    /// </summary>
    /// <param name="operation"></param>
    /// <param name="context"></param>
    public void Apply(OpenApiOperation operation, OperationFilterContext context)
    {
        if (operation.Parameters == null)
        {
            operation.Parameters = new List<OpenApiParameter>();
        }

        if (operation.OperationId == "SampleResponseOperation")
        {
            operation.Parameters.Add(new OpenApiParameter
            {
                Name = "x-correlation-id",
                In = ParameterLocation.Header,
                Required = false,
                // OpenAPI schema types are lowercase ("string", not "String")
                Schema = new OpenApiSchema { Type = "string", Default = new OpenApiString("42") }
            });
        }
    }
}

To define an operation filter, the IOperationFilter interface must be implemented.
要定义操作筛选器,必须实现 IOperationFilter 接口。

In the constructor, you can use any interfaces or objects that have been previously registered in the dependency injection engine.
在构造函数中,您可以使用之前在依赖注入引擎中注册的所有接口或对象。

The filter then consists of a single method, called Apply, which receives two objects:
然后,筛选器由一个名为 Apply 的方法组成,该方法接收两个对象:

• OpenApiOperation: An operation where we can add parameters or check the operation ID of the current call
• OperationFilterContext: The filter context that allows you to read ApiDescription, where you can find the URL of the current endpoint

Finally, to enable the operation filter in Swagger, we will need to register it inside the SwaggerGen method.
最后,要在 Swagger 中启用操作筛选器,我们需要在 SwaggerGen 方法中注册它。

In this method, we should then add the filter, as follows:
在此方法中,我们应该添加过滤器,如下所示:

builder.Services.AddSwaggerGen(c =>
{
    // ... removed for brevity
    c.OperationFilter<CorrelationIdOperationFilter>();
});

Here is the result at the UI level; in the endpoint and only for a particular operation ID, we would have a new mandatory header with a default parameter that, in development, will not have to be inserted:
下面是 UI 级别的结果;在终端节点中,并且仅针对特定的操作 ID,我们会有一个带有默认参数的新的必需标头,在开发环境中不必插入该参数:

Figure 3.5 – API key section
图 3.5 – API 密钥部分

This case study helps us a lot when we have an API key that we need to set up and we don’t want to insert it on every single call.
当我们有一个需要设置的 API 密钥并且我们不想在每次调用时都插入它时,这个案例研究对我们有很大帮助。

Operation filter in production
生产中的操作筛选器

Since Swagger should not be enabled in the production environment, the filter and its default value will not create application security problems.
由于不应在生产环境中启用 Swagger,因此过滤器及其默认值不会造成应用程序安全问题。

We recommend that you disable Swagger in the production environment.
建议您在生产环境中关闭 Swagger。

In this section, we figured out how to enable a UI tool that describes the API and allows us to test it. In the next section, we will see how to enable the call between single-page applications (SPAs) and the backend via CORS.
在本节中,我们弄清楚了如何启用描述 API 并允许我们测试它的 UI 工具。在下一节中,我们将了解如何通过 CORS 启用单页应用程序 (SPA) 与后端之间的调用。

Enabling CORS
启用 CORS

CORS is a security mechanism whereby an HTTP/S request is blocked if it arrives from a different domain than the one where the application is hosted. More information can be found in the Microsoft documentation or on the Mozilla site for developers.
CORS 是一种安全机制,如果 HTTP/S 请求来自与托管应用程序的域不同的域,则 HTTP/S 请求将被阻止。有关详细信息,请参阅 Microsoft 文档或 Mozilla 开发人员网站。

A browser prevents a web page from making requests to a domain other than the domain that serves that web page. A web page, SPA, or server-side web page can make HTTP requests to several backend APIs that are hosted in different origins.
浏览器会阻止网页向提供该网页的域以外的域发出请求。网页、SPA 或服务器端网页可以向托管在不同源中的多个后端 API 发出 HTTP 请求。

This restriction is called the same-origin policy. The same-origin policy prevents a malicious site from reading data from another site. Browsers don’t block HTTP requests but do block response data.
此限制称为同源策略。同源策略可防止恶意站点从其他站点读取数据。浏览器不会阻止 HTTP 请求,但会阻止响应数据。

We, therefore, understand that the CORS qualification, as it relates to safety, must be evaluated with caution.
因此,我们理解,由于事关安全,必须谨慎评估是否启用 CORS。

The most common scenario is that of SPAs that are released on web servers with different web addresses than the web server hosting the minimal API:
最常见的情况是在 Web 服务器上发布的 SPA,这些 SPA 的 Web 地址与托管最小 API 的 Web 服务器不同:

Figure 3.6 – SPA and minimal API
图 3.6 – SPA 和最小 API

A similar scenario is that of microservices, which need to talk to each other. Each microservice will reside at a particular web address that will be different from the others.
类似的场景是微服务,它们需要相互通信。每个微服务将驻留在一个与其他微服务不同的特定 Web 地址上。

Figure 3.7 – Microservices and minimal APIs
图 3.7 – 微服务和最小 API

In all these cases, therefore, a CORS problem is encountered.
因此,在所有这些情况下,都会遇到 CORS 问题。

We now understand the cases in which a CORS request can occur. Now let’s see what the correct HTTP request flow is and how the browser handles the request.
现在,我们了解了可能发生 CORS 请求的情况。现在让我们看看正确的 HTTP 请求流是什么,以及浏览器如何处理请求。

CORS flow from an HTTP request
来自 HTTP 请求的 CORS 流

What happens when a call leaves the browser for a different address other than the one where the frontend is hosted?
当调用离开浏览器前往托管前端的地址以外的其他地址时,会发生什么情况?

The HTTP call is executed and it goes all the way to the backend code, which executes correctly.
HTTP 调用被执行,并一直进入后端代码,后端代码正确执行。

The response, with the correct data inside, is blocked by the browser. That’s why when we execute a call with Postman, Fiddler, or any HTTP client, the response reaches us correctly.
包含正确数据的响应被浏览器阻止。这就是为什么当我们使用 Postman、Fiddler 或任何 HTTP 客户端执行调用时,响应会正确到达我们。

Figure 3.8 – CORS flow
图 3.8 – CORS 流程

In the following figure, we can see that the browser makes the first call with the OPTIONS method, to which the backend responds correctly with a 204 status code:
在下图中,我们可以看到浏览器使用 OPTIONS 方法进行了第一次调用,后端以 204 状态码正确响应:

Figure 3.9 – First request for the CORS call (204 No Content result)
图 3.9 – CORS 调用的第一个请求(204 No Content 结果)

In the second call that the browser makes, an error occurs; the strict-origin-when-cross-origin value is shown in Referrer Policy, which indicates the refusal by the browser to accept data from the backend:
在浏览器进行的第二次调用中,会发生错误;strict-origin-when-cross-origin 值显示在 Referrer Policy 中,该值表示浏览器拒绝接受来自后端的数据:

Figure 3.10 – Second request for the CORS call (blocked by the browser)
图 3.10 – CORS 调用的第二个请求(被浏览器阻止)

When CORS is enabled, in the response to the OPTIONS method call, three headers are inserted with the characteristics that the backend is willing to respect:
启用 CORS 后,在对 OPTIONS 方法调用的响应中,将插入三个标头,这些标头具有后端愿意遵循的特征:

Figure 3.11 – Request for CORS call (with CORS enabled)
图 3.11 – 请求 CORS 调用(启用 CORS)

In this case, we can see that three headers are added that define Access-Control-Allow-Headers, Access-Control-Allow-Methods, and Access-Control-Allow-Origin.
在本例中,我们可以看到添加了三个标头,分别定义 Access-Control-Allow-Headers、Access-Control-Allow-Methods 和 Access-Control-Allow-Origin。

The browser with this information can accept or block the response to this API.
具有此信息的浏览器可以接受或阻止对此 API 的响应。

Setting CORS with a policy
使用策略设置 CORS

Many configurations are possible within a .NET 6 application for activating CORS. We can define authorization policies in which the four available settings can be configured. CORS can also be activated by adding extension methods or annotations.
在 .NET 6 应用程序中可以使用许多配置来激活 CORS。我们可以定义授权策略,在其中可以配置四个可用设置。还可以通过添加扩展方法或注释来激活 CORS。

But let us proceed in order.
但是,让我们按顺序进行吧。

The CorsPolicyBuilder class allows us to define what is allowed or not allowed within the CORS acceptance policy.
CorsPolicyBuilder 类允许我们定义 CORS 接受策略中允许或不允许的内容。

We have, therefore, the possibility to set different methods, for example:
因此,我们可以设置不同的方法,例如:

• AllowAnyHeader
• AllowAnyMethod
• AllowAnyOrigin
• AllowCredentials

While the first three methods are descriptive and allow us to enable any settings relating to the header, method, and origin of the HTTP call, respectively, AllowCredentials allows us to include the cookie with the authentication credentials.
虽然前三种方法是描述性的,并允许我们分别启用与 HTTP 调用的标头、方法和来源相关的任何设置,但 AllowCredentials 允许我们将 Cookie 与身份验证凭据一起包含。

CORS policy recommendations
CORS 策略建议

We recommend that you don’t use the AllowAny methods but instead filter out the necessary information to allow for greater security. As a best practice, when enabling CORS, we recommend the use of these methods:
我们建议您不要使用 AllowAny 方法,而是仅允许必要的信息,以提高安全性。作为最佳实践,在启用 CORS 时,我们建议使用以下方法:

• WithExposedHeaders
• WithHeaders
• WithOrigins
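As a sketch of these recommendations (the origin and header names below are placeholders for your real frontend), a restrictive policy could be built like this:
作为这些建议的示意(下面的来源和标头名称只是您真实前端的占位符),可以像这样构建一个限制性策略:

```csharp
var corsPolicy = new CorsPolicyBuilder()
    .WithOrigins("http://localhost:5200")             // only the known frontend origin
    .WithHeaders("Content-Type", "x-correlation-id")  // only the request headers actually used
    .WithExposedHeaders("x-total-count")              // response headers the browser may read
    .Build();
```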

To simulate a scenario for CORS, we created a simple frontend application with three different buttons. Each button allows you to test one of the possible configurations of CORS within the minimal API. We will explain these configurations in a few lines.
为了模拟 CORS 的场景,我们创建了一个具有三个不同按钮的简单前端应用程序。每个按钮都允许您在最小 API 中测试 CORS 的一种可能配置。我们将用几行来解释这些配置。

To enable the CORS scenario, we have created a single-page application that can be launched on a web server in memory. We have used LiveReloadServer, a tool that can be installed with the .NET CLI. We talked about it at the start of the chapter and now it’s time to use it.
为了启用 CORS 方案,我们创建了一个单页应用程序,该应用程序可以在内存中的 Web 服务器上启动。我们使用了 LiveReloadServer,这是一个可以使用 .NET CLI 安装的工具。我们在本章的开头讨论过它,现在是时候使用它了。

After installing it, you need to launch the SPA with the following command:
安装后,您需要使用以下命令启动 SPA:

livereloadserver "{BasePath}\Chapter03\2-CorsSample\Frontend"

Here, BasePath is the folder where you are going to download the examples available on GitHub.
此处,BasePath 是您要下载 GitHub 上可用示例的文件夹。

Then you must start the application backend, either through Visual Studio or Visual Studio Code or through the .NET CLI with the following command:
然后,您必须使用以下命令通过 Visual Studio 或 Visual Studio Code 或通过 .NET CLI 启动应用程序后端:

dotnet run --project .\Backend\CorsSample.csproj

We’ve figured out how to start an example that highlights the CORS problem; now we need to configure the server to accept the request and inform the browser that it is aware that the request is coming from a different source.
我们已经想出了如何开始一个突出 CORS 问题的示例;现在我们需要配置服务器以接受请求并通知浏览器它知道请求来自不同的来源。

Next, we will talk about policy configuration. We will understand the characteristics of the default policy as well as how to create a custom one.
接下来,我们将讨论策略配置。我们将了解默认策略的特征以及如何创建自定义策略。

Configuring a default policy
配置默认策略

To configure a single CORS enabling policy, you need to define the behavior in the Program.cs file and add the desired configurations. Let’s implement a policy and define it as Default.
要配置单个 CORS 启用策略,您需要在 Program.cs 文件中定义行为并添加所需的配置。让我们实现一个策略并将其定义为 Default。

Then, to enable the policy for the whole application, simply add app.UseCors(); before defining the handlers:
然后,要为整个应用程序启用该策略,只需在定义处理程序之前添加 app.UseCors();:

var builder = WebApplication.CreateBuilder(args);
var corsPolicy = new CorsPolicyBuilder("http://localhost:5200")
    .AllowAnyHeader()
    .AllowAnyMethod()
    .Build();
builder.Services.AddCors(c => c.AddDefaultPolicy(corsPolicy));
var app = builder.Build();
app.UseCors();
app.MapGet("/api/cors", () =>
{
         return Results.Ok(new { CorsResultJson = true });
});
app.Run();

Configuring custom policies
配置自定义策略

We can create several policies within an application; each policy may have its own configuration and each policy may be associated with one or more endpoints.
我们可以在一个应用程序中创建多个策略;每个策略可能有自己的配置,并且每个策略可能与一个或多个终端节点关联。

In the case of microservices, having several policies helps to precisely segment access from a different source.
对于微服务,拥有多个策略有助于精确分段来自不同来源的访问。

In order to configure a new policy, it is necessary to add it and give it a name; this name will give access to the policy and allow it to be associated with the endpoint.
要配置新策略,必须添加该策略并为其命名;此名称将授予对策略的访问权限,并允许它与终端节点关联。

The customized policy, as in the previous example, is assigned to the entire application:
如前面的示例所示,自定义策略被分配给整个应用程序:

var builder = WebApplication.CreateBuilder(args);
var corsPolicy = new CorsPolicyBuilder("http://localhost:5200")
    .AllowAnyHeader()
    .AllowAnyMethod()
    .Build();
builder.Services.AddCors(options => options.AddPolicy("MyCustomPolicy", corsPolicy));
var app = builder.Build();
app.UseCors("MyCustomPolicy");
app.MapGet("/api/cors", () =>
{
    return Results.Ok(new { CorsResultJson = true });
});
app.Run();

We next look at how to apply a single policy to a specific endpoint; to this end, two methods are available. The first is via an extension method to the IEndpointConventionBuilder interface. The second method is to add the EnableCors annotation followed by the name of the policy to be enabled for that method.
接下来,我们将了解如何将单个策略应用于特定终端节点;为此,有两种方法可供选择。第一种是通过 IEndpointConventionBuilder 接口的扩展方法。第二种方法是添加 EnableCors 注释,后跟要为该方法启用的策略的名称。

Setting CORS with extensions
使用扩展设置 CORS

It is necessary to use the RequireCors method followed by the name of the policy.
必须使用 RequireCors 方法,后跟策略的名称。

With this method, it is then possible to enable one or more policies for an endpoint:
使用此方法,可以为终端节点启用一个或多个策略:

app.MapGet("/api/cors/extension", () =>
{
    return Results.Ok(new { CorsResultJson = true });
})
.RequireCors("MyCustomPolicy");

Setting CORS with an annotation
使用注释设置 CORS

The second method is to add the EnableCors annotation followed by the name of the policy to be enabled for that method:
第二种方法是添加 EnableCors 注释,后跟要为该方法启用的策略的名称:

app.MapGet("/api/cors/annotation", [EnableCors("MyCustomPolicy")] () =>
{
   return Results.Ok(new { CorsResultJson = true });
});

Regarding controller programming, it soon becomes apparent that it is not possible to apply a policy to all methods of a particular controller. It is also not possible to group controllers and enable the policy. It is therefore necessary to apply the individual policy to the method or the entire application.
关于控制器编程,很快就会发现不可能将策略应用于特定控制器的所有方法。也无法对控制器进行分组并启用策略。因此,有必要将单个策略应用于方法或整个应用程序。

In this section, we found out how to configure browser protection for applications hosted on different domains.
在本节中,我们了解了如何为托管在不同域上的应用程序配置浏览器保护。

In the next section, we will start configuring our applications.
在下一节中,我们将开始配置我们的应用程序。

Working with global API settings
使用全局 API 设置

We have just defined how you can load data with the options pattern within an ASP.NET application. In this section, we want to describe how you can configure an application and take advantage of everything we saw in the previous section.
我们刚刚定义了如何在 ASP.NET 应用程序中使用 options 模式加载数据。在本节中,我们想描述如何配置应用程序并利用我们在上一节中看到的所有内容。

With the birth of .NET Core, the standard has moved from the Web.config file to the appsettings.json file. The configurations can also be read from other sources, such as other file formats like the old .ini file or a positional file.
随着 .NET Core 的诞生,该标准已从 Web.config 文件移至 appsettings.json 文件。还可以从其他来源读取配置,例如其他文件格式,如旧.ini文件或位置文件。

In minimal APIs, the options pattern feature remains unchanged, but in the next few paragraphs, we will see how to reuse the interfaces or the appsettings.json file structure.
在最小 API 中,选项模式功能保持不变,但在接下来的几段中,我们将看到如何重用接口或 appsettings.json 文件结构。

Configuration in .NET 6
.NET 6 中的配置

The object provided from .NET is IConfiguration, which allows us to read some specific configurations inside the appsettings file.
从 .NET 提供的对象是 IConfiguration,它允许我们读取 appsettings 文件中的一些特定配置。

But, as described earlier, this interface does much more than just access a file for reading.
但是,如前所述,此接口的作用不仅仅是访问文件进行读取。

The following extract from the official documentation helps us understand how the interface is the generic access point that allows us to access the data inserted in various services:
以下摘录自官方文档有助于我们了解接口如何成为允许我们访问插入各种服务中的数据的通用接入点:

Configuration in ASP.NET Core is performed using one or more configuration providers. Configuration providers read configuration data from key-value pairs using a variety of configuration sources.
ASP.NET Core 中的配置是使用一个或多个配置提供程序执行的。配置提供程序使用各种配置源从键值对中读取配置数据。

The following is a list of configuration sources:
以下是配置源的列表:

• Settings files, such as appsettings.json
• Environment variables
• Azure Key Vault
• Azure App Configuration
• Command-line arguments
• Custom providers, installed or created
• Directory files
• In-memory .NET objects

(https://docs.microsoft.com/aspnet/core/fundamentals/configuration/)
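As a minimal sketch (not from the book; the file name and environment-variable prefix are our own assumptions), several of the sources listed above can be chained on the builder, with sources added later overriding earlier ones:
作为一个最小示例(并非摘自书中;文件名和环境变量前缀是我们自己的假设),可以在 builder 上链式添加上面列出的多个配置源,后添加的源会覆盖先添加的源:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Hypothetical extra sources; optional: true means a missing file is not an error.
builder.Configuration
    .AddJsonFile("customsettings.json", optional: true, reloadOnChange: true) // settings file
    .AddEnvironmentVariables(prefix: "MYAPP_")                                // environment variables
    .AddInMemoryCollection(new Dictionary<string, string>                     // in-memory .NET objects
    {
        ["MyCustomValue"] = "FromMemory"
    });
```

Note that WebApplication.CreateBuilder(args) already registers the default providers (appsettings files, environment variables, and command-line arguments), so this chain only adds to them.
请注意,WebApplication.CreateBuilder(args) 已经注册了默认提供程序(appsettings 文件、环境变量和命令行参数),因此上面的链只是在其基础上进行追加。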

The IConfiguration and IOptions interfaces, which we will see in the next chapter, are designed to read data from the various providers. These interfaces are not suitable for reading and editing the configuration file while the program is running.
我们将在下一章中看到的 IConfiguration 和 IOptions 接口旨在从各种提供程序读取数据。这些接口不适合在程序运行时读取和编辑配置文件。

The IConfiguration interface is available through the builder object, builder.Configuration, which provides all the methods needed to read a value, an object, or a connection string.
IConfiguration 接口可以通过 builder 对象(builder.Configuration)获得,它提供了读取值、对象或连接字符串所需的所有方法。

After looking at one of the most important interfaces we will use to configure the application, we want to establish a good development practice using a fundamental building block for any developer: classes. Copying the configuration into a class will allow us to conveniently consume it anywhere in the code.
在了解了我们将用于配置应用程序的最重要的接口之一之后,我们想借助每个开发人员都熟悉的基本构建块(即类)来建立良好的开发实践。将配置复制到类中,可以让我们在代码的任何位置方便地使用配置内容。

We define classes, each containing a property, that correspond to sections of the appsettings file:
我们定义与 appsettings 文件中各节相对应的类,每个类包含一个属性:

Configuration classes

public class MyCustomObject
{
    public string? CustomProperty { get; init; }
}
public class MyCustomStartupObject
{
    public string? CustomProperty { get; init; }
}

And here, we bring back the corresponding JSON of the C# class that we just saw:
在这里,我们返回我们刚刚看到的 C# 类的相应 JSON:

appsettings.json definition
appsettings.json定义

{
    "MyCustomObject": {
         "CustomProperty": "PropertyValue"
    },
    "MyCustomStartupObject": {
         "CustomProperty": "PropertyValue"
    },
    "ConnectionStrings": {
         "Default": "MyConnectionstringValueInAppsettings"
    }
}

Next, we will be performing several operations.
接下来,我们将执行几项操作。

The first operation we perform creates an instance of the startupConfig object that will be of the MyCustomStartupObject type. To populate the instance of this object, through IConfiguration, we are going to read the data from the section called MyCustomStartupObject:
我们执行的第一个操作是创建一个 MyCustomStartupObject 类型的 startupConfig 对象实例。为了填充该实例,我们将通过 IConfiguration 从名为 MyCustomStartupObject 的节中读取数据:

var startupConfig = builder.Configuration.GetSection(nameof(MyCustomStartupObject)).Get<MyCustomStartupObject>();

The newly created object can then be used in the various handlers of the minimal APIs.
然后,新创建的对象可以在最小 API 的各种处理程序中使用。

Instead, in this second operation, we use the dependency injection engine to request the instance of the IConfiguration object:
相反,在第二个操作中,我们使用依赖项注入引擎来请求 IConfiguration 对象的实例:

app.MapGet("/read/configurations", (IConfiguration configuration) =>
{
    var customObject = configuration.
    GetSection(nameof(MyCustomObject)).Get<MyCustomObject>();

With the IConfiguration object, we will retrieve the data similarly to the operation just described. We select the GetSection(nameof(MyCustomObject)) section and type the object with the Get<T>() method.
使用 IConfiguration 对象,我们将以与刚才描述的操作类似的方式检索数据。我们选择 GetSection(nameof(MyCustomObject)) 节,并使用 Get<T>() 方法将其绑定为对象。

Finally, in these last two examples, we read a single key, present at the root level of the appsettings file:
最后,在最后两个示例中,我们读取了一个键,该键位于 appsettings 文件的根级别:

MyCustomValue = configuration.GetValue<string>("MyCustomValue"),
ConnectionString = configuration.GetConnectionString("Default"),

The configuration.GetValue<T>("JsonRootKey") method extracts the value of a key and converts it into an object; this method is used to read strings or numbers from a root-level property.
configuration.GetValue<T>("JsonRootKey") 方法提取键的值并将其转换为对象;此方法用于从根级别属性中读取字符串或数字。

In the next line, we can see how you can leverage an IConfiguration method to read ConnectionString.
在下一行中,我们可以看到如何利用 IConfiguration 方法来读取 ConnectionString。

In the appsettings file, connection strings are placed in a specific section, ConnectionStrings, that allows you to name the string and read it. Multiple connection strings can be placed in this section to exploit it in different objects.
在 appsettings 文件中,连接字符串放置在特定部分 ConnectionStrings 中,该部分允许你命名和读取字符串。可以在此部分中放置多个连接字符串,以便在不同的对象中利用它。
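As a hypothetical fragment (the section name ConnectionStrings is fixed by the framework; the entry names and values here are invented for illustration), multiple named connection strings can sit side by side, each retrievable with GetConnectionString("Name"):
作为一个假设的片段(节名 ConnectionStrings 由框架固定;此处的条目名称和值是为说明而虚构的),多个命名连接字符串可以并列存放,每个都可以通过 GetConnectionString("Name") 读取:

```json
"ConnectionStrings": {
    "Default": "Server=sql1;Database=app_db;Trusted_Connection=True",
    "Reporting": "Server=sql2;Database=report_db;Trusted_Connection=True"
}
```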

In the configuration provider for Azure App Service, connection strings should be entered with a prefix that also indicates the SQL provider you are trying to use, as described in the following link: https://docs.microsoft.com/azure/app-service/configure-common#configure-connection-strings.
在 Azure 应用服务的配置提供程序中,应输入连接字符串,并带有一个前缀,该前缀也指示你尝试使用的 SQL 提供程序,如以下链接所述:https://docs.microsoft.com/azure/app-service/configure-common#configure-connection-strings

At runtime, connection strings are available as environment variables, prefixed with the following connection types:
在运行时,连接字符串可用作环境变量,前缀为以下连接类型:

• SQLServer: SQLCONNSTR_
• MySQL: MYSQLCONNSTR_
• SQLAzure: SQLAZURECONNSTR_
• PostgreSQL: POSTGRESQLCONNSTR_
• Custom: CUSTOMCONNSTR_

For completeness, we will bring back the entire code just described in order to have a better general picture of how to exploit the IConfiguration object inside the code:
为了完整起见,我们将返回刚才描述的整个代码,以便更好地了解如何在代码中利用 IConfiguration 对象:

var builder = WebApplication.CreateBuilder(args);

var startupConfig = builder.Configuration.GetSection(nameof(MyCustomStartupObject)).Get<MyCustomStartupObject>();

var app = builder.Build();

app.MapGet("/read/configurations", (IConfiguration configuration) =>
{
    var customObject = configuration.GetSection(nameof(MyCustomObject)).Get<MyCustomObject>();

    return Results.Ok(new
    {
        MyCustomValue = configuration.GetValue<string>("MyCustomValue"),
        ConnectionString = configuration.GetConnectionString("Default"),
        CustomObject = customObject,
        StartupObject = startupConfig
    });
})
.WithName("ReadConfigurations");

app.Run();

We’ve seen how to take advantage of the appsettings file with connection strings, but very often, we have many different files for each environment. Let’s see how to take advantage of one file for each environment.
我们已经了解了如何利用带有连接字符串的 appsettings 文件,但通常,每个环境都有许多不同的文件。让我们看看如何为每个环境利用一个文件。

Priority in appsettings files
appsettings 文件中的优先级

The appsettings file can be managed according to the environments in which the application is located. In this case, the practice is to place key information for that environment in the appsettings.{ENVIRONMENT}.json file.
可以根据应用程序所在的环境来管理 appsettings 文件。在这种情况下,做法是将该环境的关键信息放在 appsettings.{ENVIRONMENT}.json文件。

The root file (that is, appsettings.json) should be used for the production environment only.
根文件(即 appsettings.json)应仅用于生产环境。

For example, if we created these examples in the two files for the “Priority” key, what would we get?
例如,如果我们在两个文件中为 “Priority” 键创建这些示例,我们会得到什么?

appsettings.json

"Priority": "Root"

appsettings.Development.json

"Priority": "Dev"

If it is a Development environment, the value of the key would result in Dev, while in a Production environment, the value would result in Root.
如果是 Development 环境,则 key 的值将导致 Dev,而在 Production 环境中,该值将导致 Root。

What would happen if the environment was anything other than Production or Development? For example, if it were called Stage? In this case, having not specified an appsettings.Stage.json file, the value read would be the one in appsettings.json and therefore, Root.
如果环境既不是 Production 也不是 Development,会发生什么情况?例如,如果它被称为 Stage?在这种情况下,由于没有指定 appsettings.Stage.json 文件,读取到的值将是 appsettings.json 文件中的值,因此是 Root。

However, if we specified the appsettings.Stage.json file, the value would be read from that file.
但是,如果我们指定了 appsettings.Stage.json 文件,则会从该文件中读取该值。

Next, let’s visit the Options pattern. There are objects that the framework provides to load configuration information upon startup or when changes are made by the systems department. Let’s go over how.
接下来,让我们了解 Options 模式。框架提供了一些对象,用于在启动时或在系统部门进行更改时加载配置信息。让我们来看看如何操作。

Options pattern
选项模式

The options pattern uses classes to provide strongly typed access to groups of related settings, that is, when configuration settings are isolated by scenario into separate classes.
选项模式使用类提供对相关设置组的强类型访问,即,当配置设置按方案隔离到单独的类中时。

The options pattern will be implemented with different interfaces and different functionalities. Each interface (see the following subsection) has its own features that help us achieve certain goals.
选项模式将使用不同的接口和不同的功能实现。每个界面(请参阅以下小节)都有自己的功能,可以帮助我们实现某些目标。

But let’s start in order. We define an object for each type of interface (we will do it to better represent the examples), but the same class can be used to register more options inside the configuration file. It is important to keep the structure of the file identical:
但让我们按顺序开始。我们为每种类型的接口定义一个对象(我们将这样做以更好地表示示例),但同一个类可用于在配置文件中注册更多选项。保持文件的结构相同非常重要:

public class OptionBasic
{
    public string? Value { get; init; }
}

public class OptionSnapshot
{
    public string? Value { get; init; }
}

public class OptionMonitor
{
    public string? Value { get; init; }
}

public class OptionCustomName
{
    public string? Value { get; init; }
}

Each option is registered in the dependency injection engine via the Configure method, which also requires the registration of the T type present in the method signature. As you can see, in the registration phase, we declared the types and the section of the file where to retrieve the information, and nothing more:
每个选项都通过 Configure 方法在依赖项注入引擎中注册,该方法还需要注册方法签名中存在的 T 类型。如你所见,在注册阶段,我们声明了类型和文件部分,用于检索信息,仅此而已:

builder.Services.Configure<OptionBasic>(builder.Configuration.GetSection("OptionBasic"));
builder.Services.Configure<OptionMonitor>(builder.Configuration.GetSection("OptionMonitor"));
builder.Services.Configure<OptionSnapshot>(builder.Configuration.GetSection("OptionSnapshot"));
builder.Services.Configure<OptionCustomName>("CustomName1", builder.Configuration.GetSection("CustomName1"));
builder.Services.Configure<OptionCustomName>("CustomName2", builder.Configuration.GetSection("CustomName2"));

We have not yet defined how the object should be read, how often, and with what type of interface.
我们尚未定义应该如何读取对象、读取频率以及使用什么类型的接口。

The only thing that changes is the parameter, as seen in the last two examples of the preceding code snippet. This parameter allows you to add a name to the option type. The name is required to match the type used in the method signature. This feature is called named options.
唯一更改的是参数,如前面代码段的最后两个示例所示。此参数允许您向选项类型添加名称。该名称必须与方法签名中使用的类型匹配。此功能称为 named options。
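The named registrations above can also be resolved without a factory; as a sketch (the route name is our own, not part of the book's example), IOptionsSnapshot<TOptions>.Get(name) returns the instance bound to a given name:
上面的命名注册也可以在不使用工厂的情况下解析;作为一个示例(路由名称是我们自己取的,并非书中示例的一部分),IOptionsSnapshot<TOptions>.Get(name) 会返回绑定到给定名称的实例:

```csharp
// Sketch: resolving the two named OptionCustomName registrations by name.
app.MapGet("/read/named-options", (IOptionsSnapshot<OptionCustomName> options) =>
{
    return Results.Ok(new
    {
        Custom1 = options.Get("CustomName1"),
        Custom2 = options.Get("CustomName2")
    });
});
```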

Different option interfaces
不同的选项接口

Different interfaces can take advantage of the recordings you just defined. Some support named options and some do not:
不同的界面可以利用您刚刚定义的记录。有些支持命名选项,有些则不支持:

IOptions<TOptions>:
Is registered as a singleton and can be injected into any service lifetime
注册为单一实例,可以注入到任何服务生命周期中
Does not support the following:
不支持以下内容:
Reading of configuration data after the app has started
在应用程序启动后读取配置数据
Named options
命名选项

IOptionsSnapshot<TOptions>:
Is useful in scenarios where options should be recomputed on every request
在应在每个请求上重新计算选项的情况下非常有用
Is registered as scoped and therefore cannot be injected into a singleton service
注册为 scoped,因此不能注入到单一实例服务
Supports named options
支持命名选项

IOptionsMonitor<TOptions>:
Is used to retrieve options and manage options notifications for TOptions instances
用于检索选项和管理 TOptions 实例的选项通知
Is registered as a singleton and can be injected into any service lifetime
注册为单一实例,可以注入到任何服务生命周期中
Supports the following:
支持以下功能:
Change notifications
更改通知
Named options
命名选项
Reloadable configuration
可重新加载配置
Selective options invalidation (IOptionsMonitorCache<TOptions>)
选择性选项失效 (IOptionsMonitorCache<TOptions>)

We want to point you to the use of IOptionsFactory<TOptions>, which is responsible for creating new instances of options. It has a single Create method. The default implementation takes all registered IConfigureOptions<TOptions> and IPostConfigureOptions<TOptions> and runs all configurations first, followed by post-configuration (https://docs.microsoft.com/aspnet/core/fundamentals/configuration/options#options-interfaces).
我们想向您介绍 IOptionsFactory<TOptions> 的用法,它负责创建新的选项实例,并且只有一个 Create 方法。其默认实现会获取所有已注册的 IConfigureOptions<TOptions> 和 IPostConfigureOptions<TOptions>,先执行所有配置,再执行后配置 (https://docs.microsoft.com/aspnet/core/fundamentals/configuration/options#options-interfaces)。

The Configure method can also be followed by another method in the configuration pipeline. This method is called PostConfigure and is intended to modify the configuration each time it is configured or reread. Here is an example of how to record this behavior:
Configure 方法也可以后跟配置管道中的另一个方法。此方法称为 PostConfigure,旨在在每次配置或重新读取配置时修改配置。以下是如何记录此行为的示例:

builder.Services.PostConfigure<MyConfigOptions>(myOptions =>
{
   myOptions.Key1 = "my_new_value_post_configuration";
});

Putting it all together
把它们放在一起

Having defined the theory of these numerous interfaces, it remains for us to see IOptions at work with a concrete example.
在定义了这些众多接口的理论之后,我们仍然需要通过一个具体的例子来了解 IOptions 的工作原理。

Let’s see the use of the three interfaces just described and the use of IOptionsFactory, which, along with the Create method and with the named options function, retrieves the correct instance of the object:
让我们看看刚才描述的三个接口的用法以及 IOptionsFactory 的用法,它与 Create 方法和命名选项函数一起检索对象的正确实例:

app.MapGet("/read/options", (IOptions<OptionBasic> optionsBasic,
    IOptionsMonitor<OptionMonitor> optionsMonitor,
    IOptionsSnapshot<OptionSnapshot> optionsSnapshot,
    IOptionsFactory<OptionCustomName> optionsFactory) =>
{
    return Results.Ok(new
    {
        Basic = optionsBasic.Value,
        Monitor = optionsMonitor.CurrentValue,
        Snapshot = optionsSnapshot.Value,
        Custom1 = optionsFactory.Create("CustomName1"),
        Custom2 = optionsFactory.Create("CustomName2")
    });
})
.WithName("ReadOptions");

In the previous code snippet, we want to bring attention to the use of the different interfaces available.
在前面的代码片段中,我们希望提请注意可用不同接口的使用。

Each individual interface used in the previous snippet has a particular life cycle that characterizes its behavior. Finally, each interface has slight differences in the methods, as we have already described in the previous paragraphs.
上一个代码段中使用的每个接口都有一个特定的生命周期,用于描述其行为。最后,正如我们在前面的段落中已经描述的那样,每个接口在方法上略有不同。

IOptions and validation
IOptions 和验证

Last but not least is the validation functionality of the data present in the configuration. This is very useful when the team that has to release the application still performs manual or delicate operations that need to be at least verified by the code.
最后但并非最不重要的是对配置中数据的验证功能。当负责发布应用程序的团队仍需执行至少应由代码验证的手动或精细操作时,此功能非常有用。

Before the advent of .NET Core, very often, the application would not start because of an incorrect configuration. Now, with this feature, we can validate the data in the configuration and throw errors.
在 .NET Core 出现之前,应用程序经常由于配置不正确而无法启动。现在,借助此功能,我们可以验证配置中的数据并引发错误。

Here is an example:
下面是一个示例:

Register option with validation
带验证的 Register 选项

builder.Services.AddOptions<ConfigWithValidation>()
    .Bind(builder.Configuration.GetSection(nameof(ConfigWithValidation)))
    .ValidateDataAnnotations();
app.MapGet("/read/options", (IOptions<ConfigWithValidation> optionsValidation) =>
{
    return Results.Ok(new
    {
        Validation = optionsValidation.Value
    });
})
.WithName("ReadOptions");

This is the configuration file where an error is explicitly reported:
这是明确报告错误的配置文件:

Appsettings section for configuration validation
用于配置验证的 Appsettings 部分

"ConfigWithValidation": {
    "Email": "andrea.tosato@hotmail.it",
    "NumericRange": 1001
}

And here is the class containing the validation logic:
下面是包含验证逻辑的类:

public class ConfigWithValidation
{
    [RegularExpression(@"^([\w\.\-]+)@([\w\-]+)((\.(\w){2,})+)$")]
    public string? Email { get; set; }

    [Range(0, 1000, ErrorMessage = "Value for {0} must be between {1} and {2}.")]
    public int NumericRange { get; set; }
}

The application then encounters errors while using the particular configuration and not at startup. This is also because, as we have seen before, IOptions could reload information following a change in appsettings:
然后,应用程序在使用特定配置时遇到错误,而不是在启动时遇到错误。这也是因为,正如我们之前看到的,IOptions 可以在 appsettings 更改后重新加载信息:

Error validate option
错误验证选项

Microsoft.Extensions.Options.OptionsValidationException: DataAnnotation validation failed for 'ConfigWithValidation' members: 'NumericRange' with the error: 'Value for NumericRange must be between 0 and 1000.'.

Best practice for using validation in IOptions
在 IOptions 中使用验证的最佳实践

This setting is not suitable for all application scenarios. Only some options can have formal validations; if we think of a connection string, it is not necessarily formally incorrect, but the connection may not be working.
此设置并不适合所有应用程序方案。只有某些选项可以进行正式验证;如果我们考虑一个连接字符串,它不一定在形式上是错误的,但连接可能无法正常工作。

Be cautious about applying this feature, especially since it reports errors at runtime and not during startup and gives an Internal Server Error, which is not a best practice in scenarios that should be handled.
在应用此功能时请谨慎,尤其是因为它在运行时而不是在启动期间报告错误,并给出内部服务器错误,这在应该处理的场景中不是最佳实践。
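One way to mitigate this, if you prefer failures at startup rather than at request time, is the ValidateOnStart extension introduced in .NET 6, sketched here on the registration shown earlier:
如果您希望在启动时而不是在请求时失败,一种缓解方法是 .NET 6 中引入的 ValidateOnStart 扩展,下面在前面展示的注册上进行示意:

```csharp
// Sketch: ValidateOnStart() runs the data-annotation checks while the host
// is starting, so an invalid configuration stops the application immediately
// instead of surfacing as an Internal Server Error at runtime.
builder.Services.AddOptions<ConfigWithValidation>()
    .Bind(builder.Configuration.GetSection(nameof(ConfigWithValidation)))
    .ValidateDataAnnotations()
    .ValidateOnStart();
```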

Everything we’ve seen up to this point is about configuring the appsettings.json file, but what if we wanted to use other sources for configuration management? We’ll look at that in the next section.
到目前为止,我们所看到的所有内容都是关于配置 appsettings.json 文件的,但是如果我们想使用其他源进行配置管理呢?我们将在下一节中介绍这一点。

Configuration sources
配置源

As we mentioned at the beginning of the section, the IConfiguration interface and all variants of IOptions work not only with the appsettings file but also on different sources.
正如我们在本节开头提到的,IConfiguration 接口和 IOptions 的所有变体不仅适用于 appsettings 文件,也适用于不同的源。

Each source has its own characteristics, and the syntax for accessing objects is very similar between providers. The main problem is when we must define a complex object or an array of objects; in this case, we will see how to behave and be able to replicate the dynamic structure of a JSON file.
每个源都有其自身的特征,而且各提供程序之间访问对象的语法非常相似。主要问题在于必须定义复杂对象或对象数组时;在这种情况下,我们将了解应如何处理,以便复制 JSON 文件的动态结构。

Let’s look at two very common use cases.
让我们看两个非常常见的用例。

Configuring an application in Azure App Service
在 Azure 应用服务中配置应用程序

Let’s start with Azure, and in particular, the Azure Web Apps service.
让我们从 Azure 开始,特别是 Azure Web 应用服务。

On the Configuration page, there are two sections: Application settings and Connection strings.
在 Configuration (配置) 页面上,有两个部分: Application settings (应用程序设置) 和 Connection strings (连接字符串)。

In the first section, we need to insert the keys and values or JSON objects that we saw in the previous examples.
在第一部分中,我们需要插入我们在前面的示例中看到的键和值或 JSON 对象。

In the Connection strings section, you can insert the connection strings that are usually inserted in the appsettings.json file. In this section, in addition to the textual string, it is necessary to set the connection type, as we saw in the Configuration in .NET 6 section.
在 Connection strings (连接字符串) 部分中,您可以插入通常插入 appsettings.json 文件中的连接字符串。在本节中,除了文本字符串之外,还需要设置连接类型,正如我们在 .NET 6 中的配置部分中看到的那样。

Figure 3.12 – Azure App Service Application settings
图 3.12 – Azure 应用服务应用程序设置

Inserting an object
插入对象

To insert an object, we must specify the parent for each key.
要插入对象,我们必须为每个键指定 parent。

The format is as follows:
格式如下:

parent__key

Note that there are two underscores.
请注意,有两个下划线。

The object in the JSON file would be defined as follows:
JSON 文件中的对象将定义如下:

"MyCustomObject": {
         "CustomProperty": "PropertyValue"
    }

So, we should write MyCustomObject__CustomProperty.
因此,我们应该写 MyCustomObject__CustomProperty。

Inserting an array
插入数组

Inserting an array is much more verbose.
插入数组要详细得多。

The format is as follows:
格式如下:

parent__child__ArrayIndexNumber__key

The array in the JSON file would be defined as follows:
JSON 文件中的数组定义如下:

{
    "MyCustomArray": {
        "CustomPropertyArray": [
            { "CustomKey": "ValueOne" },
            { "CustomKey": "ValueTwo" }
        ]
    }
}

So, to access the ValueOne value, we should write the following: MyCustomArray__CustomPropertyArray__0__CustomKey.
因此,要访问 ValueOne 值,我们应该编写:MyCustomArray__CustomPropertyArray__0__CustomKey。
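The flattening can be sketched locally with an in-memory provider: in .NET configuration keys, the double underscore used by App Service maps to the colon separator. This small fragment is our own illustration, not from the book:
可以使用内存提供程序在本地示意这种扁平化:在 .NET 配置键中,App Service 使用的双下划线映射为冒号分隔符。这个小片段是我们自己的示例,并非摘自书中:

```csharp
// Sketch: ':'-separated keys are the canonical form of the '__'-separated entries above.
var config = new ConfigurationBuilder()
    .AddInMemoryCollection(new Dictionary<string, string>
    {
        ["MyCustomArray:CustomPropertyArray:0:CustomKey"] = "ValueOne",
        ["MyCustomArray:CustomPropertyArray:1:CustomKey"] = "ValueTwo"
    })
    .Build();

var first = config["MyCustomArray:CustomPropertyArray:0:CustomKey"]; // "ValueOne"
```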

Configuring an application in Docker
在 Docker 中配置应用程序

If we are developing for containers and therefore for Docker, appsettings files are usually replaced in the docker-compose file, and very often in the override file, because it behaves analogously to the settings files divided by the environment.
如果我们针对容器和 Docker 进行开发,则 appsettings 文件通常会在 docker-compose 文件中被替换,并且经常在 override 文件中被替换,因为它的行为类似于按环境划分的设置文件。

We want to provide a brief overview of the features that are usually leveraged to configure an application hosted in Docker. Let’s see in detail how to define root keys and objects, and how to set the connection string. Here is an example:
我们想简要概述通常用于配置 Docker 中托管的应用程序的功能。让我们详细看看如何定义根键和对象,以及如何设置连接字符串。下面是一个示例:

app.MapGet("/env-test", (IConfiguration configuration) =>
{
    var rootProperty = configuration.GetValue<string>("RootProperty");
    var sampleVariable = configuration.GetValue<string>("RootSettings:SampleVariable");
    var connectionString = configuration.GetConnectionString("SqlConnection");

    return Results.Ok(new
    {
        RootProperty = rootProperty,
        SampleVariable = sampleVariable,
        ConnectionString = connectionString
    });
})
.WithName("EnvironmentTest");

Minimal APIs that use configuration
使用配置的最小 API

The docker-compose.override.yaml file is as follows:
docker-compose.override.yaml 文件如下:

services:
    dockerenvironment:
         environment:
              - ASPNETCORE_ENVIRONMENT=Development
              - ASPNETCORE_URLS=https://+:443;http://+:80
              - RootProperty=minimalapi-root-value
              - RootSettings__SampleVariable=minimalapi-variable-value
              - ConnectionStrings__SqlConnection=Server=minimal.db;Database=minimal_db;User Id=sa;Password=Taggia42!

There is only one application container for this example, and the service that instantiates it is called dockerenvironment.
此示例只有一个应用程序容器,实例化它的服务称为 dockerenvironment。

In the configuration section, we can see three particularities that we are going to analyze line by line.
在配置部分,我们可以看到我们将逐行分析的三个特性。

The snippet we want to show you has several very interesting components: a property in the configuration root, an object composed of a single property, and a connection string to a database.
我们要向您展示的代码段有几个非常有趣的组件:配置根中的属性、由单个属性组成的对象以及数据库的连接字符串。

In this first configuration, you are going to set a property that is the root of the configurations. In this case, it is a simple string:
在第一个配置中,您将设置一个属性,该属性是配置的根。在本例中,它是一个简单的字符串:

# First configuration
- RootProperty=minimalapi-root-value

In this second configuration, we are going to set up an object:
在第二个配置中,我们将设置一个对象:

# Second configuration
- RootSettings__SampleVariable=minimalapi-variable-value

The object is called RootSettings, while the only property it contains is called SampleVariable. This object can be read in different ways. We recommend using the IOptions objects that we have seen extensively before. In the preceding example, we show how to access a single property present in an object via code.
该对象称为 RootSettings,而它包含的唯一属性称为 SampleVariable。可以通过不同的方式读取此对象。我们建议使用之前详细介绍过的 IOptions 对象。在前面的示例中,我们展示了如何通过代码访问对象中的单个属性。

In this case, via code, you need to use the following notation to access the value: RootSettings:SampleVariable. This approach is useful if you need to read a single property, but we recommend using the IOptions interfaces to access the object.
在这种情况下,您需要通过代码使用以下表示法来访问该值:RootSettings:SampleVariable。如果需要读取单个属性,此方法非常有用,但我们建议使用 IOptions 接口来访问对象。

In this last example, we show you how to set the connection string called SqlConnection. This way, it will be easy to retrieve the information with the base methods available on IConfiguration:
在最后一个示例中,我们将向您展示如何设置名为 SqlConnection 的连接字符串。这样,就可以很容易地通过 IConfiguration 上可用的基础方法检索信息:

# Third configuration
- ConnectionStrings__SqlConnection=Server=minimal.db;Database=minimal_db;User Id=sa;Password=Taggia42!

To read the information, it is necessary to use this method: GetConnectionString("SqlConnection").
要读取信息,必须使用此方法:GetConnectionString("SqlConnection")。

There are a lot of scenarios for configuring our applications; in the next section, we will also see how to handle errors.
配置我们的应用程序有很多场景;在下一节中,我们还将了解如何处理错误。

Error handling
错误处理

Error handling is one of the features that every application must provide. The representation of an error allows the client to understand the error and possibly handle the request accordingly. Very often, we have our own customized methods of handling errors.
错误处理是每个应用程序都必须提供的功能之一。错误的表示允许客户端理解错误并可能相应地处理请求。很多时候,我们有自己的自定义错误处理方法。

Since what we’re describing is a key functionality of the application, we think it’s fair to see what the framework provides and what is more correct to use.
由于我们所描述的是应用程序的关键功能,因此我们认为查看框架提供的内容以及使用起来更正确的内容是公平的。

Traditional approach
传统方法

.NET provides the same tool for minimal APIs that we can implement in traditional development: a Developer Exception Page. This is nothing but middleware that reports the error in plain text format. This middleware can’t be removed from the ASP.NET pipeline and works exclusively in the development environment (https://docs.microsoft.com/aspnet/core/fundamentals/error-handling).
.NET 为最小 API 提供了我们可以在传统开发中实现的相同工具:开发人员异常页。这只不过是以纯文本格式报告错误的中间件。此中间件无法从 ASP.NET 管道中删除,并且只能在开发环境 (https://docs.microsoft.com/aspnet/core/fundamentals/error-handling) 中运行。

Figure 3.13 – Minimal APIs pipeline, ExceptionHandler
图 3.13 – 最小 API 管道 ExceptionHandler

If exceptions are raised within our code, the only way to catch them in the application layer is through middleware that is activated before sending the response to the client.
如果在我们的代码中引发了异常,那么在应用程序层捕获它们的唯一方法是通过在将响应发送到客户端之前激活的中间件。

Error handling middleware is standard and can be implemented as follows:
错误处理中间件是标准的,可以按如下方式实现:

app.UseExceptionHandler(exceptionHandlerApp =>
{
    exceptionHandlerApp.Run(async context =>
    {
        context.Response.StatusCode = StatusCodes.Status500InternalServerError;
        context.Response.ContentType = Application.Json;

        var exceptionHandlerPathFeature = context.Features.Get<IExceptionHandlerPathFeature>()!;

        var errorMessage = new
        {
            Message = exceptionHandlerPathFeature.Error.Message
        };

        await context.Response.WriteAsync(JsonSerializer.Serialize(errorMessage));

        if (exceptionHandlerPathFeature?.Error is FileNotFoundException)
        {
            await context.Response.WriteAsync(" The file was not found.");
        }

        if (exceptionHandlerPathFeature?.Path == "/")
        {
            await context.Response.WriteAsync("Page: Home.");
        }
    });
});

We have shown here a possible implementation of the middleware. To implement it, the UseExceptionHandler method must be used, which allows us to write error-handling code for the whole application.
我们在这里展示了该中间件的一种可能实现。要实现它,必须使用 UseExceptionHandler 方法,它允许我们为整个应用程序编写错误处理代码。

By calling var exceptionHandlerPathFeature = context.Features.Get<IExceptionHandlerPathFeature>()!;, we can access the error stack and return the information of interest to the caller in the output:
通过调用 var exceptionHandlerPathFeature = context.Features.Get<IExceptionHandlerPathFeature>()!;,我们可以访问错误堆栈,并在输出中向调用方返回其感兴趣的信息:

app.MapGet("/ok-result", () =>
{
    throw new ArgumentNullException("taggia-parameter",
        "Taggia has an error");
})
.WithName("OkResult");

When an exception occurs in the code, as in the preceding example, the middleware steps in and handles the return message to the client.
当代码中发生异常时,如前面的示例所示,中间件会介入并处理发送给客户端的返回消息。

If the exception were to occur in internal application stacks, the middleware would still intervene to provide the client with the correct error and appropriate indication.
如果内部应用程序堆栈中发生异常,中间件仍会进行干预,为客户端提供正确的错误和适当的指示。

Problem Details and the IETF standard
问题详细信息和 IETF 标准

Problem Details for HTTP APIs is an IETF standard that was approved in 2016. This standard allows a set of information to be returned to the caller with standard fields and JSON notations that help identify the error.
HTTP API 的问题详细信息是 2016 年批准的 IETF 标准。此标准允许使用标准字段和 JSON 表示法将一组信息返回给调用方,以帮助识别错误。

HTTP status codes are sometimes not enough to convey enough information about an error to be useful. While the humans behind web browsers can be informed about the nature of the problem with an HTML response body, non-human consumers of so-called HTTP APIs, such as machines, PCs, and servers, usually cannot.
HTTP 状态代码有时不足以传达有关错误的足够信息。虽然 Web 浏览器背后的人类可以通过 HTML 响应正文了解问题的性质,但所谓 HTTP API 的非人类使用者(如机器、PC 和服务器)通常不能。

This specification defines simple JSON and XML document formats to suit this purpose. They are designed to be reused by HTTP APIs, which can identify distinct problem types specific to their needs.
此规范定义了简单的 JSON 和 XML 文档格式以适应此目的。它们旨在供 HTTP API 重用,HTTP API 可以识别特定于其需求的不同问题类型。

Thus, API clients can be informed of both the high-level error class and the finer-grained details of the problem (https://datatracker.ietf.org/doc/html/rfc7807).
因此,API 客户端可以了解高级错误类和问题的更细粒度的详细信息 (https://datatracker.ietf.org/doc/html/rfc7807)。

In .NET, there is a package with all the functionality that meets the IETF standard.
在 .NET 中,有一个包,其中包含满足 IETF 标准的所有功能。

The package is called Hellang.Middleware.ProblemDetails, and you can download it at the following address: https://www.nuget.org/packages/Hellang.Middleware.ProblemDetails/.
该包名为 Hellang.Middleware.ProblemDetails,您可以在以下地址下载:https://www.nuget.org/packages/Hellang.Middleware.ProblemDetails/

Let’s see now how to insert the package into the project and configure it:
现在让我们看看如何将包插入到项目中并对其进行配置:

var builder = WebApplication.CreateBuilder(args);

builder.Services.TryAddSingleton<IActionResultExecutor<ObjectResult>,
    ProblemDetailsResultExecutor>();
builder.Services.AddProblemDetails(options =>
{
    options.MapToStatusCode<NotImplementedException>(
        StatusCodes.Status501NotImplemented);
});

var app = builder.Build();
app.UseProblemDetails();

As you can see, only two instructions are needed to make this package work:
如您所见,只需两条指令即可使此包正常工作:

builder.Services.AddProblemDetails
app.UseProblemDetails();

Since, in the minimal APIs, the IActionResultExecutor interface is not present in the ASP.NET pipeline, it is necessary to add a custom class to handle the response in case of an error.
由于在最小 API 中,ASP.NET 管道中不存在 IActionResultExecutor 接口,因此有必要添加自定义类以在出现错误时处理响应。

To do this, you need to add the following class and register it in the dependency injection engine: builder.Services.TryAddSingleton<IActionResultExecutor<ObjectResult>, ProblemDetailsResultExecutor>();.
为此,您需要添加一个类(如下所示),并将其注册到依赖项注入引擎中:builder.Services.TryAddSingleton<IActionResultExecutor<ObjectResult>, ProblemDetailsResultExecutor>();。

Here is the class to support the package, also under minimal APIs:
以下是支持该包的类,也在最小 API 下:

public class ProblemDetailsResultExecutor : IActionResultExecutor<ObjectResult>
{
    public virtual Task ExecuteAsync(ActionContext context,
        ObjectResult result)
    {
        ArgumentNullException.ThrowIfNull(context);
        ArgumentNullException.ThrowIfNull(result);

        var executor = Results.Json(result.Value, null,
            "application/problem+json", result.StatusCode);
        return executor.ExecuteAsync(context.HttpContext);
    }
}

As mentioned earlier, the standard for handling error messages has been present in the IETF standard for several years, but for the C# language, it is necessary to add the package just mentioned.
如前所述,处理错误消息的标准在 IETF 标准中已经存在了几年,但对于 C# 语言,有必要添加刚才提到的包。

Now, let’s see how this package goes about handling errors on some endpoints that we report here:
现在,让我们看看这个软件包如何处理我们在此处报告的某些端点上的错误:

app.MapGet("/internal-server-error", () =>
{
    throw new ArgumentNullException("taggia-parameter",
        "Taggia has an error");
})
.Produces<ProblemDetails>(StatusCodes.Status500InternalServerError)
.WithName("internal-server-error");

We throw an application-level exception with this endpoint. In this case, the ProblemDetails middleware goes and returns a JSON error consistent with the error. We then have the handling of an unhandled exception for free:
我们使用此终端节点引发应用程序级异常。在这种情况下,ProblemDetails 中间件会返回与错误一致的 JSON 错误。然后,我们可以免费处理未处理的异常:

{
    "type": "https://httpstatuses.com/500",
    "title": "Internal Server Error",
    "status": 500,
    "detail": "Taggia has an error (Parameter 'taggia-parameter')",
    "exceptionDetails": [
        {
            ------- for brevity
        }
    ],
    "traceId": "00-f6ff69d6f7ba6d2692d87687d5be75c5-e734f5f081d7a02a-00"
}

By inserting additional configurations in the Program file, you can map some specific exceptions to HTTP errors. Here is an example:
通过在 Program 文件中插入其他配置,您可以将某些特定异常映射到 HTTP 错误。下面是一个示例:

builder.Services.AddProblemDetails(options =>
{
    options.MapToStatusCode<NotImplementedException>(
        StatusCodes.Status501NotImplemented);
});

The code with the NotImplementedException exception is mapped to HTTP error code 501:
具有 NotImplementedException 异常的代码映射到 HTTP 错误代码 501:

app.MapGet("/not-implemented-exception", () =>
{
    throw new NotImplementedException(
        "This is an exception thrown from a Minimal API.");
})
.Produces<ProblemDetails>(StatusCodes.Status501NotImplemented)
.WithName("NotImplementedExceptions");

Finally, it is possible to create extensions to the ProblemDetails class of the framework with additional fields or to call the base method by adding custom text.
最后,可以使用其他字段创建框架的 ProblemDetails 类的扩展,或者通过添加自定义文本来调用基方法。

Here are the last two examples of MapGet endpoint handlers:
以下是 MapGet 端点处理程序的最后两个示例:

app.MapGet("/problems", () =>
{
    return Results.Problem(detail: "This will end up in the 'detail' field.");
})
.Produces<ProblemDetails>(StatusCodes.Status400BadRequest)
.WithName("Problems");

app.MapGet("/custom-error", () =>
{
    var problem = new OutOfCreditProblemDetails
    {
        Type = "https://example.com/probs/out-of-credit",
        Title = "You do not have enough credit.",
        Detail = "Your current balance is 30, but that costs 50.",
        Instance = "/account/12345/msgs/abc",
        Balance = 30.0m,
        Accounts = { "/account/12345", "/account/67890" }
    };
    return Results.Problem(problem);
})
.Produces<OutOfCreditProblemDetails>(StatusCodes.Status400BadRequest)
.WithName("CreditProblems");

app.Run();

public class OutOfCreditProblemDetails : ProblemDetails
{
    public OutOfCreditProblemDetails()
    {
        Accounts = new List<string>();
    }

    public decimal Balance { get; set; }
    public ICollection<string> Accounts { get; }
}

Summary
总结

In this chapter, we have seen several advanced aspects regarding the implementation of minimal APIs. We explored Swagger, which is used to document APIs and provide the developer with a convenient, working debugging environment. We saw how CORS handles the issue of applications hosted on different addresses other than the current API. Finally, we saw how to load configuration information and handle unexpected errors in the application.
在本章中,我们了解了有关实现最小 API 的几个高级方面。我们探索了 Swagger,它用于记录 API,并为开发人员提供方便、有效的调试环境。我们了解了 CORS 如何处理托管在当前 API 以外的不同地址上的应用程序问题。最后,我们了解了如何加载配置信息和处理应用程序中的意外错误。

We explored the nuts and bolts that will allow us to be productive in a short amount of time.
我们探索了使我们能够在短时间内提高工作效率的具体细节。

In the next chapter, we will add a fundamental building block for SOLID pattern-oriented programming, namely the dependency injection engine, which will help us to better manage the application code scattered in the various layers.
在下一章中,我们将为 SOLID 面向模式的编程添加一个基本构建块,即依赖注入引擎,这将帮助我们更好地管理分散在各个层中的应用程序代码。

Part 2: What’s New in .NET 6?

第 2 部分:.NET 6 中的新增功能

In the second part of the book, we want to show you the features of the .NET 6 framework and how they can also be used in minimal APIs.
在本书的第二部分,我们想向你展示 .NET 6 框架的功能,以及如何在最小的 API 中使用它们。

We will cover the following chapters in this section:
在本节中,我们将介绍以下章节:

Chapter 4, Dependency Injection in a Minimal API Project
第 4 章 最小 API 项目中的依赖关系注入

Chapter 5, Using Logging to Identify Errors
第 5 章 使用日志记录识别错误

Chapter 6, Exploring Validation and Mapping
第 6 章 探索验证和映射

Chapter 7, Integration with the Data Access Layer
第 7 章 与数据访问层集成

4 Dependency Injection in a Minimal API Project

最小 API 项目中的依赖关系注入

In this chapter of the book, we will discuss some basic topics of minimal APIs in .NET 6.0. We will learn how they differ from the controller-based Web APIs that we were used to using in the previous version of .NET. We will also try to underline the pros and the cons of this new approach of writing APIs.
在本书的这一章中,我们将讨论 .NET 6.0 中最小 API 的一些基本主题。我们将了解它们与我们以前在 .NET 版本中习惯使用的基于控制器的 Web API 有何不同。我们还将尝试强调这种编写 API 的新方法的优缺点。

In this chapter, we will be covering the following topics:
在本章中,我们将介绍以下主题:

• What is dependency injection?
什么是依赖项注入?

• Implementing dependency injection in a minimal API project
在最小 API 项目中实现依赖关系注入

Technical requirements
技术要求

To follow the explanations in this chapter, you will need to create an ASP.NET Core 6.0 Web API application. You can refer the Technical requirements section of Chapter 2, Exploring Minimal APIs and Their Advantages to know how to do it.
要按照本章中的说明进行操作,您需要创建一个 ASP.NET Core 6.0 Web API 应用程序。您可以参考第 2 章《探索最小 API 及其优势》的"技术要求"部分,了解如何操作。

All the code samples in this chapter can be found in the GitHub repository for this book at https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter04.
本章中的所有代码示例都可以在本书的 GitHub 存储库中找到,网址为 https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter04

What is dependency injection?
什么是依赖项注入?

For a while, .NET has natively supported the dependency injection (often referred to as DI) software design pattern.
一段时间以来,.NET 本身就支持依赖关系注入(通常称为 DI)软件设计模式。

Dependency injection is a way to implement in .NET the Inversion of Control (IoC) pattern between service classes and their dependencies. By the way, in .NET, many fundamental services are built with dependency injection, such as logging, configuration, and other services.
依赖项注入是在 .NET 中实现服务类及其依赖项之间的控制反转 (IoC) 模式的一种方式。顺便说一句,在 .NET 中,许多基本服务都是通过依赖项注入构建的,例如日志记录、配置和其他服务。

Let’s look at a practical example to get a good understanding of how it works.
让我们看一个实际示例,以更好地理解它是如何工作的。

Generally speaking, a dependency is an object that depends on another object. In the following example, we have a LogWriter class with only one method inside, called Log:
一般来说,依赖项是依赖于另一个对象的对象。在下面的示例中,我们有一个 LogWriter 类,其中只有一个方法,称为 Log:

public class LogWriter
{
    public void Log(string message)
    {
        Console.WriteLine($"LogWriter.Write(message: \"{message}\")");
    }
}

Other classes in the project, or in another project, can create an instance of the LogWriter class and use the Log method.
项目或其他项目中的其他类可以创建 LogWriter 类的实例并使用 Log 方法。

Take a look at the following example:
请看以下示例:

public class Worker
{
    private readonly LogWriter _logWriter = new LogWriter();

    protected async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            _logWriter.Log($"Worker running at: {DateTimeOffset.Now}");
            await Task.Delay(1000, stoppingToken);
        }
    }
}

This class depends directly on the LogWriter class, and it’s hardcoded in each class of your projects.
此类直接依赖于 LogWriter 类,并且在项目的每个类中都是硬编码的。

This means that you will have some issues if you want to change the Log method; for instance, you will have to replace the implementation in each class of your solution.
这意味着,如果要更改 Log 方法,您将遇到一些问题;例如,您必须替换解决方案中每个类的实现。

The preceding implementation has some issues if you want to implement unit tests in your solution. It’s not easy to create a mock of the LogWriter class.
如果要在解决方案中实现单元测试,前面的实现存在一些问题。创建 LogWriter 类的 mock 并不容易。

Dependency injection can solve these problems with some changes in our code:
依赖项注入可以通过对代码进行一些更改来解决这些问题:

  1. Use an interface to abstract the dependency.
    使用接口抽象依赖项。

  2. Register the dependency in the built-in service container of .NET.
    在 .NET 的内置服务容器中注册依赖项。

  3. Inject the service into the constructor of the class.
    将服务注入到类的构造函数中。

The preceding steps might seem like they require big changes in your code, but they are very easy to implement.
上述步骤似乎需要对代码进行大量更改,但实际上它们很容易实现。

Let’s see how we can achieve this goal with our previous example:
让我们看看如何通过前面的示例来实现这个目标:

  1. First, we will create an ILogWriter interface with the abstraction of our logger:
    首先,我们将使用记录器的抽象创建一个 ILogWriter 接口:

public interface ILogWriter
{
    void Log(string message);
}

  2. Next, implement this ILogWriter interface in a real class called ConsoleLogWriter:
    接下来,在名为 ConsoleLogWriter 的实际类中实现此 ILogWriter 接口:

public class ConsoleLogWriter : ILogWriter
{
    public void Log(string message)
    {
        Console.WriteLine($"ConsoleLogWriter.Write(message: \"{message}\")");
    }
}

  3. Now, change the Worker class and replace the explicit LogWriter class with the new ILogWriter interface:

现在,更改 Worker 类,并将显式 LogWriter 类替换为新的 ILogWriter 接口:

public class Worker
{
    private readonly ILogWriter _logWriter;

    public Worker(ILogWriter logWriter)
    {
        _logWriter = logWriter;
    }

    protected async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            _logWriter.Log($"Worker running at: {DateTimeOffset.Now}");
            await Task.Delay(1000, stoppingToken);
        }
    }
}

As you can see, it’s very easy to work in this new way, and the advantages are substantial. Here are a few advantages of dependency injection:
如您所见,以这种新方式工作非常容易,而且优势非常大。以下是依赖项注入的一些优点:

• Maintainability 可维护性
• Testability 可测试性
• Reusability 可重用性
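To make the testability advantage concrete, here is a hypothetical test double that implements the ILogWriter interface shown earlier; a unit test can pass it to Worker and assert on the captured messages without writing to the console:
为了具体说明可测试性的优势,下面是一个假设的测试替身,它实现了前面展示的 ILogWriter 接口;单元测试可以将其传递给 Worker,并对捕获的消息进行断言,而无需写入控制台:

```csharp
// Hypothetical test double; assumes the ILogWriter interface defined above.
public class FakeLogWriter : ILogWriter
{
    public List<string> Messages { get; } = new();

    public void Log(string message) => Messages.Add(message);
}

// In a test, a Worker built with the fake never touches the console:
// var fake = new FakeLogWriter();
// var worker = new Worker(fake);
// ...exercise the worker, then assert on fake.Messages.
```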

Now we need to perform the last step, that is, register the dependency when the application starts up.
现在我们需要执行最后一步,即在应用程序启动时注册依赖项。

4. At the top of the Program.cs file, add this line of code:
4. 在 Program.cs 文件的顶部,添加以下代码行:

builder.Services.AddScoped<ILogWriter, ConsoleLogWriter>();

In the next section, we will discuss the difference between dependency injection lifetimes, another concept that you need to understand before using dependency injection in your minimal API project.
在下一节中,我们将讨论依赖注入生命周期之间的区别,这是在最小 API 项目中使用依赖注入之前需要了解的另一个概念。

Understanding dependency injection lifetimes
了解依赖关系注入生命周期

In the previous section, we learned the benefits of using dependency injection in our project and how to transform our code to use it.
在上一节中,我们了解了在项目中使用依赖项注入的好处,以及如何转换代码以使用它。

In one of the previous paragraphs, we added our class as a service to the ServiceCollection of .NET.
在前面的段落中,我们将类作为服务添加到了 .NET 的 ServiceCollection 中。

In this section, we will try to understand the difference between each dependency injection’s lifetime.
在本节中,我们将尝试了解每个依赖注入的生命周期之间的差异。

The service lifetime defines how long an object will be alive after it has been created by the container.
服务生存期定义对象在容器创建后将处于活动状态的时间。

When they are registered, dependencies require a lifetime definition. This defines the conditions when a new service instance is created.
注册依赖项时,它们需要生命周期定义。这定义了创建新服务实例时的条件。

In the following list, you can find the lifetimes defined in .NET:
在以下列表中,您可以找到 .NET 中定义的生存期:

• Transient: A new instance of the class is created every time it is requested.
Transient:每次请求时都会创建类的新实例。

• Scoped: A new instance of the class is created once per scope, for instance, for the same HTTP request.
范围:每个范围创建一次类的新实例,例如,针对同一 HTTP 请求。

• Singleton: A new instance of the class is created only on the first request. The next request will use the same instance of the same class.
Singleton:仅在第一个请求时创建类的新实例。下一个请求将使用同一类的相同实例。

Very often, in web applications, you only find the first two lifetimes, that is, transient and scoped.
很多时候,在 Web 应用程序中,你只能找到前两个生命周期,即 transient 和 scoped。

If you have a particular use case that requires a singleton, it’s not prohibited, but for best practice, it is recommended to avoid them in web applications.
如果您有需要单例的特定用例,则不禁止这样做,但为了最佳实践,建议在 Web 应用程序中避免使用它们。

In the first two cases, transient and scoped, the services are disposed of at the end of the request.
在前两种情况中,transient 和 scoped,服务将在请求结束时被释放。

In the next section, we will see how to implement all the concepts that we have mentioned in the last two sections (the definition of dependency injection and its lifetime) in a short demo that you can use as a starting point for your next project.
在下一节中,我们将通过一个简短的演示来了解如何实现我们在最后两节中提到的所有概念(依赖注入的定义及其生命周期),您可以将其用作下一个项目的起点。

Implementing dependency injection in a minimal API project
在最小 API 项目中实现依赖关系注入

After understanding how to use dependency injection in an ASP.NET Core project, let’s try to understand how to use dependency injection in our minimal API project, starting with the default project using the WeatherForecast endpoint.
在了解了如何在 ASP.NET Core 项目中使用依赖项注入之后,让我们尝试了解如何在最小 API 项目中使用依赖项注入,从使用 WeatherForecast 端点的默认项目开始。

This is the actual code of the WeatherForecast GET endpoint:
这是 WeatherForecast GET 端点的实际代码:

app.MapGet("/weatherforecast", () =>
{
    var forecast = Enumerable.Range(1, 5).Select(index =>
        new WeatherForecast
        (
            DateTime.Now.AddDays(index),
            Random.Shared.Next(-20, 55),
            summaries[Random.Shared.Next(summaries.Length)]
        ))
        .ToArray();
    return forecast;
});

As we mentioned before, this code works, but it’s not easy to test, especially the creation of the new weather values.
正如我们之前提到的,这段代码可以工作,但并不容易测试,尤其是新天气值的创建。

The best choice is to use a service to create fake values and use it with dependency injection.
最好的选择是使用服务创建假值并将其与依赖项注入一起使用。

Let’s see how we can better implement our code:
让我们看看如何更好地实现我们的代码:

  1. First of all, in the Program.cs file, add a new interface called IWeatherForecastService and define a method that returns an array of the WeatherForecast entity:
    首先,在 Program.cs 文件中,添加一个名为 IWeatherForecastService 的新接口,并定义一个返回 WeatherForecast 实体数组的方法:

public interface IWeatherForecastService
{
    WeatherForecast[] GetForecast();
}

  2. The next step is to create the real implementation of the class inherited from the interface. The code should look like this:
    下一步是创建从接口继承的类的真正实现。代码应如下所示:

public class WeatherForecastService : IWeatherForecastService
{
}

  3. Now cut and paste the code from the project template inside our new implementation of the service. The final code looks like this:
    现在,将项目模板中的代码剪切并粘贴到我们新的服务实现中。最终代码如下所示:

public class WeatherForecastService : IWeatherForecastService
{
    public WeatherForecast[] GetForecast()
    {
        var summaries = new[]
        {
            "Freezing", "Bracing", "Chilly", "Cool",
            "Mild", "Warm", "Balmy", "Hot", "Sweltering",
            "Scorching"
        };
        var forecast = Enumerable.Range(1, 5).Select(index =>
            new WeatherForecast
            (
                DateTime.Now.AddDays(index),
                Random.Shared.Next(-20, 55),
                summaries[Random.Shared.Next(summaries.Length)]
            ))
            .ToArray();
        return forecast;
    }
}

  4. We are now ready to add our implementation of WeatherForecastService as a dependency injection in our project. To do that, insert the following line below the first line of code in the Program.cs file:
    现在,我们已准备好将 WeatherForecastService 的实现作为依赖项注入添加到我们的项目中。为此,请在 Program.cs 文件中的第一行代码下方插入以下行:

builder.Services.AddScoped<IWeatherForecastService, WeatherForecastService>();

This inserts our service into the services collection when the application starts. Our work is not finished yet, though.
这样,当应用程序启动时,我们的服务就会被插入到服务集合中。不过,我们的工作还没有完成。

We need to use our service in the default MapGet implementation of the WeatherForecast endpoint.
我们需要在 WeatherForecast 端点的默认 MapGet 实现中使用我们的服务。

The minimal API has its own parameter binding implementation, and it’s very easy to use.
最小 API 有自己的参数绑定实现,并且非常易于使用。

First of all, to implement our service with dependency injection, we need to remove all the old code from the endpoint.
首先,要使用依赖项注入实现我们的服务,我们需要从端点中删除所有旧代码。

The code of the endpoint, after removing the code, looks like this:
删除代码后,端点的代码如下所示:

app.MapGet("/weatherforecast", () =>
{
});

We can improve our code and use the dependency injection very easily by simply replacing the old code with the new code:
我们可以通过简单地将旧代码替换为新代码来非常轻松地改进我们的代码并使用依赖注入:

app.MapGet("/weatherforecast", (IWeatherForecastService weatherForecastService) =>
{
    return weatherForecastService.GetForecast();
});

In the minimal API project, the real implementations of the services in the service collection are passed as parameters to the functions and you can use them directly.
在最小 API 项目中,服务集合中服务的真实实现作为参数传递给函数,您可以直接使用它们。

From time to time, you may have to use a service from the dependency injection directly in the main function during the startup phase. In this case, you must retrieve the instance of the implementation directly from the services collection, as shown in the following code snippet:
有时,您可能必须在启动阶段直接在 main 函数中使用依赖项注入中的服务。在这种情况下,您必须直接从 services 集合中检索实现的实例,如以下代码片段所示:

using (var scope = app.Services.CreateScope())
{
    var service = scope.ServiceProvider
        .GetRequiredService<IWeatherForecastService>();
    service.GetForecast();
}

In this section, we have implemented dependency injection in a minimal API project, starting from the default template.
在本节中,我们从默认模板开始,在最小 API 项目中实现了依赖注入。

We reused the existing code but implemented it with logic that’s more geared toward an architecture that’s better suited to being maintained and tested in the future.
我们重用了现有代码,但使用更适合将来维护和测试的架构的逻辑来实现它。

Summary
总结

Dependency injection is a very important approach to implement in modern applications. In this chapter, we learned what dependency injection is and discussed its fundamentals. Then, we saw how to use dependency injection in a minimal API project.
依赖项注入是在现代应用程序中实现的一种非常重要的方法。在本章中,我们了解了什么是依赖注入并讨论了它的基础知识。然后,我们了解了如何在最小 API 项目中使用依赖注入。

In the next chapter, we will focus on another important layer of modern applications and discuss how to implement a logging strategy in a minimal API project.
在下一章中,我们将重点介绍现代应用程序的另一个重要层,并讨论如何在最小的 API 项目中实现日志记录策略。

5 Using Logging to Identify Errors

5 使用日志记录识别错误

In this chapter, we will begin to learn about the logging tools that .NET provides us with. A logger is one of the tools that developers must use to debug an application or understand its failure in production. The log library has been built into ASP.NET with several features enabled by design. The purpose of this chapter is to delve into the things we take for granted and add more information as we go.
在本章中,我们将开始了解 .NET 为我们提供的日志记录工具。记录器是开发人员用来调试应用程序或了解其在生产中的故障时必须使用的工具之一。日志库已内置于 ASP.NET 中,通过设计启用了多项功能。本章的目的是深入研究我们认为理所当然的事情,并在此过程中添加更多信息。

The themes we will touch on in this chapter are as follows:
我们将在本章中讨论的主题如下:

• Exploring logging in .NET
探索 .NET 中的日志记录

• Leveraging the logging framework
利用日志记录框架

• Storing a structured log with Serilog
使用 Serilog 存储结构化日志

Technical requirements
技术要求

As reported in the previous chapters, it will be necessary to have the .NET 6 development framework.
如前几章所述,有必要具有 .NET 6 开发框架。

There are no special requirements in this chapter for beginning to test the examples described.
本章中没有对开始测试所描述的示例的特殊要求。

All the code samples in this chapter can be found in the GitHub repository for this book at https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter05.
本章中的所有代码示例都可以在本书的 GitHub 存储库中找到,网址为 https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter05

Exploring logging in .NET
探索 .NET 中的日志记录

ASP.NET Core templates create a WebApplicationBuilder and a WebApplication, which provide a simplified way to configure and run web applications without a startup class.
ASP.NET Core 模板创建 WebApplicationBuilder 和 WebApplication,它们提供了一种无需启动类即可配置和运行 Web 应用程序的简化方法。

As mentioned previously, with .NET 6, the Startup.cs file is eliminated in favor of the existing Program.cs file. All startup configurations are placed in this file, and in the case of minimal APIs, endpoint implementations are also placed.
如前所述,在 .NET 6 中,Startup.cs 文件被消除,取而代之的是现有的 Program.cs 文件。所有启动配置都放置在此文件中,对于最小的 API,还会放置端点实现。

What we have just described is the starting point of every .NET application and its various configurations.
我们刚才描述的是每个 .NET 应用程序及其各种配置的起点。

Logging in an application means tracking evidence at different points in the code to check whether it is running as expected. The purpose of logging is to track over time all the conditions that led to an unexpected result or event in the application. Logging in an application can be useful both during development and while the application is in production.
在应用程序中记录日志意味着在代码的不同位置跟踪证据,以检查应用程序是否按预期运行。日志记录的目的是随着时间的推移跟踪导致应用程序中出现意外结果或事件的所有条件。无论是在开发期间还是在应用程序处于生产环境时,在应用程序中记录日志都非常有用。

However, for logging, as many as four providers are added for tracking application information:
但是,对于日志记录,将添加多达四个提供程序来跟踪应用程序信息:

• Console: The Console provider logs output to the console. This log is unusable in production because the console of a web application is usually not visible. This kind of log is useful during development to make logging fast when you are running your app under Kestrel on your desktop machine in the app console window.
控制台:控制台提供程序将输出记录到控制台。此日志在生产中不可用,因为 Web 应用程序的控制台通常不可见。在开发过程中,这种日志非常有用,当您在应用程序控制台窗口中的桌面计算机上的 Kestrel 下运行应用程序时,可以快速进行日志记录。

• Debug: The Debug provider writes log output by using the System.Diagnostics.Debug class. When we develop, we are used to seeing this section in the Visual Studio output window.
调试:调试提供程序使用 System.Diagnostics.Debug 类写入日志输出。在开发时,我们习惯于在 Visual Studio 输出窗口中看到此部分。

Under the Linux operating system, information is tracked depending on the distribution in the following locations: /var/log/message and /var/log/syslog.
在 Linux 操作系统下,信息会根据发行版记录在以下位置:/var/log/message 和 /var/log/syslog。

• EventSource: On Windows, this information can be viewed in the EventTracing window.
EventSource:在 Windows 上,可以在 EventTracing 窗口中查看此信息。

• EventLog (only when running on Windows): This information is displayed in the native Windows window, so you can only see it if you run the application on the Windows operating system.
EventLog (仅在 Windows 上运行时):此信息显示在本机 Windows 窗口中,因此只有在 Windows 操作系统上运行应用程序时才能看到它。

A new feature in the latest .NET release
最新 .NET 版本中的新功能

New logging providers have been added in the latest versions of .NET. However, these providers are not enabled within the framework.
最新版本的 .NET 中添加了新的日志记录提供程序。但是,这些提供程序未在框架内启用。

Use these extensions to enable new logging scenarios: AddSystemdConsole, AddJsonConsole, and AddSimpleConsole.
使用以下扩展启用新的日志记录方案:AddSystemdConsole、AddJsonConsole 和 AddSimpleConsole。

You can find more details on how to configure the log and what the basic ASP.NET settings are at this link: https://docs.microsoft.com/aspnet/core/fundamentals/host/generic-host.
您可以在以下链接中找到有关如何配置日志以及基本 ASP.NET 设置的更多详细信息:https://docs.microsoft.com/aspnet/core/fundamentals/host/generic-host
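As a sketch of how one of these extensions is wired up (the formatter options shown here are illustrative assumptions, not required settings):
以下示意展示了如何启用其中一个扩展(此处显示的格式化程序选项只是示例性假设,并非必需设置):

```csharp
var builder = WebApplication.CreateBuilder(args);

// Replace the default console output with the JSON console formatter.
builder.Logging.ClearProviders();
builder.Logging.AddJsonConsole(options =>
{
    options.IncludeScopes = true;          // include logical scopes in each entry
    options.TimestampFormat = "HH:mm:ss "; // prefix each entry with a timestamp
});
```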

We’ve started to see what the framework gives us; now we need to understand how to leverage it within our applications. Before proceeding, we need to understand what a logging layer is. It is a fundamental concept that will help us break down information into different layers and enable them as needed:
我们已经开始看到框架给我们带来了什么;现在我们需要了解如何在我们的应用程序中利用它。在继续之前,我们需要了解什么是日志层。这是一个基本概念,可帮助我们将信息分解为不同的层并根据需要启用它们:

Table 5.1 – Log levels
表 5.1 – 日志级别

Table 5.1 lists the log levels from the most verbose down to the least verbose.
表 5.1 按从最详细到最不详细的顺序列出了日志级别。

To learn more, you can read the article titled Logging in .NET Core and ASP.NET Core, which explains the logging process in detail here: https://docs.microsoft.com/aspnet/core/fundamentals/logging/.
若要了解详细信息,可以阅读标题为 Logging in .NET Core and ASP.NET Core 的文章,其中详细介绍了日志记录过程:https://docs.microsoft.com/aspnet/core/fundamentals/logging/。

If we select Information as our log level, everything from that level up to Critical will be tracked, while Debug and Trace will be skipped.
如果我们将日志级别选为 Information,则从该级别到 Critical 级别的所有内容都将被跟踪,而 Debug 和 Trace 将被跳过。

We’ve seen how to take advantage of the log layers; now, let’s move on to writing a single statement that will log information and can allow us to insert valuable content into the tracking system.
我们已经看到了如何利用日志层;现在,让我们继续编写一个语句,该语句将记录信息,并允许我们将有价值的内容插入到跟踪系统中。

Configuring logging
配置日志记录

To start using the logging component, you need to know a couple of pieces of information to start tracking data. Each logger object (ILogger<T>) must have an associated category. The log category allows you to segment the tracking layer with high granularity. For example, if we want to track everything that happens in a certain class or in an ASP.NET controller, without having to rewrite all our code, we need to enable the category or categories of interest.
要开始使用日志记录组件,您需要了解一些信息才能开始跟踪数据。每个记录器对象 (ILogger<T>) 都必须具有关联的类别。日志类别允许您以高粒度对跟踪层进行分段。例如,如果我们想跟踪某个类或 ASP.NET 控制器中发生的所有事情,而不必重写所有代码,我们需要启用我们感兴趣的一个或多个类别。

A category is simply a T class. Nothing could be simpler. You can reuse the type of the class into which the logger is injected. For example, if we’re implementing MyService, and we want to track everything that happens in the service under the same category, we just need to request an ILogger<MyService> object instance from the dependency injection engine.
类别就是一个 T 类,没有比这更简单的了。您可以重用注入记录器的那个类的类型。例如,如果我们正在实现 MyService,并且想要以同一类别跟踪该服务中发生的所有事情,则只需从依赖项注入引擎请求一个 ILogger<MyService> 对象实例。

Once the log categories are defined, we need to call the ILogger<T> object and take advantage of the object’s public methods. In the previous section, we looked at the log layers. Each log layer has its own method for tracking information. For example, LogDebug is the method specified to track information with a Debug layer.
定义日志类别后,我们需要调用 ILogger<T> 对象并利用该对象的公共方法。在上一节中,我们了解了日志层。每个日志层都有自己的跟踪信息方法。例如,LogDebug 是指定用于使用 Debug 层跟踪信息的方法。
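Putting categories and level methods together, a minimal sketch could look like the following; the MyService class is hypothetical:
将类别和级别方法结合起来,一个最小的示意如下;其中的 MyService 类是假设的:

```csharp
public class MyService
{
    // The logger's category is the full type name of MyService.
    private readonly ILogger<MyService> _logger;

    public MyService(ILogger<MyService> logger) => _logger = logger;

    public void DoWork()
    {
        _logger.LogTrace("Most verbose detail");
        _logger.LogDebug("Diagnostic information");
        _logger.LogInformation("Normal flow at {Time}", DateTimeOffset.Now);
        _logger.LogWarning("Unexpected, but handled");
        _logger.LogError("An operation failed");
        _logger.LogCritical("The application cannot continue");
    }
}
```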

Let’s now look at an example. I created a record in the Program.cs file:
现在让我们看一个示例。我在 Program.cs 文件中创建了一条记录:

internal record CategoryFiltered();

This record is used to define a particular category of logs that I want to track only when necessary. To do this, it is advisable to define a dedicated class or record and enable the necessary trace level for it.
此记录用于定义我只想在必要时跟踪的特定日志类别。为此,建议专门定义一个类或记录,并为其启用必要的跟踪级别。

A record that is defined in the Program.cs file has no namespace; we must remember this when we define the appsettings file with all the necessary information.
在 Program.cs 文件中定义的记录没有命名空间;当我们使用所有必要的信息定义 AppSettings 文件时,我们必须记住这一点。

If the log category is within a namespace, we must consider the full name of the class. In this case, it is LoggingSamples.Categories.MyCategoryAlert:
如果日志类别位于命名空间内,则必须考虑类的全名。在本例中,它是 LoggingSamples.Categories.MyCategoryAlert:

namespace LoggingSamples.Categories
{
    public class MyCategoryAlert
    {
    }
}

If we do not specify the category, as in the following example, the selected log level is the default:
如果我们不指定类别,如以下示例所示,则所选日志级别为默认日志级别:

  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning",
      "CategoryFiltered": "Information",
      "LoggingSamples.Categories.MyCategoryAlert": "Debug"
    }
  }

Anything that comprises infrastructure logs, such as Microsoft logs, stays in special categories such as Microsoft.AspNetCore or Microsoft.EntityFrameworkCore.
构成基础结构日志的任何内容(如 Microsoft 日志)都属于特殊类别,如 Microsoft.AspNetCore 或 Microsoft.EntityFrameworkCore。

The full list of Microsoft log categories can be found at the following link:
Microsoft 日志类别的完整列表可在以下链接中找到:
https://docs.microsoft.com/aspnet/core/fundamentals/logging/#aspnet-core-and-ef-core-categories

Sometimes, we need to define certain log levels depending on the tracking provider. For example, during development, we want to see all the information in the log console, but we only want to see errors in the log file.
有时,我们需要根据跟踪提供商定义某些日志级别。例如,在开发过程中,我们希望在日志控制台中看到所有信息,但我们只想在日志文件中看到错误。

To do this, we don’t need to change the configuration code; we just define the level for each provider. The following example shows how, for the Console provider alone, everything tracked in the Microsoft categories is shown from the Information level upward, while the other providers only show Warning and above:
为此,我们不需要更改配置代码,只需为每个提供程序定义其级别。以下示例显示,仅对于 Console 提供程序,Microsoft 类别中跟踪的所有内容从 Information 级别开始显示,而其他提供程序仅显示 Warning 及以上级别:

{
  "Logging": {      // Default, all providers.
    "LogLevel": {
      "Microsoft": "Warning"
    },
    "Console": { // Console provider.
      "LogLevel": {
        "Microsoft": "Information"
      }
    }
  }
}

Now that we’ve figured out how to enable logging and how to filter the various categories, all that’s left is to apply this information to a minimal API.
现在我们已经弄清楚了如何启用日志记录以及如何筛选各种类别,剩下的工作就是将此信息应用于最小的 API。

In the following code, we inject two ILogger instances with different categories. This is not a common practice, but we did it to make the example more concrete and show how the logger works:
在下面的代码中,我们注入了两个不同类别的 ILogger 实例。这不是一种常见的做法,但我们这样做是为了使示例更加具体并展示 Logger 的工作原理:

app.MapGet("/first-log", (ILogger<CategoryFiltered> loggerCategory, ILogger<MyCategoryAlert> loggerAlertCategory) =>
{
    loggerCategory.LogInformation("I'm information {MyName}", "My Name Information");
    loggerAlertCategory.LogInformation("I'm information {MyName}", "Alert Information");
    return Results.Ok();
})
.WithName("GetFirstLog");

In the preceding snippet, we inject two instances of the logger with different categories; each category tracks a single piece of information. The information is written according to a template that we will describe shortly. The effect of this example is that based on the level, we can show or disable the information displayed for a single category, without changing the code.
在前面的代码段中,我们注入了两个不同类别的 Logger 实例;每个类别跟踪一条信息。该信息是根据我们稍后将介绍的模板编写的。此示例的效果是,根据级别,我们可以显示或禁用为单个类别显示的信息,而无需更改代码。

We started by filtering the log by levels and categories. Now, we want to show you how to define a template that allows us to define a message and make some of its parts dynamic.
我们已经开始按级别和类别过滤日志。现在,我们想向您展示如何定义一个模板,该模板允许我们定义消息并使其部分内容动态化。

Customizing log messages
自定义日志消息

The message field required by the log methods is a simple string object that the logging frameworks can enrich and serialize into proper structures. The message is therefore essential to identify malfunctions and errors, and inserting objects into it can significantly help us identify the problem:
log 方法所需的 message 字段是一个简单的字符串对象,日志记录框架可以对其进行扩充并序列化为适当的结构。因此,该消息对于识别故障和错误至关重要,在其中插入对象可以显著帮助我们定位问题:

string apples = "apples";
string pears = "pears";
string bananas = "bananas";
logger.LogInformation("My fruit box has: {pears}, {bananas}, {apples}", apples, pears, bananas);

The message template contains placeholders that interpolate content into the textual message.
消息模板包含将内容插入到文本消息中的占位符。

In addition to the text, it is necessary to pass the arguments that replace the placeholders. The substitution is positional: what matters is the order of the arguments, not the names of the placeholders.
除了文本之外,还需要传递用于替换占位符的参数。替换是按位置进行的:重要的是参数的顺序,而不是占位符的名称。

The result therefore reflects the positional arguments, not the placeholder names:
因此,结果反映的是位置参数,而不是占位符名称:

My fruit box has: apples, pears, bananas

Now you know how to customize log messages. Next, let us learn about infrastructure logging, which is essential while working in more complex scenarios.
现在您知道如何自定义日志消息了。接下来,让我们了解一下基础设施日志记录,这在更复杂的场景中工作时是必不可少的。

Infrastructure logging
基础设施日志记录

In this section, we want to tell you about a little-known and little-used theme within ASP.NET applications: the W3C log.
在本节中,我们想向您介绍 ASP.NET 应用程序中一个鲜为人知且很少使用的主题:W3C 日志。

This log is a standard that is used by all web servers, not only Internet Information Services (IIS). It also works on NGINX and many other web servers and can be used on Linux, too. It is also used to trace various requests. However, the log cannot understand what happened inside the call.
此日志是所有 Web 服务器都使用的标准,而不仅仅是 Internet Information Services (IIS)。它也适用于 NGINX 和许多其他 Web 服务器,也可以在 Linux 上使用。它还用于跟踪各种请求。但是,日志无法理解调用中发生的情况。

Thus, this feature focuses on the infrastructure, that is, how many calls are made and to which endpoint.
因此,此功能侧重于基础设施,即进行多少次调用以及调用到哪个终端节点。

In this section, we will see how to enable tracking, which, by default, is stored on a file. The functionality takes a little time to find but enables more complex scenarios that must be managed with appropriate practices and tools, such as OpenTelemetry.
在本节中,我们将了解如何启用跟踪,默认情况下,跟踪存储在文件中。该功能需要一点时间才能找到,但支持更复杂的场景,这些场景必须使用适当的实践和工具(如 OpenTelemetry)进行管理。

OpenTelemetry
开放遥测

OpenTelemetry is a collection of tools, APIs, and SDKs. We use it to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) to help analyze software performance and behavior. You can learn more at the OpenTelemetry official website: https://opentelemetry.io/.
OpenTelemetry 是工具、API 和 SDK 的集合。我们使用它来检测、生成、收集和导出遥测数据(指标、日志和跟踪),以帮助分析软件性能和行为。您可以在 OpenTelemetry 官方网站上了解更多信息: https://opentelemetry.io/.

To configure W3C logging, you need to register the AddW3CLogging method and configure all available options.
要配置 W3C 日志记录,您需要注册 AddW3CLogging 方法并配置所有可用选项。

To enable logging, you only need to add UseW3CLogging.
要启用日志记录,您只需添加 UseW3CLogging。

The writing of the log does not change; the two methods enable the scenario just described and start writing data to the W3C log standard:
日志的写入不会改变;这两种方法启用刚才描述的方案并开始将数据写入 W3C 日志标准:

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddW3CLogging(logging =>
{
    logging.LoggingFields = W3CLoggingFields.All;
});
var app = builder.Build();
app.UseW3CLogging();
app.MapGet("/first-w3c-log", (IWebHostEnvironment webHostEnvironment) =>
{
    return Results.Ok(new { PathToWrite = webHostEnvironment.ContentRootPath });
})
.WithName("GetW3CLog");

Here is the header of the file that is created (the fields listed in it will be tracked for each request):
以下是所创建文件的标头(其中列出的字段将针对每个请求进行跟踪):

#Version: 1.0
#Start-Date: 2022-01-03 10:34:15
#Fields: date time c-ip cs-username s-computername s-ip s-port cs-method cs-uri-stem cs-uri-query sc-status time-taken cs-version cs-host cs(User-Agent) cs(Cookie) cs(Referer)

We’ve seen how to track information about the infrastructure hosting our application; now, we want to increase log performance with new features in .NET 6 that help us set up standard log messages and avoid errors.
我们已经了解了如何跟踪有关托管应用程序的基础设施的信息;现在,我们希望通过 .NET 6 中的新功能来提高日志性能,这些功能可以帮助我们设置标准日志消息并避免错误。

Source generators
源生成器

One of the novelties of .NET 6 is source generators; they are performance optimization tools that generate executable code at compile time. Creating executable code at compile time improves performance because, during the execution phase of the program, all the generated structures are equivalent to code written by the programmer before compilation.
.NET 6 的新颖之处之一是源生成器;它们是在编译时生成可执行代码的性能优化工具。在编译时创建可执行代码可以提高性能,因为在程序的执行阶段,所有生成的结构都等同于程序员在编译前编写的代码。

String interpolation using $”” is generally great, and it makes for much more readable code than string.Format(), but you should almost never use it when writing log messages:
使用 $"" 的字符串插值通常很棒,它使代码比 string.Format() 更具可读性,但在编写日志消息时几乎不应该使用它:

logger.LogInformation($"I'm {person.Name}-{person.Surname}")

The output of this method to the Console will be the same when using string interpolation or structural logging, but there are several problems:
使用字符串插值或结构日志记录时,此方法对 Console 的输出将相同,但存在几个问题:

• You lose the structured logs and you won’t be able to filter by the format values or archive the log message in the custom field of NoSQL products.
您将丢失结构化日志,并且无法按格式值进行筛选,也无法在 NoSQL 产品的自定义字段中存档日志消息。

• Similarly, you no longer have a constant message template to find all identical logs.
同样,您不再有固定的消息模板来查找所有相同的日志。

• The serialization of the person is done ahead of time before the string is passed into LogInformation.
将字符串传递到 LogInformation 之前,会提前完成人员的序列化。

• The serialization is done even though the log filter is not enabled. To avoid processing the log, it is necessary to check whether the layer is active, which would make the code much less readable.
即使未启用日志过滤器,也会完成序列化。为避免处理日志,有必要检查该层是否处于活动状态,这将使代码的可读性大大降低。
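The last point can be sketched as follows (assuming a hypothetical `person` object and a configured `logger`). The explicit `IsEnabled` guard is the verbose pattern that structured logging, and later the source generator, make unnecessary:

```csharp
// Verbose guard: skip the expensive interpolation when the level is off.
if (logger.IsEnabled(LogLevel.Information))
{
    logger.LogInformation($"I'm {person.Name}-{person.Surname}");
}

// Structured alternative: the template is only formatted if the level
// is enabled, and the placeholder values survive as structured fields.
logger.LogInformation("I'm {Name}-{Surname}", person.Name, person.Surname);
```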

Let us say you decide to update the log message to include Age to clarify why the log is being written:
假设您决定更新日志消息以包含 Age 以阐明写入日志的原因:

logger.LogInformation("I'm {Name}-{Surname} with {Age}", person.Name, person.Surname);

In the previous code snippet, I added Age in the message template but not in the argument list. There is no compile-time error, but when this line is executed, an exception is thrown due to the missing third argument.
在前面的代码段中,我在消息模板中添加了 Age,但没有在参数列表中添加。编译时没有错误,但当执行此行时,由于缺少第三个参数,会引发异常。
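The runtime exception is avoided by keeping the template and the argument list in sync. Here is the corrected call (assuming, as before, that `person` exposes an `Age` property):

```csharp
// Three placeholders, three arguments — substituted positionally.
logger.LogInformation("I'm {Name}-{Surname} with {Age}",
    person.Name, person.Surname, person.Age);
```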

LoggerMessage in .NET 6 comes to our rescue, automatically generating the code to log the necessary data. The methods will require the correct number of parameters and the text will be formatted in a standard way.
.NET 6 中的 LoggerMessage 可以帮我们忙,自动生成代码来记录必要的数据。这些方法将需要正确数量的参数,并且文本将以标准方式格式化。

To use the LoggerMessage syntax, you can take advantage of a partial class or a static class. Inside the class, it will be possible to define the method or methods with all the various log cases:
要使用 LoggerMessage 语法,您可以利用分部类或静态类。在类中,可以使用所有不同的日志情况定义一个或多个方法:

public partial class LogGenerator
{
    private readonly ILogger<LogGeneratorCategory> _logger;

    public LogGenerator(ILogger<LogGeneratorCategory> logger)
    {
        _logger = logger;
    }

    [LoggerMessage(
        EventId = 100,
        EventName = "Start",
        Level = LogLevel.Debug,
        Message = "Start Endpoint: {endpointName} with data {dataIn}")]
    public partial void StartEndpointSignal(string endpointName, object dataIn);

    [LoggerMessage(
        EventId = 101,
        EventName = "StartFiltered",
        Message = "Log level filtered: {endpointName} with data {dataIn}")]
    public partial void LogLevelFilteredAtRuntime(LogLevel logLevel, string endpointName, object dataIn);
}

public class LogGeneratorCategory { }

In the previous example, we created a partial class, injected the logger and its category, and implemented two methods. The methods are used in the following code:
在前面的示例中,我们创建了一个分部类,注入了 Logger 及其类别,并实现了两个方法。这些方法在以下代码中使用:

app.MapPost("/start-log", (PostData data, LogGenerator logGenerator) =>
{
    logGenerator.StartEndpointSignal("start-log", data);
    logGenerator.LogLevelFilteredAtRuntime(LogLevel.Trace,
      "start-log", data);
})
.WithName("StartLog");
internal record PostData(DateTime Date, string Name);

Notice how in the second method, we also have the possibility to define the log level at runtime.
请注意,在第二种方法中,我们还可以在运行时定义日志级别。

Behind the scenes, the [LoggerMessage] source generator generates the LoggerMessage.Define() code to optimize your method call. The following output shows the generated code:
在后台,[LoggerMessage] 源生成器会生成 LoggerMessage.Define() 代码来优化方法调用。以下输出显示了生成的代码:

[global::System.CodeDom.Compiler.GeneratedCodeAttribute("Microsoft.Extensions.Logging.Generators", "6.0.5.2210")]
public partial void LogLevelFilteredAtRuntime(global::Microsoft.Extensions.Logging.LogLevel logLevel, global::System.String endpointName, global::System.Object dataIn)
{
    if (_logger.IsEnabled(logLevel))
    {
        _logger.Log(
            logLevel,
            new global::Microsoft.Extensions.Logging.EventId(101, "StartFiltered"),
            new __LogLevelFilteredAtRuntimeStruct(endpointName, dataIn),
            null,
            __LogLevelFilteredAtRuntimeStruct.Format);
    }
}

In this section, you have learned about some logging providers, different log levels, how to configure them, what parts of the message template to modify, enabling logging, and the benefits of source generators. In the next section, we will focus more on logging providers.
在本节中,您了解了一些日志记录提供程序、不同的日志级别、如何配置它们、要修改消息模板的哪些部分、启用日志记录以及源生成器的好处。在下一节中,我们将更多地关注日志提供程序。

Leveraging the logging framework
利用日志记录框架

The logging framework, as mentioned at the beginning of the chapter, already ships by design with a series of providers that do not require adding any additional packages. Now, let us explore how to work with these providers and how to build custom ones. We will analyze only the Console log provider because it contains everything needed to apply the same reasoning to the other log providers.
如本章开头所述,日志记录框架在设计上已经自带一系列提供程序,无需添加任何其他包。现在,让我们探索如何使用这些提供程序以及如何构建自定义提供程序。我们将仅分析 Console 日志提供程序,因为它包含将相同推理应用于其他日志提供程序所需的全部要素。

Console log
控制台日志

The Console log provider is the most used one because, during the development, it gives us a lot of information and collects all the application errors.
Console 日志提供程序是最常用的一种,因为在开发过程中,它为我们提供了大量信息并收集了所有应用程序错误。

Since .NET 6, this provider has been joined by the AddJsonConsole provider, which, besides tracing the errors like the console, serializes them into a human-readable JSON object.
从 .NET 6 开始,此提供程序又增加了 AddJsonConsole 提供程序,它除了像控制台一样跟踪错误外,还会将它们序列化为人类可读的 JSON 对象。

In the following example, we show how to configure the JsonConsole provider and also add indentation when writing the JSON payload:
在以下示例中,我们将展示如何配置 JsonConsole 提供程序,并在写入 JSON 有效负载时添加缩进:

builder.Logging.AddJsonConsole(options =>
        options.JsonWriterOptions = new JsonWriterOptions()
        {
            Indented = true
        });

As we’ve seen in the previous examples, we’re going to track the information with the message template:
正如我们在前面的示例中所看到的,我们将使用 message 模板跟踪信息:

app.MapGet("/first-log", (ILogger<CategoryFiltered> loggerCategory, ILogger<MyCategoryAlert> loggerAlertCategory) =>
{
    loggerCategory.LogInformation("I'm information {MyName}", "My Name Information");
    loggerCategory.LogDebug("I'm debug {MyName}", "My Name Debug");
    loggerCategory.LogInformation("I'm debug {Data}", new PayloadData("CategoryRoot", "Debug"));
    loggerAlertCategory.LogInformation("I'm information {MyName}", "Alert Information");
    loggerAlertCategory.LogDebug("I'm debug {MyName}", "Alert Debug");
    var p = new PayloadData("AlertCategory", "Debug");
    loggerAlertCategory.LogDebug("I'm debug {Data}", p);
    return Results.Ok();
})
.WithName("GetFirstLog");

Finally, an important note: the Console and JsonConsole providers do not serialize objects passed via the message template but only write the class name.
最后,需要注意的是:Console 和 JsonConsole 提供程序不会序列化通过消息模板传递的对象,而只写入类名。

var p = new PayloadData("AlertCategory", "Debug");
loggerAlertCategory.LogDebug("I'm debug {Data}", p);

This is definitely a limitation of these providers. Thus, we suggest using structured logging tools such as NLog, log4net, and Serilog, which we will talk about shortly.
这无疑是这些提供程序的一个限制。因此,我们建议使用结构化日志记录工具,例如 NLog、log4net 和 Serilog,我们稍后会讨论这些工具。
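Until a structured logging tool is adopted, one possible workaround (our suggestion, not part of the book's listing) is to serialize the payload yourself before passing it to the message template:

```csharp
using System.Text.Json;

var p = new PayloadData("AlertCategory", "Debug");

// Serialize explicitly so the Console/JsonConsole providers write the full
// object content instead of just the class name.
loggerAlertCategory.LogDebug("I'm debug {Data}", JsonSerializer.Serialize(p));
```

Note that this serializes eagerly even when the Debug level is filtered out, so in hot paths it should be guarded with `loggerAlertCategory.IsEnabled(LogLevel.Debug)`.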

We present the outputs of the previous lines with the two providers just described:
我们将前面几行的输出与刚才描述的两个提供商一起呈现:

Figure 5.1 – AddJsonConsole output
图 5.1 – AddJsonConsole 输出

Figure 5.1 shows the log formatted as JSON, with several additional details compared to the traditional console log.
图 5.1 显示了格式为 JSON 的日志,与传统控制台日志相比,还有一些额外的细节。

Figure 5.2 – Default logging provider Console output
图 5.2 – 默认日志记录提供程序控制台输出

Figure 5.2 shows the default logging provider Console output.
图 5.2 显示了默认的日志记录提供程序 Console 输出。

Given the default providers, we want to show you how you can create a custom one that fits the needs of your application.
给定默认提供程序,我们想向您展示如何创建适合您应用程序需求的自定义提供程序。

Creating a custom provider
创建自定义提供程序

The logging framework designed by Microsoft can be customized with little effort. Thus, let us learn how to create a custom provider.
Microsoft 设计的日志记录框架可以毫不费力地进行自定义。因此,让我们学习如何创建自定义提供商(provider)。

Why create a custom provider? Well, put simply, to avoid taking dependencies on external logging libraries and to better manage the performance of the application. Finally, it also encapsulates some custom logic of your specific scenario and makes your code more manageable and readable.
为什么要创建自定义提供程序?简单来说,是为了避免依赖外部日志库,并更好地管理应用程序的性能。最后,它还封装了特定场景的一些自定义逻辑,并使代码更易于管理和阅读。

In the following example, we have simplified the usage scenario to show you the minimum components needed to create a working logging provider.
在以下示例中,我们简化了使用场景,向您展示创建一个可用的日志记录提供程序所需的最少组件。

One of the fundamental parts of a provider is the ability to configure its behavior. Let us create a class that can be customized at application startup or retrieve information from appsettings.
提供程序的基本部分之一是配置其行为的能力。让我们创建一个类,该类可以在应用程序启动时自定义或从 appsettings 中检索信息。

In our example, we define a fixed EventId to verify a daily rolling file logic and a path of where to write the file:
在我们的示例中,我们定义了一个固定的 EventId 来验证每日滚动文件逻辑和写入文件的路径:

public class FileLoggerConfiguration
{
    public int EventId { get; set; }
    public string PathFolderName { get; set; } = "logs";
    public bool IsRollingFile { get; set; }
}

The custom provider we are writing will be responsible for writing the log information to a text file. We achieve this by implementing the log class, which we call FileLogger, which implements the ILogger interface.
我们正在编写的自定义提供程序将负责将日志信息写入文本文件。我们通过实现 log 类来实现这一点,我们称之为 FileLogger,它实现 ILogger 接口。

In the class logic, all we do is implement the log method and check which file to put the information in.
在 class logic中,我们所做的只是实现 log 方法并检查将信息放入哪个文件。

We put the directory verification in the provider class shown next, but it would be more correct to put all the control logic in this method. We also need to make sure that the Log method does not throw exceptions at the application level. The logger should never affect the stability of the application:
我们将目录验证放在接下来展示的提供程序类中,但更正确的做法是将所有控制逻辑放在此方法中。我们还需要确保 Log 方法不会在应用程序级别引发异常。记录器绝不应影响应用程序的稳定性:
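One way to honor that rule — an addition of ours, not present in the original listing, which calls File.AppendAllLines directly — is to route the write through a defensive helper:

```csharp
using System;
using System.IO;

// Sketch: a defensive append that never lets a logging failure escape.
static void SafeAppend(string fullPath, string line)
{
    try
    {
        File.AppendAllLines(fullPath, new[] { line });
    }
    catch (Exception)
    {
        // Swallow I/O errors (locked file, missing folder, full disk):
        // at worst we lose a log line, never application stability.
    }
}
```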

public class FileLogger : ILogger
{
    private readonly string name;
    private readonly Func<FileLoggerConfiguration> getCurrentConfig;

    public FileLogger(string name, Func<FileLoggerConfiguration> getCurrentConfig)
    {
        this.name = name;
        this.getCurrentConfig = getCurrentConfig;
    }

    public IDisposable BeginScope<TState>(TState state) => default!;

    public bool IsEnabled(LogLevel logLevel) => true;

    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception? exception, Func<TState, Exception?, string> formatter)
    {
        if (!IsEnabled(logLevel))
        {
            return;
        }
        var config = getCurrentConfig();
        if (config.EventId == 0 || config.EventId == eventId.Id)
        {
            string line = $"{name} - {formatter(state, exception)}";
            string fileName = config.IsRollingFile ? RollingFileName : FullFileName;
            string fullPath = Path.Combine(config.PathFolderName, fileName);
            File.AppendAllLines(fullPath, new[] { line });
        }
    }

    private static string RollingFileName => $"log-{DateTime.UtcNow:yyyy-MM-dd}.txt";
    private const string FullFileName = "logs.txt";
}

Now, we need to implement the ILoggerProvider interface, which is intended to create one or more instances of the logger class just discussed.
现在,我们需要实现 ILoggerProvider 接口,该接口旨在创建刚才讨论的 Logger 类的一个或多个实例。

In this class, we check the directory we mentioned in the previous paragraph, but we also check whether the settings in the appsettings file change, via IOptionsMonitor<T>:
在这个类中,我们检查了我们在上一段中提到的目录,但我们也会通过 IOptionsMonitor<T>检查 appsettings 文件中的设置是否发生了变化:

public class FileLoggerProvider : ILoggerProvider
{
    private readonly IDisposable onChangeToken;
    private FileLoggerConfiguration currentConfig;
    private readonly ConcurrentDictionary<string, FileLogger> _loggers = new();

    public FileLoggerProvider(IOptionsMonitor<FileLoggerConfiguration> config)
    {
        currentConfig = config.CurrentValue;
        CheckDirectory();
        onChangeToken = config.OnChange(updateConfig =>
        {
            currentConfig = updateConfig;
            CheckDirectory();
        });
    }

    public ILogger CreateLogger(string categoryName)
    {
        return _loggers.GetOrAdd(categoryName, name => new FileLogger(name, () => currentConfig));
    }

    public void Dispose()
    {
        _loggers.Clear();
        onChangeToken.Dispose();
    }

    private void CheckDirectory()
    {
        if (!Directory.Exists(currentConfig.PathFolderName))
            Directory.CreateDirectory(currentConfig.PathFolderName);
    }
}

Finally, to simplify its use and configuration during the application startup phase, we also define an extension method for registering the various classes just mentioned.
最后,为了简化它在应用程序启动阶段的使用和配置,我们还定义了一个扩展方法,用于注册刚才提到的各种类。

The AddFile method will register ILoggerProvider and couple it to its configuration (very simple as an example, but it encapsulates several aspects of configuring and using a custom provider):
AddFile 方法将注册 ILoggerProvider 并将其耦合到其配置(示例非常简单,但它封装了配置和使用自定义提供程序的几个方面):

public static class FileLoggerExtensions
{
    public static ILoggingBuilder AddFile(this ILoggingBuilder builder)
    {
        builder.AddConfiguration();
        builder.Services.TryAddEnumerable(ServiceDescriptor.Singleton<ILoggerProvider, FileLoggerProvider>());
        LoggerProviderOptions.RegisterProviderOptions<FileLoggerConfiguration, FileLoggerProvider>(builder.Services);
        return builder;
    }

    public static ILoggingBuilder AddFile(
        this ILoggingBuilder builder,
        Action<FileLoggerConfiguration> configure)
    {
        builder.AddFile();
        builder.Services.Configure(configure);
        return builder;
    }
}

We wire everything up in the Program.cs file with the AddFile extension method, as shown:
我们在 Program.cs 文件中使用 AddFile 扩展方法注册上述所有内容,如下所示:

builder.Logging.AddFile(configuration =>
{
    configuration.PathFolderName = Path.Combine(
      builder.Environment.ContentRootPath, "logs");
    configuration.IsRollingFile = true;
});

The output is shown in Figure 5.3, where we can see both Microsoft log categories in the first five lines (this is the classic application startup information):
输出如图 5.3 所示,我们可以在前五行中看到两个 Microsoft 日志类别(这是经典应用程序启动信息):

Figure 5.3 – File log provider output
图 5.3 – 文件日志提供程序输出

Then, the handler of the minimal APIs that we reported in the previous sections is called. As you can see, no exception data or data passed to the logger is serialized.
然后,调用我们在前面几节中报告的最小 API 的处理程序。如您所见,不会序列化任何异常数据或传递给 logger 的数据。

To add this functionality as well, it is necessary to rewrite the ILogger formatter and support serialization of the objects. This will give you everything you need in a logging framework that is useful for production scenarios.
若要同时添加此功能,必须重写 ILogger 格式化程序并支持对象的序列化。这将为您提供一个适用于生产场景的日志记录框架所需的一切。

We’ve seen how to configure the log and how to customize the provider object to create a structured log to send to a service or storage.
我们已经了解了如何配置日志以及如何自定义 provider 对象以创建要发送到服务或存储的结构化日志。

In the next section, we want to describe the Azure Application Insights service, which is very useful for both logging and application monitoring.
在下一部分中,我们将介绍 Azure Application Insights 服务,该服务对于日志记录和应用程序监视都非常有用。

Application Insights
应用程序洞察

In addition to the providers already seen, one of the most used is Azure Application Insights. This provider allows you to send every single log event to the Azure service. To add the provider to our project, all we have to do is install the following NuGet package:
除了已经介绍的提供程序之外,最常用的提供程序之一是 Azure Application Insights。此提供程序允许您将每一个日志事件发送到 Azure 服务。要将提供程序添加到我们的项目中,只需安装以下 NuGet 包:

<PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.20.0" />

Registering the provider is very easy.
注册提供商非常简单。

We first register the Application Insights framework with AddApplicationInsightsTelemetry and then register its provider on the logging framework with AddApplicationInsights.
我们首先使用 AddApplicationInsightsTelemetry 注册 Application Insights 框架,然后使用 AddApplicationInsights 将其提供程序注册到日志记录框架。

The NuGet package described previously also references the component that plugs Application Insights into the logging framework:
前面描述的 NuGet 包还引用了将 Application Insights 接入日志记录框架的组件:

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddApplicationInsightsTelemetry();
builder.Logging.AddApplicationInsights();

To register the instrumentation key, which is the key that is issued after registering the service on Azure, you will need to pass this information to the registration method. We can avoid hardcoding this information by placing it in the appsettings.json file using the following format:
若要注册检测密钥(在 Azure 上注册服务后颁发的密钥),您需要将此信息传递给注册方法。我们可以使用以下格式将此信息放在 appsettings.json 文件中,从而避免对此信息进行硬编码:

"ApplicationInsights": {
    "InstrumentationKey": "your-key"
  },

This process is also described in the documentation (https://docs.microsoft.com/it-it/azure/azure-monitor/app/asp-net-core#enable-application-insights-server-side-telemetry-no-visual-studio).
文档 (https://docs.microsoft.com/it-it/azure/azure-monitor/app/asp-net-core#enable-application-insights-server-side-telemetry-no-visual-studio) 中也介绍了此过程。

By launching the method already discussed in the previous sections, we have all the information hooked into Application Insights.
通过启动前面部分中已讨论的方法,我们将所有信息挂接到 Application Insights 中。

Application Insights groups the logs under a particular trace. A trace is a call to an API, so everything that happens in that call is logically grouped together. This feature takes advantage of the WebServer information and, in particular, TraceParentId issued by the W3C standard for each call.
Application Insights 将日志分组到特定跟踪下。跟踪是对 API 的调用,因此该调用中发生的所有事情都在逻辑上分组在一起。此功能利用 WebServer 信息,特别是 W3C 标准为每个调用颁发的 TraceParentId。

In this way, Application Insights can bind calls between various minimal APIs, should we be in a microservice application or with multiple services collaborating with each other.
通过这种方式,Application Insights 可以在各种最小 API 之间绑定调用,前提是我们位于微服务应用程序中或多个服务相互协作。

Figure 5.4 – Application Insights with a standard log provider
图 5.4 – 具有标准日志提供程序的 Application Insights

We notice how the default formatter of the logging framework does not serialize the PayloadData object but only writes the text of the object.
我们注意到日志记录框架的默认格式化程序不会序列化 PayloadData 对象,而只写入对象的文本。

In the applications that we will bring into production, it will be necessary to also trace the serialization of the objects. Understanding the state of an object at a given point in time is fundamental to analyzing the errors that occurred during a particular call, whether while running queries against the database or reading data from it.
在我们即将投入生产的应用程序中,还需要跟踪对象的序列化。了解对象在某一时刻的状态,对于分析特定调用期间发生的错误(无论是在数据库中运行查询还是从中读取数据时)至关重要。

Storing a structured log with Serilog
使用 Serilog 存储结构化日志

As we just discussed, tracking structured objects in the log helps us tremendously in understanding errors.
正如我们刚才讨论的,跟踪日志中的结构化对象对我们理解错误有很大帮助。

We, therefore, suggest one of the many logging frameworks: Serilog.
因此,我们建议使用众多日志框架之一:Serilog。

Serilog is a comprehensive library that has many sinks already written that allow you to store log data and search it later.
Serilog 是一个综合库,它已经编写了许多接收器,允许您存储日志数据并在以后进行搜索。

Serilog is a logging library that allows you to track information on multiple data sources. In Serilog, these sources are called sinks, and they allow you to write structured data inside the log applying a serialization of the data passed to the logging system.
Serilog 是一个日志记录库,允许您跟踪有关多个数据源的信息。在 Serilog 中,这些源称为 sink,它们允许您在日志中写入结构化数据,应用传递给日志记录系统的数据的序列化。

Let’s see how to get started using Serilog for a minimal API application. Let’s install these NuGet packages. Our goal will be to track the same information we’ve been using so far, specifically Console and ApplicationInsights:
让我们看看如何开始将 Serilog 用于最小的 API 应用程序。让我们安装这些 NuGet 包。我们的目标是跟踪我们目前一直在使用的相同信息,特别是控制台和 ApplicationInsights:

<PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.20.0" />
<PackageReference Include="Serilog.AspNetCore" Version="4.1.0" />
<PackageReference Include="Serilog.Settings.Configuration" Version="3.3.0" />
<PackageReference Include="Serilog.Sinks.ApplicationInsights" Version="3.1.0" />

The first package is the one needed for the ApplicationInsights SDK in the application. The second package allows us to register Serilog in the ASP.NET pipeline and to be able to exploit Serilog. The third package allows us to configure the framework in the appsettings file and not have to rewrite the application to change a parameter or code. Finally, we have the package to add the ApplicationInsights sink.
第一个包是应用程序中 ApplicationInsights SDK 所需的包。第二个包允许我们在 ASP.NET 管道中注册 Serilog,并能够利用 Serilog。第三个包允许我们在 appsettings 文件中配置框架,而不必重写应用程序来更改参数或代码。最后,我们有了用于添加 ApplicationInsights 接收器的包。

In the appsettings file, we create a new Serilog section, in which we should register the various sinks in the Using section. We register the log level, the sinks, the enrichers that enrich the information for each event, and the properties, such as the application name:
在 appsettings 文件中,我们创建一个新的 Serilog 部分,我们应该在其中注册 Using 部分的各种接收器。我们注册日志级别、接收器、扩充每个事件信息的 enricher 以及属性,例如应用程序名称:

"Serilog": {
    "Using": [ "Serilog.Sinks.Console", "Serilog.Sinks.ApplicationInsights" ],
    "MinimumLevel": "Verbose",
    "WriteTo": [
      { "Name": "Console" },
      {
        "Name": "ApplicationInsights",
        "Args": {
          "restrictedToMinimumLevel": "Information",
          "telemetryConverter": "Serilog.Sinks.ApplicationInsights.Sinks.ApplicationInsights.TelemetryConverters.TraceTelemetryConverter, Serilog.Sinks.ApplicationInsights"
        }
      }
    ],
    "Enrich": [ "FromLogContext" ],
    "Properties": {
      "Application": "MinimalApi.Packt"
    }
  }

Now, we just have to register Serilog in the ASP.NET pipeline:
现在,我们只需要在 ASP.NET 管道中注册 Serilog:

using Microsoft.ApplicationInsights.Extensibility;
using Serilog;

var builder = WebApplication.CreateBuilder(args);
builder.Logging.AddSerilog();
builder.Services.AddApplicationInsightsTelemetry();

var app = builder.Build();

Log.Logger = new LoggerConfiguration()
    .WriteTo.ApplicationInsights(
        app.Services.GetRequiredService<TelemetryConfiguration>(),
        TelemetryConverter.Traces)
    .CreateLogger();

With the builder.Logging.AddSerilog() statement, we register Serilog with the logging framework, so every event logged through the usual ILogger interface is passed to it. Since we need the TelemetryConfiguration class registered by ApplicationInsights, we hook the configuration to Serilog's static Logger object after the app is built. Serilog then forwards the information from the Microsoft logging framework to its own pipeline, adding all the necessary information.
在 builder.Logging.AddSerilog() 语句中,我们将 Serilog 注册到日志记录框架,这样通过常用的 ILogger 接口记录的所有事件都会传递给它。由于需要用到 ApplicationInsights 注册的 TelemetryConfiguration 类,我们在应用构建完成后将配置挂接到 Serilog 的静态 Logger 对象上。Serilog 随后会将信息从 Microsoft 日志记录框架转发到自己的管道,并添加所有必要的信息。

The usage is very similar to the previous one, but this time, we add an @ (at) to the message template that will tell Serilog to serialize the sent object.
用法与前一个非常相似,但这次,我们在消息模板中添加一个 @ (at),它将告诉 Serilog 序列化发送的对象。

With this very simple {@Person} wording, we will be able to achieve the goal of serializing the object and sending it to the ApplicationInsights service:
使用这个非常简单的 {@Person} 措辞,我们将能够实现序列化对象并将其发送到 ApplicationInsights 服务的目标:

app.MapGet("/serilog", (ILogger<CategoryFiltered> loggerCategory) =>
{
    loggerCategory.LogInformation("I'm {@Person}",
        new Person("Andrea", "Tosato", new DateTime(1986, 11, 9)));
    return Results.Ok();
})
.WithName("GetFirstLog");
internal record Person(string Name, string Surname, DateTime Birthdate);

Finally, we have to find the complete data, serialized with the JSON format, in the Application Insights service.
最后,我们必须在 Application Insights 服务中找到使用 JSON 格式序列化的完整数据。

Figure 5.5 – Application Insights with structured data
图 5.5 – 包含结构化数据的 Application Insights

Summary
总结

In this chapter, we have seen several logging aspects of the implementation of minimal APIs.
在本章中,我们了解了最小 API 实现的几个日志记录方面。

We started with the logging framework that ships with ASP.NET, and we understood how to configure and customize it. We focused on how to define a message template and how to avoid errors with the help of the source generator.
我们首先了解了 ASP.NET 自带的日志记录框架,并理解了如何配置和自定义它。我们重点介绍了如何定义消息模板,以及如何借助源生成器避免错误。

We saw how to use the new provider to serialize logs with the JSON format and create a custom provider. These elements turned out to be very important for mastering the logging tool and customizing it to your liking.
我们了解了如何使用新的提供程序以 JSON 格式序列化日志并创建自定义提供程序。事实证明,这些元素对于掌握日志记录工具并根据您的喜好对其进行自定义非常重要。

Not only was the application log mentioned but also the infrastructure log, which together with Application Insights becomes a key element to monitoring your application. Finally, we understood that there are ready-made tools, such as Serilog, that help us to have ready-to-use functionalities with a few steps thanks to some packages installed by NuGet.
不仅提到了应用程序日志,还提到了基础结构日志,它与 Application Insights 一起成为监视应用程序的关键元素。最后,我们了解到有一些现成的工具,例如 Serilog,由于 NuGet 安装的一些软件包,它们可以帮助我们通过几个步骤获得即用型功能。

In the next chapter, we will present the mechanisms for validating an input object to the API. This is a fundamental feature for returning a correct error to callers and discarding malformed requests or those produced by illicit activities, such as spam and attacks, aimed at generating load on our servers.
在下一章中,我们将介绍验证 API 输入对象的机制。这是一项基本功能,可以向调用方返回正确的错误,并丢弃格式不正确的请求,或由垃圾邮件和攻击等旨在给我们的服务器制造负载的非法活动产生的请求。

6 Exploring Validation and Mapping

6 探索验证和映射

In this chapter of the book, we will discuss how to perform data validation and mapping with minimal APIs, showing what features we currently have, what is missing, and what the most interesting alternatives are. Learning about these concepts will help us to develop more robust and maintainable applications.
在本书的这一章中,我们将讨论如何使用最少的 API 执行数据验证和映射,展示我们目前拥有的功能、缺少的功能以及最有趣的替代方案。了解这些概念将有助于我们开发更健壮且可维护的应用程序。

In this chapter, we will be covering the following topics:
在本章中,我们将介绍以下主题:

• Handling validation
处理验证

• Mapping data to and from APIs
将数据映射到 API 或从 API 映射数据

Technical requirements
技术要求

To follow the descriptions in this chapter, you will need to create an ASP.NET Core 6.0 Web API application. Refer to the Technical requirements section in Chapter 2, Exploring Minimal APIs and Their Advantages, for instructions on how to do so.
要按照本章中的描述进行操作,您需要创建一个 ASP.NET Core 6.0 Web API 应用程序。有关如何执行此操作的说明,请参阅第 2 章 “探索最小 API 及其优势”中的“技术要求”部分。

If you’re using your console, shell, or bash terminal to create the API, remember to change your working directory to the current chapter number (Chapter06).
如果您使用控制台、shell 或 bash 终端创建 API,请记住将工作目录更改为当前章节编号 (Chapter06)。

All the code samples in this chapter can be found in the GitHub repository for this book at https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter06.
本章中的所有代码示例都可以在本书的 GitHub 存储库中找到,网址为 https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter06

Handling validation
处理验证

Data validation is one of the most important processes in any working software. In the context of a Web API, we perform the validation process to ensure that the information passed to our endpoints respects certain rules – for example, that a Person object has both the FirstName and LastName properties defined, an email address is valid, or an appointment date isn’t in the past.
数据验证是任何工作软件中最重要的过程之一。在 Web API 的上下文中,我们执行验证过程以确保传递给终端节点的信息符合某些规则,例如,Person 对象同时定义了 FirstName 和 LastName 属性、电子邮件地址有效或约会日期不是过去的日期。

In controller-based projects, we can perform these checks, also termed model validation, directly on the model, using data annotations. In fact, the ApiController attribute that is placed on a controller makes model validation errors automatically trigger a 400 Bad Request response if one or more validation rules fail. Therefore, in controller-based projects, we typically don’t need to perform explicit model validation at all: if the validation fails, our endpoint will never be invoked.
在基于控制器的项目中,我们可以使用数据注释直接在模型上执行这些检查,也称为模型验证。事实上,放置在控制器上的 ApiController 属性会使模型验证错误在一个或多个验证规则失败时自动触发 400 Bad Request 响应。因此,在基于控制器的项目中,我们通常根本不需要执行显式模型验证:如果验证失败,我们的端点将永远不会被调用。

Note : The ApiController attribute enables the automatic model validation behavior using the ModelStateInvalidFilter action filter.
注意 : ApiController 属性使用 ModelStateInvalidFilter 操作筛选器启用自动模型验证行为。

Unfortunately, minimal APIs do not provide built-in support for validation. The IModelValidator interface and all related objects cannot be used. Thus, we don’t have a ModelState; we can’t prevent the execution of our endpoint if there is a validation error and must explicitly return a 400 Bad Request response.
遗憾的是,最小 API 不提供对验证的内置支持。不能使用 IModelValidator 接口和所有相关对象。因此,我们没有 ModelState;如果存在验证错误,我们无法阻止终端节点的执行,并且必须显式返回 400 Bad Request 响应。

So, for example, let’s see the following code:
因此,例如,让我们看看以下代码:

app.MapPost("/people", (Person person) =>
{
    return Results.NoContent();
});
public class Person
{
    [Required]
    [MaxLength(30)]
    public string FirstName { get; set; }
    [Required]
    [MaxLength(30)]
    public string LastName { get; set; }
    [EmailAddress]
    [StringLength(100, MinimumLength = 6)]
    public string Email { get; set; }
}

As we can see, the endpoint will be invoked even if the Person argument does not respect the validation rules. There is only one exception: if we use nullable reference types and we don’t pass a body in the request, we effectively get a 400 Bad Request response. As mentioned in Chapter 2, Exploring Minimal APIs and Their Advantages, nullable reference types are enabled by default in .NET 6.0 projects.
正如我们所看到的,即使 Person 参数不遵守验证规则,也会调用端点。只有一个例外:如果我们使用可为 null 的引用类型,并且我们没有在请求中传递正文,我们实际上会得到 400 Bad Request 响应。如第 2 章 探索最小 API 及其优点中所述,在 .NET 6.0 项目中默认启用可为 null 的引用类型。

If we want to accept a null body (if ever there was a need), we need to declare the parameter as Person?. But, as long as there is a body, the endpoint will always be invoked.
如果我们想接受一个 null body(如果有需要),我们需要将参数声明为 Person?。但是,只要有 body,端点就会始终被调用。
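To make this behavior concrete, here is a minimal sketch contrasting the two parameter declarations (the endpoint paths are illustrative):

```csharp
// With a non-nullable parameter and nullable reference types enabled,
// a request without a body yields a 400 Bad Request automatically.
app.MapPost("/people", (Person person) => Results.NoContent());

// Declaring the parameter as nullable accepts a request without a body;
// the handler must then deal with the null case itself.
app.MapPost("/people-optional", (Person? person) =>
    person is null ? Results.NoContent() : Results.Ok(person));
```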

So, with minimal APIs, it is necessary to perform validation inside each route handler and return the appropriate response if some rules fail. We can either implement a validation library compatible with the existing attributes so that we can perform validation using the classic data annotations approach, as described in the next section, or use a third-party solution such as FluentValidation, as we will see in the Integrating FluentValidation section.
因此,使用最少的 API,有必要在每个路由处理程序中执行验证,并在某些规则失败时返回相应的响应。我们可以实现与现有属性兼容的验证库,以便我们可以使用经典数据注释方法执行验证,如下一节所述,也可以使用第三方解决方案,例如 FluentValidation,正如我们将在集成 FluentValidation 部分中看到的那样。

Performing validation with data annotations
使用数据注释执行验证

If we want to use the common validation pattern based on data annotations, we need to rely on reflection to retrieve all the validation attributes in a model and invoke their IsValid methods, which are provided by the ValidationAttribute base class.
如果我们想使用基于数据注释的通用验证模式,则需要依靠反射来检索模型中的所有验证属性,并调用它们的 IsValid 方法,这些方法由 ValidationAttribute 基类提供。

This behavior is a simplification of what ASP.NET Core actually does to handle validations. However, this is the way validation in controller-based projects works.
此行为是对 ASP.NET Core 实际验证处理方式的简化。但这就是基于控制器的项目中验证的工作方式。
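As a rough sketch of this reflection-based approach (not the framework's actual implementation), the Validator class from System.ComponentModel.DataAnnotations can run all the attribute rules defined on an object:

```csharp
using System.ComponentModel.DataAnnotations;

// Simplified sketch: evaluates every ValidationAttribute defined on the
// object's properties and collects the resulting error messages.
static bool TryValidate(object model, out List<ValidationResult> results)
{
    results = new List<ValidationResult>();
    var context = new ValidationContext(model);

    // validateAllProperties: true makes Validator check every property
    // attribute, not only [Required].
    return Validator.TryValidateObject(model, context, results,
        validateAllProperties: true);
}
```

Libraries such as MiniValidation build on this same mechanism, adding recursion into complex properties and a more convenient error dictionary.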

While we can also manually implement a solution of this kind with minimal APIs, if we decide to use data annotations for validation, we can leverage a small but interesting library, MiniValidation, which is available on GitHub (https://github.com/DamianEdwards/MiniValidation) and NuGet (https://www.nuget.org/packages/MiniValidation).
虽然我们也可以使用最少的 API 手动实现此类解决方案,但如果我们决定使用数据注释进行验证,我们可以利用一个小而有趣的库 MiniValidation,该库可在 GitHub (https://github.com/DamianEdwards/MiniValidation) 和 NuGet (https://www.nuget.org/packages/MiniValidation) 上使用。

Important note : At the time of writing, MiniValidation is available on NuGet as a prerelease.
重要提示 : 在撰写本文时,MiniValidation 在 NuGet 上作为预发行版提供。

We can add this library to our project in one of the following ways:
我们可以通过以下方式之一将此库添加到我们的项目中:

• Option 1: If you’re using Visual Studio 2022, right-click on the project and choose the Manage NuGet Packages command to open the Package Manager GUI; then, search for MiniValidation. Be sure to check the Include prerelease option and click Install.
选项 1:如果您使用的是 Visual Studio 2022,请右键单击项目并选择“管理 NuGet 包”命令以打开包管理器 GUI;然后,搜索 MiniValidation。请务必选中 Include prerelease 选项,然后单击 Install。

• Option 2: Open the Package Manager Console if you’re inside Visual Studio 2022, or open your console, shell, or bash terminal, go to your project directory, and execute the following command: dotnet add package MiniValidation --prerelease
选项 2:如果您在 Visual Studio 2022 中,请打开包管理器控制台,或者打开控制台、shell 或 bash 终端,转到您的项目目录,然后执行以下命令:dotnet add package MiniValidation --prerelease

Now, we can validate a Person object using the following code:
现在,我们可以使用以下代码验证 Person 对象:

app.MapPost("/people", (Person person) =>
{
    var isValid = MiniValidator.TryValidate(person, out var errors);
    if (!isValid)
    {
        return Results.ValidationProblem(errors);
    }
    return Results.NoContent();
});

As we can see, the MiniValidator.TryValidate static method provided by MiniValidation takes an object as input and automatically verifies all the validation rules that are defined on its properties. If the validation fails, it returns false and populates the out parameter with all the validation errors that have occurred. In this case, because it is our responsibility to return the appropriate response code, we use Results.ValidationProblem, which produces a 400 Bad Request response with a ProblemDetails object (as described in Chapter 3, Working with Minimal APIs) and also contains the validation issues.
正如我们所看到的,MiniValidation 提供的 MiniValidator.TryValidate 静态方法将对象作为输入,并自动验证在其属性上定义的所有验证规则。如果验证失败,它将返回 false 并使用已发生的所有验证错误填充 out 参数。在这种情况下,由于我们有责任返回适当的响应代码,因此我们使用 Results.ValidationProblem,它生成带有 ProblemDetails 对象的 400 Bad Request 响应(如第 3 章 使用最小 API 中所述),并且还包含验证问题。

Now, as an example, we can invoke the endpoint using the following invalid input:
现在,例如,我们可以使用以下无效输入调用终端节点:

{
  "lastName": "MyLastName",
  "email": "email"
}

This is the response we will obtain:
这是我们将获得的响应:

{
  "type": "https://tools.ietf.org/html/rfc7231#section-6.5.1",
  "title": "One or more validation errors occurred.",
  "status": 400,
  "errors": {
    "FirstName": [
      "The FirstName field is required."
    ],
    "Email": [
      "The Email field is not a valid e-mail address.",
      "The field Email must be a string with a minimum length of 6 and a maximum length of 100."
    ]
  }
}

In this way, besides the fact that we need to execute validation manually, we can implement the approach of using data annotations on our models in the same way we were accustomed to in previous versions of ASP.NET Core. We can also customize error messages and define custom rules by creating classes that inherit from ValidationAttribute.
这样,除了需要手动执行验证之外,我们还可以像以前版本的 ASP.NET Core 一样,在模型上实现使用数据注释的方法。我们还可以通过创建继承自 ValidationAttribute 的类来自定义错误消息和定义自定义规则。
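For instance, a hypothetical custom rule (the attribute name NotFutureDate is invented for illustration) could be sketched like this:

```csharp
using System.ComponentModel.DataAnnotations;

// Hypothetical custom rule: the decorated DateTime must not be in the future.
public class NotFutureDateAttribute : ValidationAttribute
{
    public override bool IsValid(object? value)
    {
        // Let [Required] handle missing values; only check actual dates.
        if (value is not DateTime date)
        {
            return true;
        }

        return date <= DateTime.UtcNow;
    }
}

// Usage on a model property:
// [NotFutureDate(ErrorMessage = "The date cannot be in the future.")]
// public DateTime BirthDate { get; set; }
```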

Note : The full list of validation attributes available in ASP.NET Core 6.0 is published at https://docs.microsoft.com/dotnet/api/system.componentmodel.dataannotations. If you’re interested in creating custom attributes, you can refer to https://docs.microsoft.com/aspnet/core/mvc/models/validation#custom-attributes.
注意 : ASP.NET Core 6.0 中可用的验证属性的完整列表发布在 https://docs.microsoft.com/dotnet/api/system.componentmodel.dataannotations。如果你对创建自定义属性感兴趣,可以参考 https://docs.microsoft.com/aspnet/core/mvc/models/validation#custom-attributes

Although data annotations are the most used solution, we can also handle validations using a so-called fluent approach, which has the benefit of completely decoupling validation rules from the model, as we’ll see in the next section.
尽管数据注释是最常用的解决方案,但我们也可以使用所谓的 Fluent 方法处理验证,其优点是将验证规则与模型完全解耦,我们将在下一节中看到。

Integrating FluentValidation
集成 FluentValidation

In every application, it is important to correctly organize our code. This is also true for validation. While data annotations are a working solution, we should think about alternatives that can help us write more maintainable projects. This is the purpose of FluentValidation – a library, part of the .NET Foundation, that allows us to build validation rules using a fluent interface with lambda expressions. The library is available on GitHub (https://github.com/FluentValidation/FluentValidation) and NuGet (https://www.nuget.org/packages/FluentValidation). This library can be used in any kind of project, but when working with ASP.NET Core, there is an ad-hoc NuGet package (https://www.nuget.org/packages/FluentValidation.AspNetCore) that contains useful methods that help to integrate it.
在每个应用程序中,正确组织我们的代码都很重要。验证也是如此。虽然数据注释是一种可行的解决方案,但我们应该考虑能帮助我们编写更可维护项目的替代方案。这就是 FluentValidation 的用途:它是 .NET Foundation 的一部分,允许我们使用带有 lambda 表达式的 Fluent 接口构建验证规则。该库在 GitHub (https://github.com/FluentValidation/FluentValidation) 和 NuGet (https://www.nuget.org/packages/FluentValidation) 上提供。此库可用于任何类型的项目,但在使用 ASP.NET Core 时,有一个专门的 NuGet 包 (https://www.nuget.org/packages/FluentValidation.AspNetCore),其中包含有助于集成它的实用方法。

Note : .NET Foundation is an independent organization that aims to support open source software development and collaboration around the .NET platform. You can learn more at https://dotnetfoundation.org.
注意 : .NET Foundation 是一个独立的组织,旨在支持围绕 .NET 平台的开源软件开发和协作。您可以在 https://dotnetfoundation.org 中了解更多信息。

As stated before, with this library, we can decouple validation rules from the model to create a more structured application. Moreover, FluentValidation allows us to define even more complex rules with a fluent syntax without the need to create custom classes based on ValidationAttribute. The library also natively supports the localization of standard error messages.
如前所述,借助此库,我们可以将验证规则与模型解耦,以创建更加结构化的应用程序。此外,FluentValidation 允许我们使用 Fluent 语法定义更复杂的规则,而无需基于 ValidationAttribute 创建自定义类。该库还原生支持标准错误消息的本地化。

So, let’s see how we can integrate FluentValidation into a minimal API project. First, we need to add this library to our project in one of the following ways:
那么,让我们看看如何将 FluentValidation 集成到一个最小的 API 项目中。首先,我们需要通过以下方式之一将此库添加到我们的项目中:

• Option 1: If you’re using Visual Studio 2022, right-click on the project and choose the Manage NuGet Packages command to open Package Manager GUI. Then, search for FluentValidation.DependencyInjectionExtensions and click Install.
选项 1:如果您使用的是 Visual Studio 2022,请右键单击项目并选择“管理 NuGet 包”命令以打开包管理器 GUI。然后,搜索 FluentValidation.DependencyInjectionExtensions 并单击 Install。

• Option 2: Open Package Manager Console if you’re inside Visual Studio 2022, or open your console, shell, or bash terminal, go to your project directory, and execute the following command: dotnet add package FluentValidation.DependencyInjectionExtensions
选项 2:如果您在 Visual Studio 2022 中,请打开包管理器控制台,或者打开控制台、shell 或 bash 终端,转到您的项目目录,然后执行以下命令:dotnet add package FluentValidation.DependencyInjectionExtensions

Now, we can rewrite the validation rules for the Person object and put them in a PersonValidator class:
现在,我们可以重写 Person 对象的验证规则,并将它们放入 PersonValidator 类中:

public class PersonValidator : AbstractValidator<Person>
{
    public PersonValidator() 
    {
        RuleFor(p => p.FirstName).NotEmpty().MaximumLength(30);
        RuleFor(p => p.LastName).NotEmpty().MaximumLength(30);
        RuleFor(p => p.Email).EmailAddress().Length(6, 100);
    }
}

PersonValidator inherits from AbstractValidator<T>, a base class provided by FluentValidation that contains all the methods we need to define the validation rules. For example, we fluently say that we have a rule for the FirstName property, which is that it must not be empty and it can have a maximum length of 30 characters.
PersonValidator 继承自 AbstractValidator<T>,后者是 FluentValidation 提供的基类,包含定义验证规则所需的所有方法。例如,我们流畅地说我们有一条 FirstName 属性的规则,即它不能为空,并且最大长度为 30 个字符。

The next step is to register the validator in the service provider so that we can use it in our route handlers. We can perform this task with a simple instruction:
下一步是在 service provider 中注册 validator,以便我们可以在 route handlers 中使用它。我们可以通过一个简单的指令来执行这项任务:

var builder = WebApplication.CreateBuilder(args);
//...
builder.Services.AddValidatorsFromAssemblyContaining<Program>();

The AddValidatorsFromAssemblyContaining method automatically registers all the validators derived from AbstractValidator within the assembly containing the specified type. In particular, this method registers the validators and makes them accessible through dependency injection via the IValidator<T> interface, which in turn, is implemented by the AbstractValidator<T> class. If we have multiple validators, we can register them all with this single instruction. We can also easily put our validators in external assemblies.
AddValidatorsFromAssemblyContaining 方法会自动注册包含指定类型的程序集中所有从 AbstractValidator 派生的验证器。具体来说,此方法注册这些验证器,并使它们可以通过依赖注入以 IValidator<T> 接口的形式访问,而该接口又由 AbstractValidator<T> 类实现。如果我们有多个验证器,使用这一条指令就可以将它们全部注册。我们还可以轻松地将验证器放在外部程序集中。
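As an alternative to assembly scanning, a single validator could also be registered explicitly; a sketch using the standard dependency injection registration:

```csharp
using FluentValidation;

var builder = WebApplication.CreateBuilder(args);

// Explicitly register one validator instead of scanning the assembly;
// route handlers can then resolve it as IValidator<Person>.
builder.Services.AddScoped<IValidator<Person>, PersonValidator>();
```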

Now that everything is in place, remembering that with minimal APIs we don’t have automatic model validation, we must update our route handler in this way:
现在一切都已准备就绪,请记住,使用最少的 API 时,我们没有自动模型验证,我们必须以这种方式更新我们的路由处理程序:

app.MapPost("/people", async (Person person, IValidator<Person> validator) =>
{
    var validationResult = await validator.ValidateAsync(person);
    if (!validationResult.IsValid)
    {
        var errors = validationResult.ToDictionary();
        return Results.ValidationProblem(errors);
    }
    return Results.NoContent();
});

We have added an IValidator argument in the route handler parameter list, so now we can invoke its ValidateAsync method to apply the validation rules against the input Person object. If the validation fails, we extract all the error messages and return them to the client with the usual Results.ValidationProblem method, as described in the previous section.
我们在路由处理程序参数列表中添加了 IValidator 参数,因此现在我们可以调用其 ValidateAsync 方法,以对输入 Person 对象应用验证规则。如果验证失败,我们将提取所有错误消息,并使用通常的 Results.ValidationProblem 方法将它们返回给客户端,如上一节所述。

In conclusion, let’s see what happens if we try to invoke the endpoint using the following input as before:
总之,让我们看看如果我们像以前一样尝试使用以下输入调用终端节点会发生什么情况:

{
  "lastName": "MyLastName",
  "email": "email"
}

We’ll get the following response:
我们将收到以下响应:

{
  "type": "https://tools.ietf.org/html/rfc7231#section-6.5.1",
  "title": "One or more validation errors occurred.",
  "status": 400,
  "errors": {
    "FirstName": [
      "'First Name' non può essere vuoto."
    ],
    "Email": [
      "'Email' non è un indirizzo email valido.",
      "'Email' deve essere lungo tra i 6 e 100 caratteri. Hai inserito 5 caratteri."
    ]
  }
}

As mentioned earlier, FluentValidation provides translations for standard error messages, so this is the response you get when running on an Italian system. Of course, we can completely customize the messages with the typical fluent approach, using the WithMessage method chained to the validation methods defined in the validator. For example, see the following:
如前所述,FluentValidation 为标准错误消息提供翻译,因此这是您在意大利语系统上运行时得到的响应。当然,我们可以使用典型的 Fluent 方法完全自定义消息,使用链接到验证器中定义的验证方法的 WithMessage 方法。例如,请参阅以下内容:

RuleFor(p => p.FirstName).NotEmpty().WithMessage("You must provide the first name");

We’ll talk about localization in further detail in Chapter 9, Leveraging Globalization and Localization.
我们将在第 9 章 利用全球化和本地化 中更详细地讨论本地化。

This is just a quick example of how to define validation rules with FluentValidation and use them with minimal APIs. This library allows many more complex scenarios that are comprehensively described in the official documentation available at https://fluentvalidation.net.
这只是一个快速示例,说明如何使用 FluentValidation 定义验证规则并将其与最少的 API 一起使用。此库允许许多更复杂的场景,这些场景在 https://fluentvalidation.net 上提供的官方文档中进行了全面描述。

Now that we have seen how to add validation to our route handlers, it is important to understand how we can update the documentation created by Swagger with this information.
现在我们已经了解了如何将验证添加到路由处理程序中,了解如何使用此信息更新 Swagger 创建的文档非常重要。

Adding validation information to Swagger
向 Swagger 添加验证信息

Regardless of the solution that has been chosen to handle validation, it is important to update the OpenAPI definition with the indication that a handler can produce a validation problem response, calling the ProducesValidationProblem method after the endpoint declaration:
无论选择哪种解决方案来处理验证,都必须更新 OpenAPI 定义,并指示处理程序可以生成验证问题响应,并在端点声明后调用 ProducesValidationProblem 方法:

app.MapPost("/people", (Person person) =>
{
    //...
})
.Produces(StatusCodes.Status204NoContent)
.ProducesValidationProblem();

In this way, a new response type for the 400 Bad Request status code will be added to Swagger, as we can see in Figure 6.1:
这样,400 Bad Request 状态码的新响应类型就会被添加到 Swagger 中,如图 6.1 所示:

Figure 6.1 – The validation problem response added to Swagger
图 6.1 – 添加到 Swagger 的验证问题响应

Moreover, the JSON schemas that are shown at the bottom of the Swagger UI can show the rules of the corresponding models. One of the benefits of defining validation rules using data annotations is that they are automatically reflected in these schemas:
此外,Swagger UI 底部显示的 JSON 架构可以显示相应模型的规则。使用数据注释定义验证规则的好处之一是,它们会自动反映在这些架构中:

Figure 6.2 – The validation rules for the Person object in Swagger
图 6.2 – Swagger 中 Person 对象的验证规则

Unfortunately, validation rules defined with FluentValidation aren’t automatically shown in the JSON schema of Swagger. We can overcome this limitation by using MicroElements.Swashbuckle.FluentValidation, a small library that, as usual, is available on GitHub (https://github.com/micro-elements/MicroElements.Swashbuckle.FluentValidation) and NuGet (https://www.nuget.org/packages/MicroElements.Swashbuckle.FluentValidation). After adding it to our project, following the same steps described before for the other NuGet packages we have introduced, we just need to call the AddFluentValidationRulesToSwagger extension method:
遗憾的是,使用 FluentValidation 定义的验证规则不会自动显示在 Swagger 的 JSON 架构中。我们可以通过使用 MicroElements.Swashbuckle.FluentValidation 来克服这一限制,这是一个小型库,通常可在 GitHub (https://github.com/micro-elements/MicroElements.Swashbuckle.FluentValidation) 和 NuGet (https://www.nuget.org/packages/MicroElements.Swashbuckle.FluentValidation) 上使用。将其添加到我们的项目后,按照之前针对我们介绍的其他 NuGet 包的相同步骤,我们只需调用 AddFluentValidationRulesToSwagger 扩展方法:

var builder = WebApplication.CreateBuilder(args);
//...
builder.Services.AddFluentValidationRulesToSwagger();

In this way, the JSON schema shown in Swagger will reflect the validation rules, as with the data annotations. However, it’s worth remembering that, at the time of writing, this library does not support all the validators available in FluentValidation. For more information, we can refer to the GitHub page of the library.
这样,Swagger 中显示的 JSON 架构将反映验证规则,就像数据注释一样。但是,值得记住的是,在撰写本文时,此库并不支持 FluentValidation 中可用的所有验证器。有关更多信息,我们可以参考该库的 GitHub 页面。

This ends our overview of validation in minimal APIs. In the next section, we’ll analyze another important theme of every API: how to correctly handle the mapping of data to and from our services.
我们对最小 API 中的验证的概述到此结束。在下一节中,我们将分析每个 API 的另一个重要主题:如何正确处理进出我们服务的数据。

Mapping data to and from APIs
将数据映射到 API 或从 API 映射数据

When dealing with APIs that can be called by any system, there is one golden rule: we should never expose our internal objects to the callers. If we don’t follow this decoupling idea and, for some reason, need to change our internal data structures, we could end up breaking all the clients that interact with us. Both the internal data structures and the objects that are used to dialog with the clients must be able to evolve independently from one another.
在处理任何系统都可以调用的 API 时,有一条黄金法则:我们永远不应该将内部对象暴露给调用方。如果我们不遵循这种解耦思想,并且出于某种原因需要更改内部数据结构,我们最终可能会破坏所有与我们交互的客户端。内部数据结构和用于与客户端对话的对象都必须能够彼此独立地演进。

This requirement for dialog is the reason why mapping is so important. We need to transform input objects of one type into output objects of a different type and vice versa. In this way, we can achieve two objectives:
这种对对话的要求是映射如此重要的原因。我们需要将一种类型的输入对象转换为不同类型的输出对象,反之亦然。通过这种方式,我们可以实现两个目标:

• Evolve our internal data structures without introducing breaking changes with the contracts that are exposed to the callers
改进我们的内部数据结构,而不会对暴露给调用方的合约引入中断性变更

• Modify the format of the objects used to communicate with the clients without the need to change the way these objects are handled internally
修改用于与客户端通信的对象的格式,而无需更改内部处理这些对象的方式

In other words, mapping means transforming one object into another, literally, by copying and converting an object’s properties from a source to a destination. However, mapping code is boring, and testing mapping code is even more boring. Nevertheless, we need to fully understand that the process is crucial and strive to adopt it in all scenarios.
换句话说,映射意味着通过将对象的属性从源复制并转换为目标,将一个对象转换为另一个对象。但是,映射代码很无聊,测试映射代码更无聊。尽管如此,我们需要充分理解这个过程是至关重要的,并努力在所有情况下采用它。

So, let’s consider the following object, which could represent a person saved in a database using Entity Framework Core:
因此,让我们考虑以下对象,它可以表示使用 Entity Framework Core 保存在数据库中的人员:

public class PersonEntity
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime BirthDate { get; set; }
    public string City { get; set; }
}

We have set endpoints for getting a list of people or retrieving a specific person.
我们设置了用于获取人员列表或检索特定人员的端点。

The first thought could be to directly return PersonEntity to the caller. The following code is highly simplified, enough for us to understand the scenario:
第一个想法可能是直接将 PersonEntity 返回给调用方。以下代码经过高度简化,足以让我们理解该场景:

app.MapGet("/people/{id:int}", (int id) =>
{
    // In a real application, this entity could be
    // retrieved from a database, checking if the person
    // with the given ID exists.
    var person = new PersonEntity();
    return Results.Ok(person);
})
.Produces(StatusCodes.Status200OK, typeof(PersonEntity));

What happens if we need to modify the schema of the database, adding, for example, the creation date of the entity? In this case, we need to change PersonEntity with a new property that maps the relevant date. However, the callers also get this information now, which we probably don’t want to be exposed. Instead, if we use a so-called data transfer object (DTO) to expose the person, this problem will not arise:
如果我们需要修改数据库的架构,例如添加实体的创建日期,会发生什么情况?在这种情况下,我们需要为 PersonEntity 添加一个映射相关日期的新属性。但是,调用方现在也会获得此信息,而我们可能不希望公开这些信息。相反,如果我们使用所谓的数据传输对象 (DTO) 来公开人员信息,此问题就不会出现:

public class PersonDto
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime BirthDate { get; set; }
    public string City { get; set; }
}

This means that our API should return an object of the PersonDto type instead of PersonEntity, performing a conversion between the two objects. At first sight, the exercise appears to be a useless duplication of code, as the two classes contain the same properties. However, if we consider the fact that PersonEntity could evolve with new properties that are necessary for the database, or change structure with a new semantic that the caller shouldn’t know, the importance of mapping becomes clear. An example is storing the city in a separate table and exposing it through an Address property. Or suppose that, for security reasons, we don’t want to expose the exact birth date anymore, only the age of the person. Using an ad-hoc DTO, we can easily change the schema and update the mapping without touching our entity, having a better separation of concerns.
这意味着我们的 API 应返回 PersonDto 类型的对象,而不是 PersonEntity,并在两个对象之间执行转换。乍一看,这项工作似乎是无用的代码重复,因为这两个类包含相同的属性。但是,如果我们考虑到 PersonEntity 可能会演变出数据库所需的新属性,或者以调用方不应知晓的新语义改变结构,映射的重要性就会变得显而易见。例如,将城市存储在单独的表中,并通过 Address 属性公开它。或者假设出于安全原因,我们不想再公开确切的出生日期,而只公开人的年龄。使用专门的 DTO,我们可以轻松更改架构并更新映射,而无需改动实体,从而实现更好的关注点分离。
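As a sketch of the age scenario just described (the DTO name and mapping method are illustrative, not from the book's sample code):

```csharp
// The client now sees only the age; the entity keeps storing BirthDate.
public class PersonWithAgeDto
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
}

public static class PersonAgeMapping
{
    public static PersonWithAgeDto ToDtoWithAge(PersonEntity person)
    {
        var today = DateTime.Today;
        var age = today.Year - person.BirthDate.Year;
        if (person.BirthDate.Date > today.AddYears(-age))
        {
            age--; // birthday not yet reached this year
        }

        return new PersonWithAgeDto
        {
            Id = person.Id,
            FirstName = person.FirstName,
            LastName = person.LastName,
            Age = age
        };
    }
}
```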

Of course, mapping can be bidirectional. In our example, we need to convert PersonEntity to PersonDto before returning it to the client. However, we could also do the opposite – that is, convert the PersonDto type that comes from a client into PersonEntity to save it to a database. All the solutions we’re talking about are valid for both scenarios.
当然,映射可以是双向的。在我们的示例中,我们需要先将 PersonEntity 转换为 PersonDto,然后再将其返回给客户端。但是,我们也可以执行相反的操作,即将来自客户端的 PersonDto 类型转换为 PersonEntity,以将其保存到数据库。我们讨论的所有解决方案都适用于这两种情况。

We can either perform mapping manually or adopt a third-party library that provides us with this feature. In the following sections, we’ll analyze both approaches, understanding the pros and cons of the available solutions.
我们可以手动执行映射,也可以采用为我们提供此功能的第三方库。在以下部分中,我们将分析这两种方法,了解可用解决方案的优缺点。

Performing manual mapping
执行手动映射

In the previous section, we said that mapping essentially means copying the properties of a source object into the properties of a destination and applying some sort of conversion. The easiest and most effective way to perform this task is to do it manually.
在上一节中,我们说过映射实质上意味着将源对象的属性复制到目标的属性中,并应用某种转换。执行此任务的最简单、最有效的方法是手动执行。

With this approach, we need to take care of all the mapping code by ourselves. From this point of view, there is nothing much more to say; we need a method that takes an object as input and transforms it into another as output, remembering to apply mapping recursively if a class contains a complex property that must be mapped in turn. The only suggestion is to use an extension method so that we can easily call it everywhere we need.
使用这种方法,我们需要自己处理所有的映射代码。从这个角度来看,没有太多可说的;我们需要一个方法,它接受一个对象作为输入,并将其转换为另一个对象作为输出,如果某个类包含必须依次映射的复杂属性,请记住递归地应用映射。唯一的建议是使用扩展方法,这样我们就可以轻松地在任何需要的地方调用它。

A full example of this mapping process is available in the GitHub repository: https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter06.
GitHub 存储库中提供了此映射过程的完整示例:https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter06
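To make the idea concrete, a manual mapping extension method might look like the following sketch. The class shapes are assumed from the discussion above (Id, FirstName, LastName, a BirthDate converted to Age, and an Address flattened to City) and may differ from the repository code.
为了使想法更具体,手动映射扩展方法可能类似于以下草图。类的结构是根据上文的讨论假设的(Id、FirstName、LastName、由 BirthDate 转换而来的 Age,以及展平为 City 的 Address),可能与存储库中的代码不同。

```csharp
public class AddressEntity
{
    public string? City { get; set; }
}

public class PersonEntity
{
    public int Id { get; set; }
    public string? FirstName { get; set; }
    public string? LastName { get; set; }
    public DateTime BirthDate { get; set; }
    public AddressEntity? Address { get; set; }
}

public class PersonDto
{
    public int Id { get; set; }
    public string? FirstName { get; set; }
    public string? LastName { get; set; }
    public int Age { get; set; }
    public string? City { get; set; }
}

public static class PersonEntityExtensions
{
    // Extension method, so the mapping can be called anywhere as entity.ToDto().
    public static PersonDto ToDto(this PersonEntity entity)
        => new PersonDto
        {
            Id = entity.Id,
            FirstName = entity.FirstName,
            LastName = entity.LastName,
            // Expose only the age, not the exact birth date.
            Age = CalculateAge(entity.BirthDate),
            // Flatten the Address complex property.
            City = entity.Address?.City
        };

    private static int CalculateAge(DateTime dateOfBirth)
    {
        var today = DateTime.Today;
        var age = today.Year - dateOfBirth.Year;
        if (today.DayOfYear < dateOfBirth.DayOfYear)
        {
            age--;
        }
        return age;
    }
}
```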

This solution guarantees the best performance because we explicitly write all mapping instructions without relying on an automatic system (such as reflection). However, the manual method has a drawback: every time we add a property in the entity that must be mapped to a DTO, we need to change the mapping code. On the other hand, some approaches can simplify mapping, but at the cost of performance overhead. In the next section, we look at one such approach using AutoMapper.
此解决方案保证了最佳性能,因为我们显式编写了所有映射指令,而无需依赖自动系统(例如反射)。但是,手动方法有一个缺点:每次我们在实体中添加必须映射到 DTO 的属性时,都需要更改映射代码。另一方面,某些方法可以简化映射,但会以性能开销为代价。在下一节中,我们将介绍一种使用 AutoMapper 的此类方法。

Mapping with AutoMapper
使用 AutoMapper 进行映射

AutoMapper is probably one of the most famous mapping frameworks for .NET. It uses a fluent configuration API that works with a convention-based matching algorithm to match source values to destination values. As with FluentValidation, the framework is part of the .NET Foundation and is available either on GitHub (https://github.com/AutoMapper/AutoMapper) or NuGet (https://www.nuget.org/packages/AutoMapper). Again, in this case, we have a specific NuGet package, https://www.nuget.org/packages/AutoMapper.Extensions.Microsoft.DependencyInjection, that simplifies its integration into ASP.NET Core projects.
AutoMapper 可能是最著名的 .NET 映射框架之一。它使用 Fluent 配置 API,该 API 与基于约定的匹配算法配合使用,以将源值与目标值匹配。与 FluentValidation 一样,该框架是 .NET Foundation 的一部分,可在 GitHub (https://github.com/AutoMapper/AutoMapper) 或 NuGet (https://www.nuget.org/packages/AutoMapper) 上使用。同样,在本例中,我们有一个特定的 NuGet 包 https://www.nuget.org/packages/AutoMapper.Extensions.Microsoft.DependencyInjection,它简化了它与 ASP.NET Core 项目的集成。

Let’s take a quick look at how to integrate AutoMapper in a minimal API project, showing its main features. The full documentation of the library is available at https://docs.automapper.org.
让我们快速看一下如何将 AutoMapper 集成到一个最小的 API 项目中,展示它的主要功能。该库的完整文档可在 https://docs.automapper.org 上获得。

As usual, the first thing to do is to add the library to our project, following the same instructions we used in the previous sections. Then, we need to configure AutoMapper, telling it how to perform mapping. There are several ways to perform this task, but the recommended approach is to create classes that are inherited from the Profile base class provided by the library and put the configuration into the constructor:
像往常一样,首先要做的是按照我们在前面部分中使用的相同说明将库添加到我们的项目中。然后,我们需要配置 AutoMapper,告诉它如何执行映射。有多种方法可以执行此任务,但推荐的方法是创建从库提供的 Profile 基类继承的类,并将配置放入构造函数中:

public class PersonProfile : Profile
{
    public PersonProfile()
    {
        CreateMap<PersonEntity, PersonDto>();
    }
}

That’s all we need to start: a single instruction to indicate that we want to map PersonEntity to PersonDto, without any other details. We have said that AutoMapper is convention-based. This means that, by default, it maps properties with the same name from the source to the destination, while also performing automatic conversions into compatible types, if necessary. For example, an int property on the source can be automatically mapped to a double property with the same name on the destination. In other words, if source and destination objects have the same property, there is no need for any explicit mapping instruction. However, in our case, we need to perform some transformations, so we can add them fluently after CreateMap:
这就是我们需要开始的全部内容:一条指令,指示我们想要将 PersonEntity 映射到 PersonDto,没有任何其他细节。我们已经说过 AutoMapper 是基于约定的。这意味着,默认情况下,它将具有相同名称的属性从源映射到目标,同时还会根据需要执行自动转换为兼容类型。例如,源上的 int 属性可以自动映射到目标上具有相同名称的 double 属性。换句话说,如果源对象和目标对象具有相同的属性,则不需要任何显式映射指令。但是,在我们的示例中,我们需要执行一些转换,以便我们可以在 CreateMap 之后流畅地添加它们:

public class PersonProfile : Profile
{
    public PersonProfile()
    {
        CreateMap<PersonEntity, PersonDto>()
            .ForMember(dst => dst.Age, opt =>
                opt.MapFrom(src => CalculateAge(src.BirthDate)))
            .ForMember(dst => dst.City, opt =>
                opt.MapFrom(src => src.Address.City));
    }
    private static int CalculateAge(DateTime dateOfBirth)
    {
        var today = DateTime.Today;
        var age = today.Year - dateOfBirth.Year;
        if (today.DayOfYear < dateOfBirth.DayOfYear)
        {
            age--;
        }
        return age;
    }
}

With the ForMember method, we can specify how to map destination properties, dst.Age and dst.City, using conversion expressions. We still don’t need to explicitly map the Id, FirstName, or LastName properties because they exist with these names at both the source and destination.
使用 ForMember 方法,我们可以通过转换表达式指定如何映射目标属性 dst.Age 和 dst.City。我们仍然不需要显式映射 Id、FirstName 或 LastName 属性,因为它们以相同的名称同时存在于源和目标中。

Now that we have defined the mapping profile, we need to register it at startup so that ASP.NET Core can use it. As with FluentValidation, we can invoke an extension method on IServiceCollection:
现在我们已经定义了映射配置文件,我们需要在启动时注册它,以便 ASP.NET Core 可以使用它。与 FluentValidation 一样,我们可以在 IServiceCollection 上调用扩展方法:

builder.Services.AddAutoMapper(typeof(Program).Assembly);

With this line of code, we automatically register all the profiles that are contained in the specified assembly. If we add more profiles to our project, such as a separate Profile class for every entity to map, we don’t need to change the registration instructions.
使用这行代码,我们会自动注册指定程序集中包含的所有配置文件。如果我们向项目添加更多配置文件,例如要映射的每个实体的单独 Profile 类,则无需更改注册说明。

In this way, we can now use the IMapper interface through dependency injection:
这样,我们现在可以通过依赖注入来使用 IMapper 接口:

app.MapGet("/people/{id:int}", (int id, IMapper mapper) =>
{
    var personEntity = new PersonEntity();
    //...
    var personDto = mapper.Map<PersonDto>(personEntity);
    return Results.Ok(personDto);
})
.Produces(StatusCodes.Status200OK, typeof(PersonDto));

After retrieving PersonEntity, for example, from a database using Entity Framework Core, we call the Map method on the IMapper interface, specifying the type of the resulting object and the input class. With this line of code, AutoMapper will use the corresponding profile to convert PersonEntity into a PersonDto instance.
例如,在使用 Entity Framework Core 从数据库中检索 PersonEntity 后,我们在 IMapper 接口上调用 Map 方法,并指定结果对象的类型和输入类。通过这行代码,AutoMapper 将使用相应的配置文件将 PersonEntity 转换为 PersonDto 实例。

With this solution in place, mapping is now much easier to maintain because, as long as we add properties with the same name on the source and destination, we don’t need to change the profile at all. Moreover, AutoMapper supports list mapping and recursive mapping too. So, if we have an entity that must be mapped, such as a property of the AddressEntity type on the PersonEntity class, and the corresponding profile is available, the conversion is again performed automatically.
有了这个解决方案,映射现在更容易维护,因为只要我们在源和目标上添加具有相同名称的属性,我们就根本不需要更改配置文件。此外,AutoMapper 还支持列表映射和递归映射。因此,如果我们有一个必须映射的实体,例如 PersonEntity 类上 AddressEntity 类型的属性,并且相应的配置文件可用,则转换将再次自动执行。
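For instance (a sketch, assuming the PersonProfile above is registered and an equivalent profile exists for any nested type), mapping a whole list requires no extra configuration:
例如(这是一个草图,假设上面的 PersonProfile 已注册,并且任何嵌套类型也有相应的配置文件),映射整个列表不需要任何额外的配置:

```csharp
// entities is hypothetical sample data, e.g. loaded from a database.
var entities = new List<PersonEntity>();

// A single call converts the whole list; complex properties are mapped
// recursively when a corresponding profile is available.
var dtos = mapper.Map<List<PersonDto>>(entities);
```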

The drawback of this approach is a performance overhead. AutoMapper works by dynamically executing mapping code at runtime, so it uses reflection under the hood. Profiles are created the first time they are used and then they are cached to speed up subsequent mappings. However, profiles are always applied dynamically, so there is a cost for the operation that is dependent on the complexity of the mapping code itself. We have only seen a basic example of AutoMapper. The library is very powerful and can manage quite complex mappings. However, we need to be careful not to abuse it – otherwise, we can negatively impact the performance of our application.
这种方法的缺点是性能开销。AutoMapper 的工作原理是在运行时动态执行映射代码,因此它在后台使用反射。配置文件在首次使用时创建,然后缓存以加快后续映射的速度。但是,配置文件始终是动态应用的,因此操作的成本取决于映射代码本身的复杂性。我们只看到了 AutoMapper 的一个基本示例。该库非常强大,可以管理相当复杂的映射。但是,我们需要小心不要滥用它,否则可能会对应用程序的性能产生负面影响。

Summary
总结

Validation and mapping are two important features that we need to take into account when developing APIs to build more robust and maintainable applications. Minimal APIs do not provide any built-in way to perform these tasks, so it is important to know how we can add support for this kind of feature. We have seen that we can perform validations with data annotations or using FluentValidation and how to add validation information to Swagger. We have also talked about the significance of data mapping and shown how to either leverage manual mapping or the AutoMapper library, describing the pros and cons of each approach.
验证和映射是我们在开发 API 以构建更健壮且可维护的应用程序时需要考虑的两个重要功能。Minimal API 不提供任何内置方法来执行这些任务,因此了解如何添加对此类功能的支持非常重要。我们已经看到,我们可以使用数据注释或使用 FluentValidation 执行验证,以及如何向 Swagger 添加验证信息。我们还讨论了数据映射的重要性,并展示了如何利用手动映射或 AutoMapper 库,描述了每种方法的优缺点。

In the next chapter, we will talk about how to integrate minimal APIs with a data access layer, showing, for example, how to access a database using Entity Framework Core.
在下一章中,我们将讨论如何将最小 API 与数据访问层集成,例如,展示如何使用 Entity Framework Core 访问数据库。

7 Integration with the Data Access Layer

与 Data Access Layer 集成

In this chapter, we will learn about some basic ways to add a data access layer to the minimal APIs in .NET 6.0. We will see how we can use some topics covered previously in the book to access data with Entity Framework (EF) and then with Dapper. These are two ways to access a database.
在本章中,我们将了解向 .NET 6.0 中的最小 API 添加数据访问层的一些基本方法。我们将了解如何使用本书前面介绍的一些主题,通过 Entity Framework (EF) 和 Dapper 访问数据。这是访问数据库的两种方法。

In this chapter, we will be covering the following topics:
在本章中,我们将介绍以下主题:

• Using Entity Framework
• Using Dapper

By the end of this chapter, you will be able to use EF from scratch in a minimal API project, and use Dapper for the same goal. You will also be able to tell when one approach is better than the other in a project.
在本章结束时,您将能够在最小的 API 项目中从头开始使用 EF,并将 Dapper 用于相同的目标。您还可以判断在项目中何时一种方法优于另一种方法。

Technical requirements
技术要求

To follow along with this chapter, you will need to create an ASP.NET Core 6.0 Web API application. You can use either of the following options:
要按照本章的学习,您需要创建一个 ASP.NET Core 6.0 Web API 应用程序。您可以使用以下任一选项:

• Click on the New Project option in the File menu of Visual Studio 2022, then choose the ASP.NET Core Web API template, select a name and the working directory in the wizard, and be sure to uncheck the Use controllers option in the next step.
单击 Visual Studio 2022 的“文件”菜单中的“新建项目”选项,然后选择 ASP.NET Core Web API 模板,在向导中选择名称和工作目录,并确保在下一步中取消选中“使用控制器”选项。

• Open your console, shell, or Bash terminal, and change to your working directory. Use the following command to create a new Web API application: dotnet new webapi -minimal -o Chapter07
打开您的控制台、shell 或 Bash 终端,然后切换到您的工作目录。使用以下命令创建新的 Web API 应用程序:dotnet new webapi -minimal -o Chapter07

Now, open the project in Visual Studio by double-clicking on the project file or, in Visual Studio Code, type the following command in the already open console:
现在,通过双击项目文件在 Visual Studio 中打开项目,或者在 Visual Studio Code 中,在已打开的控制台中键入以下命令:

cd Chapter07
code .

Finally, you can safely remove all the code related to the WeatherForecast sample, as we don’t need it for this chapter.
最后,您可以安全地删除与 WeatherForecast 示例相关的所有代码,因为本章不需要它。

All the code samples in this chapter can be found in the GitHub repository for this book at https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter07.
本章中的所有代码示例都可以在本书的 GitHub 存储库中找到,网址为 https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter07

Using Entity Framework
使用Entity Framework

We can absolutely say that if we are building an API, it is very likely that we will interact with data.
我们可以肯定地说,如果我们正在构建一个 API,我们很可能会与数据交互。

In addition, this data most probably needs to be persisted after the application restarts or after other events, such as a new deployment of the application. There are many options for persisting data in .NET applications, but EF is the most user-friendly and common solution for a lot of scenarios.
此外,这些数据很可能需要在应用程序重启后或其他事件(例如应用程序的新部署)之后保留。在 .NET 应用程序中保存数据的选项有很多,但 EF 是适用于许多方案的最用户友好和最常见的解决方案。

Entity Framework Core (EF Core) is an extensible, open source, and cross-platform data access library for .NET applications. It enables developers to work with the database by using .NET objects directly and removes, in most cases, the need to know how to write the data access code directly in the database.
Entity Framework Core (EF Core) 是一个适用于 .NET 应用程序的可扩展、开源和跨平台数据访问库。它使开发人员能够直接使用 .NET 对象来处理数据库,并且在大多数情况下,无需知道如何直接在数据库中编写数据访问代码。

On top of this, EF Core supports a lot of databases, including SQLite, MySQL, Oracle, Microsoft SQL Server, and PostgreSQL.
除此之外,EF Core 还支持许多数据库,包括 SQLite、MySQL、Oracle、Microsoft SQL Server 和 PostgreSQL。

In addition, it supports an in-memory database that helps to write tests for our applications or to make the development cycle easier because you don’t need a real database up and running.
此外,它还支持内存数据库,有助于为我们的应用程序编写测试或简化开发周期,因为您不需要启动和运行真正的数据库。

In the next section, we will see how to set up a project for using EF and its main features.
在下一节中,我们将了解如何设置使用 EF 的项目及其主要功能。

Setting up the project
设置项目

From the project root, create an Icecream.cs class and give it the following content:
从项目根目录中,创建一个 Icecream.cs 类并为其提供以下内容:

namespace Chapter07.Models;
public class Icecream
{
    public int Id { get; set; }
    public string? Name { get; set; }
    public string? Description { get; set; }
}

The Icecream class is an object that represents an ice cream in our project. This class should be called a data model, and we will use this object in the next sections of this chapter to map it to a database table.
Icecream 类是表示我们项目中的冰淇淋的对象。这个类通常被称为数据模型(data model),我们将在本章的后面部分使用这个对象,将它映射到一个数据库表。

Now it’s time to add the EF Core NuGet reference to the project.
现在,可以将 EF Core NuGet 引用添加到项目。

In order to do that, you can use one of the following methods:
为此,您可以使用以下方法之一:

• In a new terminal window, enter the following code to add the EF Core InMemory package:
在新的终端窗口中,输入以下代码以添加 EF Core InMemory 包:
dotnet add package Microsoft.EntityFrameworkCore.InMemory

• If you would like to use Visual Studio 2022 to add the reference, right-click on Dependencies and then select Manage NuGet Packages. Search for Microsoft.EntityFrameworkCore.InMemory and install the package.
如果要使用 Visual Studio 2022 添加引用,请右键单击 “依赖项”,然后选择 “管理 NuGet 包”。搜索 Microsoft.EntityFrameworkCore.InMemory 并安装该包。

In the next section, we will be adding EF Core to our project.
在下一部分中,我们将 EF Core 添加到我们的项目中。

Adding EF Core to the project
将 EF Core 添加到项目

In order to store the ice cream objects in the database, we need to set up EF Core in our project.
为了将冰淇淋对象存储在数据库中,我们需要在项目中设置 EF Core。

To set up an in-memory database, add the following code to the bottom of the Program.cs file:
要设置内存中数据库,请将以下代码添加到 Program.cs 文件的底部:

class IcecreamDb : DbContext
{
    public IcecreamDb(DbContextOptions options) :
      base(options) { }
    public DbSet<Icecream> Icecreams { get; set; } = null!;
}

The DbContext object represents a connection to the database, and it is used to save and query instances of entities in the database.
DbContext 对象表示与数据库的连接,用于保存和查询数据库中的实体实例。

A DbSet represents a collection of entity instances, and it will be mapped to a real table in the database.
DbSet 表示实体实例的集合,它将映射到数据库中的实际表。

In this case, we will have just one table in the database, called Icecreams.
在本例中,数据库中只有一个名为 Icecreams 的表。

In Program.cs, after the builder initialization, add the following code:
在 Program.cs 中,在生成器初始化后,添加以下代码:

builder.Services.AddDbContext<IcecreamDb>(options => options.UseInMemoryDatabase("icecreams"));

Now we are ready to add some API endpoints to start interacting with the database.
现在我们准备添加一些 API 端点以开始与数据库交互。

Adding endpoints to the project
向项目添加端点

Let’s add the code to create a new item in the icecreams list. In Program.cs, add the following code before the app.Run() line of code:
让我们添加代码以在 icecreams 列表中创建一个新项目。在 Program.cs 中,在 app.Run() 这行代码之前添加以下代码:

app.MapPost("/icecreams", async (IcecreamDb db, Icecream icecream) =>
{
    await db.Icecreams.AddAsync(icecream);
    await db.SaveChangesAsync();
    return Results.Created($"/icecreams/{icecream.Id}",
                           icecream);
});

The first parameter of the MapPost function is the DbContext. By default, the minimal API architecture uses dependency injection to share the instances of the DbContext.
MapPost 函数的第一个参数是 DbContext。默认情况下,最小 API 体系结构使用依赖项注入来共享 DbContext 的实例。

Dependency injection
依赖关系注入

If you want to know more about dependency injection, go to Chapter 4, Dependency Injection in a Minimal API Project.
如果您想了解有关依赖注入的更多信息,请转到第 4 章 最小 API 项目中的依赖注入。

In order to save an item into the database, we use the AddAsync method directly on the DbSet that represents our entities.
为了将项保存到数据库中,我们直接在表示实体的 DbSet 上使用 AddAsync 方法。

To persist the new item in the database, we need to call the SaveChangesAsync() method, which is responsible for saving all the changes made since the last call to SaveChangesAsync().
要将新项持久化到数据库中,我们需要调用 SaveChangesAsync() 方法,该方法负责保存自上次调用 SaveChangesAsync() 以来发生的所有更改。
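This unit-of-work behavior can be sketched as follows (the ice cream names are hypothetical sample data):
这种工作单元行为可以用如下草图来说明(其中的冰淇淋名称是假设的示例数据):

```csharp
// At this point, both entities are only tracked in memory.
await db.Icecreams.AddAsync(new Icecream { Name = "Vanilla" });
await db.Icecreams.AddAsync(new Icecream { Name = "Chocolate" });

// Both pending inserts are applied to the database in this single call.
await db.SaveChangesAsync();
```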

In a very similar way, we can add the endpoint to retrieve all the items in the icecreams database.
以非常相似的方式,我们可以添加终端节点来检索 icecreams 数据库中的所有项目。

After the code to add an ice cream, we can add the following code:
在添加冰淇淋的代码之后,我们可以添加以下代码:

app.MapGet("/icecreams", async (IcecreamDb db) => await db.Icecreams.ToListAsync());

Also, in this case, the DbContext is available as a parameter and we can retrieve all the items in the database directly from the entities in the DbContext.
此外,在这种情况下,DbContext 可用作参数,我们可以直接从 DbContext 中的实体检索数据库中的所有项。

With the ToListAsync() method, the application loads all the entities in the database and sends them back as the endpoint result.
使用 ToListAsync() 方法,应用程序加载数据库中的所有实体,并将它们作为终端节点结果发送回去。

Make sure you have saved all your changes in the project and run the app.
确保您已保存项目中的所有更改并运行应用程序。

A new browser window will open, and you can navigate to the /swagger URL:
将打开一个新的浏览器窗口,您可以导航到 /swagger URL:

Figure 7.1 – Swagger browser window
图 7.1 – Swagger 浏览器窗口

Select the POST/icecreams button, followed by Try it out.
选择 POST/icecreams 按钮,然后选择 Try it out。

Replace the request body content with the following JSON:
将请求正文内容替换为以下 JSON:

{
  "id": 0,
  "name": "icecream 1",
  "description": "description 1"
}

Click on Execute:
单击 Execute:

Figure 7.2 – Swagger response
图 7.2 – Swagger 响应

Now we have at least one item in the database, and we can try the other endpoint to retrieve all the items in the database.
现在,数据库中至少有一个项目,我们可以尝试使用另一个端点来检索数据库中的所有项目。

Scroll down the page a little bit and select GET/icecreams, followed by Try it out and then Execute.
向下滚动页面并选择 GET/icecreams,然后选择 Try it out,然后选择 Execute。

You will see the list with one item under Response Body.
您将在 Response Body (响应正文) 下看到带有一个项目的列表。

Let’s see how to finalize this first demo by adding the other CRUD operations to our endpoints:
让我们看看如何通过将其他 CRUD 操作添加到我们的端点来完成第一个演示:

  1. To get an item by ID, add the following code under the app.MapGet route you created earlier:
    要按 ID 获取项目,请在之前创建的 app.MapGet 路由下添加以下代码:
app.MapGet("/icecreams/{id}", async (IcecreamDb db, int id) => await db.Icecreams.FindAsync(id));

To check this out, you can launch the application again and use the Swagger UI as before.
要检查这一点,您可以再次启动应用程序并像以前一样使用 Swagger UI。

  2. Next, add an item in the database by performing a post call (as in the previous section).
    接下来,通过执行 post 调用在数据库中添加一个项目(如上一节所示)。

  3. Click GET/icecreams/{id} followed by Try it out.
    单击 GET/icecreams/{id},然后选择 Try it out。

  4. Insert the value 1 in the id parameter field and then click on Execute.
    在 id 参数字段中插入值 1,然后单击 Execute。

  5. You will see the item in the Response Body section.
    您将在 Response Body (响应正文) 部分看到该项目。

  6. The following is an example of a response from the API:
    以下是来自 API 的响应示例:

{
  "id": 1,
  "name": "icecream 1",
  "description": "description 1"
}

This is what the response looks like:
响应如下所示:

Figure 7.3 – Response result
图 7.3 – 响应结果

To update an item by ID, we can create a new MapPut endpoint with two parameters: the item with the entity values and the ID of the old entity in the database that we want to update.
要按 ID 更新项目,我们可以创建一个具有两个参数的新 MapPut 终端节点:具有实体值的项目和数据库中要更新的旧实体的 ID。

The code should be like the following snippet:
代码应类似于以下代码段:

app.MapPut("/icecreams/{id}", async (IcecreamDb db, Icecream updateicecream, int id) =>
{
    var icecream = await db.Icecreams.FindAsync(id);
    if (icecream is null) return Results.NotFound();
    icecream.Name = updateicecream.Name;
    icecream.Description = updateicecream.Description;
    await db.SaveChangesAsync();
    return Results.NoContent();
});

Just to be clear, first of all, we need to find the item in the database with the ID from the parameters. If we don’t find an item in the database, it’s a good practice to return a Not Found HTTP status to the caller.
需要明确的是,首先,我们需要在数据库中找到具有参数中 ID 的项目。如果我们在数据库中找不到项目,最好将 Not Found HTTP 状态返回给调用者。

If we find the entity in the database, we update the entity with the new values and we save all the changes in the database before sending back the HTTP status No Content.
如果我们在数据库中找到实体,我们将使用新值更新实体,并在发回 HTTP 状态 No Content 之前保存数据库中的所有更改。

The last CRUD operation we need to perform is to delete an item from the database.
我们需要执行的最后一个 CRUD作是从数据库中删除一个项目。

This operation is very similar to the update operation because, first of all, we need to find the item in the database and then we can try to perform the delete operation.
此操作与更新操作非常相似,因为首先,我们需要在数据库中找到该项目,然后才能尝试执行删除操作。

The following code snippet shows how to implement a delete operation with the right HTTP verb of the minimal API:
以下代码片段显示了如何使用最小 API 的正确 HTTP 动词实现删除操作:

app.MapDelete("/icecreams/{id}", async (IcecreamDb db, int id) =>
{
    var icecream = await db.Icecreams.FindAsync(id);
    if (icecream is null)
    {
        return Results.NotFound();
    }
    db.Icecreams.Remove(icecream);
    await db.SaveChangesAsync();
    return Results.Ok();
});

In this section, we have learned how to use EF in a minimal API project.
在本节中,我们学习了如何在最小 API 项目中使用 EF。

We saw how to add the NuGet packages to start working with EF, and how to implement the entire set of CRUD operations in a minimal API .NET 6 project.
我们了解了如何添加 NuGet 包以开始使用 EF,以及如何在最小 API 的 .NET 6 项目中实现整套 CRUD 操作。

In the next section, we will see how to implement the same project with the same logic but using Dapper as the primary library to access data.
在下一节中,我们将了解如何使用相同的逻辑实现相同的项目,但使用 Dapper 作为主库来访问数据。

Using Dapper
使用 Dapper

Dapper is an Object-Relational Mapper (ORM) or, to be more precise, a micro ORM. With Dapper, we can write SQL statements directly in .NET projects like we can do in SQL Server (or another database). One of the best advantages of using Dapper in a project is the performance, because it doesn’t translate queries from .NET objects and doesn’t add any layers between the application and the library to access the database. It extends the IDbConnection object and provides a lot of methods to query the database. This means we have to write queries that are compatible with the database provider.
Dapper 是一个对象关系映射器 (ORM),或者更准确地说,是一个微型 ORM。使用 Dapper,我们可以直接在 .NET 项目中编写 SQL 语句,就像在 SQL Server(或其他数据库)中一样。在项目中使用 Dapper 的最大优势之一是性能,因为它不会转换来自 .NET 对象的查询,也不会在应用程序和库之间添加任何层来访问数据库。它扩展了 IDbConnection 对象,并提供了许多查询数据库的方法。这意味着我们必须编写与数据库提供程序兼容的查询。

It supports synchronous and asynchronous method executions. This is a list of the methods that Dapper adds to the IDbConnection interface:
它支持同步和异步方法执行。以下是 Dapper 添加到 IDbConnection 接口的方法列表:

• Execute
• Query
• QueryFirst
• QueryFirstOrDefault
• QuerySingle
• QuerySingleOrDefault
• QueryMultiple

As we mentioned, it provides an async version for all these methods. You can find the right methods by adding the Async keyword at the end of the method name.
正如我们所提到的,它为所有这些方法提供了一个异步版本。您可以通过在方法名称的末尾添加 Async 关键字来查找正确的方法。
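As a quick illustration of how these methods differ (a sketch; it assumes an open IDbConnection named connection and the Icecreams table that we will create later in this chapter):
为了快速说明这些方法的区别(这是一个草图;它假设存在一个名为 connection 的已打开 IDbConnection,以及本章稍后将创建的 Icecreams 表):

```csharp
// QueryAsync: returns zero or more rows.
var all = await connection.QueryAsync<Icecream>("SELECT * FROM Icecreams");

// QueryFirstOrDefaultAsync: returns the first row, or null if none exists.
var first = await connection.QueryFirstOrDefaultAsync<Icecream>(
    "SELECT * FROM Icecreams WHERE Id = @Id", new { Id = 1 });

// ExecuteAsync: for statements that return no rows; the result is the
// number of affected rows.
var affected = await connection.ExecuteAsync(
    "DELETE FROM Icecreams WHERE Id = @Id", new { Id = 1 });
```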

In the next section, we will see how to set up a project for using Dapper with a SQL Server LocalDB.
在下一节中,我们将了解如何设置一个项目,以便将 Dapper 与 SQL Server LocalDB 结合使用。

Setting up the project
设置项目

The first thing we are going to do is to create a new database. You can use your SQL Server LocalDB instance installed with Visual Studio by default or another SQL Server instance in your environment.
我们要做的第一件事是创建一个新数据库。您可以使用默认随 Visual Studio 一起安装的 SQL Server LocalDB 实例,也可以使用环境中的其他 SQL Server 实例。

You can execute the following script in your database to create one table and populate it with data:
您可以在数据库中执行以下脚本来创建一个表并使用数据填充它:

CREATE TABLE [dbo].[Icecreams](
     [Id] [int] IDENTITY(1,1) NOT NULL,
     [Name] [nvarchar](50) NOT NULL,
     [Description] [nvarchar](255) NOT NULL)
GO
INSERT [dbo].[Icecreams] ([Name], [Description]) VALUES ('Icecream 1','Description 1')
INSERT [dbo].[Icecreams] ([Name], [Description]) VALUES ('Icecream 2','Description 2')
INSERT [dbo].[Icecreams] ([Name], [Description]) VALUES ('Icecream 3','Description 3')

Once we have the database, we can install these NuGet packages with the following command in the Visual Studio terminal:
拥有数据库后,我们可以在 Visual Studio 终端中使用以下命令安装这些 NuGet 包:

Install-Package Dapper
Install-Package Microsoft.Data.SqlClient

Now we can continue to add the code to interact with the database. In this example, we are going to use a repository pattern.
现在我们可以继续添加代码以与数据库交互。在此示例中,我们将使用存储库模式。

Creating a repository pattern
创建存储库模式

In this section, we are going to create a simple repository pattern, but we will try to make it as simple as possible so we can understand the main features of Dapper:
在本节中,我们将创建一个简单的存储库模式,但我们将尝试使其尽可能简单,以便我们了解 Dapper 的主要功能:

  1. In the Program.cs file, add a simple class that represents our entity in the database:
    在 Program.cs 文件中,添加一个表示数据库中实体的简单类:

    public class Icecream
    {
        public int Id { get; set; }
        public string? Name { get; set; }
        public string? Description { get; set; }
    }
  2. After this, modify the appsettings.json file by adding the connection string at the end of the file (a JSON string cannot span multiple lines, so it goes on a single line):
    在此之后,通过在文件末尾添加连接字符串来修改 appsettings.json 文件(JSON 字符串不能跨多行,因此它必须写在一行中):

    "ConnectionStrings": {
      "SqlConnection": "Data Source=(localdb)\\MSSQLLocalDB;Initial Catalog=Chapter07;Integrated Security=True;Connect Timeout=30;Encrypt=False;TrustServerCertificate=False;"
    }

If you are using LocalDB, the connection string should be the right one for your environment as well.
如果您使用的是 LocalDB,则连接字符串也应适合您的环境。

  3. Create a new class in the root of the project called DapperContext and give it the following code:
    在项目的根目录中创建一个名为 DapperContext 的新类,并为其提供以下代码:

    public class DapperContext
    {
        private readonly IConfiguration _configuration;
        private readonly string _connectionString;

        public DapperContext(IConfiguration configuration)
        {
            _configuration = configuration;
            _connectionString = _configuration
              .GetConnectionString("SqlConnection");
        }

        public IDbConnection CreateConnection()
            => new SqlConnection(_connectionString);
    }

We inject the IConfiguration interface through dependency injection to retrieve the connection string from the settings file.
我们通过依赖注入注入 IConfiguration 接口,以便从设置文件中检索连接字符串。

  4. Now we are going to create the interface and the implementation of our repository. In order to do that, add the following code to the Program.cs file:
    现在,我们将创建接口和存储库的实现。为此,请将以下代码添加到 Program.cs 文件中:
public interface IIcecreamsRepository
{
}
public class IcecreamsRepository : IIcecreamsRepository
{
    private readonly DapperContext _context;
    public IcecreamsRepository(DapperContext context)
    {
        _context = context;
    }
}

In the next sections, we will be adding some code to the interface and to the implementation of the repository.
在接下来的部分中,我们将向接口和存储库的实现添加一些代码。

Finally, we can register the context, the interface, and its implementation as a service.
最后,我们可以将上下文、接口及其实现注册为服务。

  5. Let’s put the following code after the builder initialization in the Program.cs file:
    让我们在 builder 初始化后将以下代码放入 Program.cs 文件中:

    builder.Services.AddSingleton<DapperContext>();
    builder.Services.AddScoped<IIcecreamsRepository, IcecreamsRepository>();

Now we are ready to implement the first query.
现在我们已准备好实现第一个查询。

Using Dapper to query the database
使用 Dapper 查询数据库

First of all, let’s modify the IIcecreamsRepository interface by adding a new method:
首先,我们通过添加新方法来修改 IIcecreamsRepository 接口:

public Task<IEnumerable<Icecream>> GetIcecreams();

Then, let’s implement this method in the IcecreamsRepository class:
然后,让我们在 IcecreamsRepository 类中实现此方法:

public async Task<IEnumerable<Icecream>> GetIcecreams()
{
    var query = "SELECT * FROM Icecreams";
    using (var connection = _context.CreateConnection())
    {
        var result = 
          await connection.QueryAsync<Icecream>(query);
        return result.ToList();
    }
}

Let’s try to understand all the steps in this method. We created a string called query, where we store the SQL query to fetch all the entities from the database.
让我们尝试了解此方法中的所有步骤。我们创建了一个名为 query 的字符串,我们在其中存储 SQL 查询以从数据库中获取所有实体。

Then, inside the using statement, we used DapperContext to create the connection.
然后,在 using 语句中,我们使用 DapperContext 创建连接。

Once the connection was created, we used it to call the QueryAsync method and passed the query as an argument.
创建连接后,我们使用它来调用 QueryAsync 方法并将查询作为参数传递。

When the results return from the database, Dapper automatically converts them into IEnumerable&lt;T&gt;.
当结果从数据库返回时,Dapper 会自动将它们转换为 IEnumerable&lt;T&gt;。

The following is the final code of the interface and our first implementation:
以下是接口的最终代码和我们的第一个实现:

public interface IIcecreamsRepository
{
    public Task<IEnumerable<Icecream>> GetIcecreams();
}
public class IcecreamsRepository : IIcecreamsRepository
{
    private readonly DapperContext _context;
    public IcecreamsRepository(DapperContext context)
    {
        _context = context;
    }
    public async Task<IEnumerable<Icecream>> GetIcecreams()
    {
        var query = "SELECT * FROM Icecreams";
        using (var connection =
              _context.CreateConnection())
        {
            var result = 
              await connection.QueryAsync<Icecream>(query);
            return result.ToList();
        }
    }
}

In the next section, we will see how to add a new entity to the database and how to use the ExecuteAsync method to run a query.
在下一节中,我们将了解如何向数据库添加新实体,以及如何使用 ExecuteAsync 方法运行查询。

Adding a new entity in the database with Dapper
使用 Dapper 在数据库中添加新实体

Now we are going to manage adding a new entity to the database for future implementations of the API post request.
现在,我们将管理向数据库添加新实体,以便将来实现 API post 请求。

Let’s modify the interface by adding a new method called CreateIcecream with an input parameter of the Icecream type:
让我们通过添加一个名为 CreateIcecream 的新方法来修改接口,该方法的输入参数为 Icecream 类型:

public Task CreateIcecream(Icecream icecream);

Now we must implement this method in the repository class:
现在我们必须在 repository 类中实现此方法:

public async Task CreateIcecream(Icecream icecream)
{
    var query = "INSERT INTO Icecreams (Name, Description) VALUES (@Name, @Description)";
    var parameters = new DynamicParameters();
    parameters.Add("Name", icecream.Name, DbType.String);
    parameters.Add("Description", icecream.Description,
                    DbType.String);
    using (var connection = _context.CreateConnection())
    {
        await connection.ExecuteAsync(query, parameters);
    }
}

Here, we create the query and a dynamic parameters object to pass all the values to the database.
在这里,我们创建查询和动态参数对象,以将所有值传递给数据库。

We populate the parameters with the values from the Icecream object in the method parameter.
我们在 method 参数中使用 Icecream 对象的值填充参数。

We create the connection with the Dapper context and then we use the ExecuteAsync method to execute the INSERT statement.
我们使用 Dapper 上下文创建连接,然后使用 ExecuteAsync 方法执行 INSERT 语句。

This method returns an integer value as a result, representing the number of affected rows in the database. In this case, we don’t use this information, but you can return this value as the result of the method if you need it.
此方法返回一个整数值作为结果,该值表示数据库中受影响的行数。在这种情况下,我们不会使用此信息,但如果需要,可以将此值作为方法的结果返回。
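For illustration, a hypothetical variant of CreateIcecream (not part of the chapter's sample) could surface that affected-row count to the caller:
作为示例,CreateIcecream 的一个假设变体(并非本章示例的一部分)可以将受影响的行数返回给调用方:

```csharp
// Hypothetical variant: same INSERT as above, but the affected-row
// count returned by ExecuteAsync is propagated to the caller.
public async Task<int> CreateIcecreamReturningCount(Icecream icecream)
{
    var query = "INSERT INTO Icecreams (Name, Description) VALUES (@Name, @Description)";
    var parameters = new DynamicParameters();
    parameters.Add("Name", icecream.Name, DbType.String);
    parameters.Add("Description", icecream.Description, DbType.String);

    using (var connection = _context.CreateConnection())
    {
        // 1 on success, 0 if nothing was inserted
        return await connection.ExecuteAsync(query, parameters);
    }
}
```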

Implementing the repository in the endpoints
在端点中实施存储库

To add the final touch to our minimal API, we need to implement the two endpoints to manage all the methods in our repository pattern:
为了对我们的最小 API 进行最后的润色,我们需要实现两个端点来管理存储库模式中的所有方法:

app.MapPost("/icecreams", async (IIcecreamsRepository repository, Icecream icecream) =>
{
    await repository.CreateIcecream(icecream);
    return Results.Ok();
});
app.MapGet("/icecreams", async (IIcecreamsRepository repository) => await repository.GetIcecreams());

In both map methods, we pass the repository as a parameter because, as is usual in minimal APIs, services are injected as parameters of the route handlers.
在这两个 map 方法中,我们都将存储库作为参数传递,因为按照最小 API 的惯例,服务会作为参数注入到路由处理程序中。

This means that the repository is always available in all parts of the code.
这意味着存储库在代码的所有部分中始终可用。

In the MapGet endpoint, we use the repository to load all the entities from the implementation of the repository and we use the result as the result of the endpoint.
在 MapGet 端点中,我们使用存储库加载存储库实现中的所有实体,并将结果用作端点的结果。

In the MapPost endpoint, in addition to the repository parameter, we accept also the Icecream entity from the body of the request and we use the same entity as a parameter to the CreateIcecream method of the repository.
在 MapPost 终端节点中,除了存储库参数之外,我们还接受请求正文中的 Icecream 实体,并将同一实体用作存储库的 CreateIcecream 方法的参数。

Summary
总结

In this chapter, we learned how to interact with a data access layer in a minimal API project with the two most common tools in a real-world scenario: EF and Dapper.
在本章中,我们学习了如何使用实际场景中最常用的两种工具(EF 和 Dapper)与最小 API 项目中的数据访问层进行交互。

For EF, we covered some basic features, such as setting up a project to use this ORM and how to perform some basic operations to implement a full CRUD API endpoint.
对于 EF,我们介绍了一些基本功能,例如设置项目以使用此 ORM,以及如何执行一些基本操作来实现完整的 CRUD API 终端节点。

We did basically the same thing with Dapper as well, starting from an empty project, adding Dapper, setting up the project for working with a SQL Server LocalDB, and implementing some basic interactions with the entities of the database.
我们对 Dapper 也做了基本相同的操作,从一个空项目开始,添加 Dapper,设置项目以使用 SQL Server LocalDB,并实现与数据库实体的一些基本交互。

In the next chapter, we’ll focus on authentication and authorization in a minimal API project. It’s important, first of all, to protect your data in the database.
在下一章中,我们将重点介绍最小 API 项目中的身份验证和授权。首先,保护数据库中的数据很重要。

Part 3: Advanced Development and Microservices Concepts

第 3 部分:高级开发和微服务概念

In this advanced section of the book, we want to show more scenarios that are typical in backend development. We will also go over the performance of this new framework and understand the scenarios in which it is really useful.
在本书的这个高级部分,我们想展示更多后端开发中的典型场景。我们还将介绍这个新框架的性能,并了解它真正有用的场景。

We will cover the following chapters in this section:
在本节中,我们将介绍以下章节:

Chapter 8, Adding Authentication and Authorization
第 8 章 添加验证和授权

Chapter 9, Leveraging Globalization and Localization
第 9 章 利用全球化和本地化

Chapter 10, Evaluating and Benchmarking the Performance of Minimal APIs
第 10 章 评估最小 API 的性能并对其进行基准测试

8 Adding Authentication and Authorization

8 添加身份验证和授权

Any kind of application must deal with authentication and authorization. Often, these terms are used interchangeably, but they actually refer to different scenarios. In this chapter of the book, we will explain the difference between authentication and authorization and show how to add these features to a minimal API project.
任何类型的应用程序都必须处理身份验证和授权。通常,这些术语可以互换使用,但它们实际上指的是不同的场景。在本书的这一章中,我们将解释身份验证和授权之间的区别,并展示如何将这些功能添加到最小的 API 项目中。

Authentication can be performed in many different ways: using local accounts with external login providers, such as Microsoft, Google, Facebook, and Twitter; using Azure Active Directory and Azure B2C; and using authentication servers such as Identity Server and Okta. Moreover, we may have to deal with requirements such as two-factor authentication and refresh tokens. In this chapter, however, we will focus on the general aspects of authentication and authorization and see how to implement them in a minimal API project, in order to provide a general understanding of the topic. The information and samples that will be provided will show how to effectively work with authentication and authorization and how to customize their behaviors according to our requirements.
可以通过多种不同的方式执行身份验证:使用外部登录提供程序(如 Microsoft、Google、Facebook 和 Twitter)的本地帐户;使用 Azure Active Directory 和 Azure B2C;以及使用 Identity Server 和 Okta 等身份验证服务器。此外,我们可能必须处理双重身份验证和刷新令牌等要求。但是,在本章中,我们将重点介绍身份验证和授权的一般方面,并了解如何在最小的 API 项目中实现它们,以便对该主题有一个大致的理解。将提供的信息和示例将展示如何有效地使用身份验证和授权,以及如何根据我们的要求自定义它们的行为。

In this chapter, we will be covering the following topics:
在本章中,我们将介绍以下主题:

• Introducing authentication and authorization
身份验证和授权简介

• Protecting a minimal API
保护最小 API

• Handling authorization – roles and policies
处理授权 – 角色和策略

Technical requirements
技术要求

To follow the examples in this chapter, you will need to create an ASP.NET Core 6.0 Web API application. Refer to the Technical requirements section in Chapter 2, Exploring Minimal APIs and Their Advantages, for instructions on how to do so.
要遵循本章中的示例,您需要创建一个 ASP.NET Core 6.0 Web API 应用程序。有关如何执行此操作的说明,请参阅第 2 章 “探索最小 API 及其优势”中的“技术要求”部分。

If you’re using your console, shell, or Bash terminal to create the API, remember to change your working directory to the current chapter number: Chapter08.
如果您使用控制台、shell 或 Bash 终端创建 API,请记住将工作目录更改为当前章节编号:Chapter08。

All the code samples in this chapter can be found in the GitHub repository for this book at https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter08.
本章中的所有代码示例都可以在本书的 GitHub 存储库中找到,网址为 https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter08

Introducing authentication and authorization
身份验证和授权简介

As said at the beginning, the terms authentication and authorization are often used interchangeably, but they represent different security functions. Authentication is the process of verifying that users are who they say they are, while authorization is the task of granting an authenticated user permission to do something. So, authorization must always follow authentication.
如开头所述,术语 authentication 和 authorization 经常互换使用,但它们代表不同的安全功能。身份验证是验证用户是否是他们所声称的身份的过程,而授权是授予经过身份验证的用户执行某项作的权限的任务。因此,授权必须始终遵循身份验证。

Let’s think about the security in an airport: first, you show your ID to authenticate your identity; then, at the gate, you present the boarding pass to be authorized to board the flight and get access to the plane.
让我们以机场安检为例:首先,您出示身份证件以验证您的身份;然后,在登机口,您出示登机牌,获得登机授权并得以进入飞机。

Authentication and authorization in ASP.NET Core are handled by corresponding middleware and work in the same way in minimal APIs and controller-based projects. They allow the restriction of access to endpoints depending on user identity, roles, policies, and so on, as we’ll see in detail in the following sections.
ASP.NET Core 中的身份验证和授权由相应的中间件处理,并且在最小 API 和基于控制器的项目中以相同的方式工作。它们允许根据用户身份、角色、策略等限制对终端节点的访问,我们将在以下部分中详细介绍。

You can find a great overview of ASP.NET Core authentication and authorization in the official documentation available at https://docs.microsoft.com/aspnet/core/security/authentication and https://docs.microsoft.com/aspnet/core/security/authorization.
您可以在 https://docs.microsoft.com/aspnet/core/security/authenticationhttps://docs.microsoft.com/aspnet/core/security/authorization 上提供的官方文档中找到 ASP.NET Core 身份验证和授权的精彩概述。

Protecting a minimal API
保护最小 API

Protecting a minimal API means correctly setting up authentication and authorization. There are many types of authentication solutions that are adopted in modern applications. In web applications, we typically use cookies, while when dealing with web APIs, we use methods such as an API key, basic authentication, and JSON Web Token (JWT). JWTs are the most commonly used, and in the rest of the chapter, we’ll focus on this solution.
保护最小 API 意味着正确设置身份验证和授权。现代应用程序中采用的身份验证解决方案有多种类型。在 Web 应用程序中,我们通常使用 cookie,而在处理 Web API 时,我们使用 API 密钥、基本身份验证和 JSON Web 令牌 (JWT) 等方法。JWT 是最常用的,在本章的其余部分,我们将重点介绍此解决方案。

Note : A good starting point to understand what JWTs are and how they are used is available at https://jwt.io/introduction.
注意 : 了解 JWT 是什么以及如何使用 JWT 的良好起点位于 https://jwt.io/introduction

To enable authentication and authorization based on JWT, the first thing to do is to add the Microsoft.AspNetCore.Authentication.JwtBearer NuGet package to our project, using one of the following ways:
要启用基于 JWT 的身份验证和授权,首先要做的是使用以下方法之一将 Microsoft.AspNetCore.Authentication.JwtBearer NuGet 包添加到我们的项目中:

• Option 1: If you’re using Visual Studio 2022, right-click on the project and choose the Manage NuGet Packages command to open Package Manager GUI, then search for Microsoft.AspNetCore.Authentication.JwtBearer and click on Install.
选项 1:如果您使用的是 Visual Studio 2022,请右键单击项目并选择“管理 NuGet 包”命令以打开包管理器 GUI,然后搜索 Microsoft.AspNetCore.Authentication.JwtBearer 并单击“安装”。

• Option 2: Open Package Manager Console if you’re inside Visual Studio 2022, or open your console, shell, or Bash terminal, go to your project directory, and execute the following command:
选项 2:如果您在 Visual Studio 2022 中,请打开包管理器控制台,或者打开控制台、shell 或 Bash 终端,转到您的项目目录,然后执行以下命令:
dotnet add package Microsoft.AspNetCore.Authentication.JwtBearer

Now, we need to add authentication and authorization services to the service provider, so that they are available through dependency injection:
现在,我们需要向服务提供商添加身份验证和授权服务,以便它们可以通过依赖项注入使用:

var builder = WebApplication.CreateBuilder(args);
//...
builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme).AddJwtBearer();
builder.Services.AddAuthorization();

This is the minimum code that is necessary to add JWT authentication and authorization support to an ASP.NET Core project. It isn’t a real working solution yet, because it is missing the actual configuration, but it is enough to verify how endpoint protection works.
这是向 ASP.NET Core 项目添加 JWT 身份验证和授权支持所需的最少代码。它还不是一个真正的有效解决方案,因为它缺少实际配置,但足以验证 Endpoint Protection 的工作原理。

In the AddAuthentication() method, we specify that we want to use the bearer authentication scheme. This is an HTTP authentication scheme that involves security tokens that are in fact called bearer tokens. These tokens must be sent in the Authorization HTTP header with the format Authorization: Bearer <token>. Then, we call AddJwtBearer() to tell ASP.NET Core that it must expect a bearer token in the JWT format. As we’ll see later, the bearer token is an encoded string generated by the server in response to a login request. After that, we use AddAuthorization() to also add authorization services.
在 AddAuthentication() 方法中,我们指定要使用不记名身份验证方案。这是一种 HTTP 身份验证方案,它涉及实际上称为持有者令牌的安全令牌。这些令牌必须在 Authorization HTTP 标头中以 Authorization: Bearer <token> 格式发送。然后,我们调用 AddJwtBearer() 来告诉 ASP.NET Core 它应当预期收到 JWT 格式的不记名令牌。正如我们稍后将看到的,持有者令牌是服务器为响应登录请求而生成的编码字符串。之后,我们还使用 AddAuthorization() 添加授权服务。

Now, we need to insert authentication and authorization middleware in the pipeline so that ASP.NET Core will be instructed to check the token and apply all the authorization rules:
现在,我们需要在管道中插入身份验证和授权中间件,以便指示 ASP.NET Core 检查令牌并应用所有授权规则:

var app = builder.Build();
//..
app.UseAuthentication();
app.UseAuthorization();
//...
app.Run();

Important Note : We have said that authorization must follow authentication. This means that the authentication middleware must come first; otherwise, the security will not work as expected.
重要提示 : 我们已经说过,授权必须在身份验证之后进行。这意味着身份验证中间件必须放在第一位;否则,安全性将无法按预期工作。

Finally, we can protect our endpoints using the Authorize attribute or the RequireAuthorization() method:
最后,我们可以使用 Authorize 属性或 RequireAuthorization() 方法保护我们的端点:

app.MapGet("/api/attribute-protected", [Authorize] () => "This endpoint is protected using the Authorize attribute");
app.MapGet("/api/method-protected", () => "This endpoint is protected using the RequireAuthorization method")
.RequireAuthorization();

Note : The ability to specify an attribute directly on a lambda expression (as in the first endpoint of the previous example) is a new feature of C# 10.
注意 : 直接在 lambda 表达式上指定属性的功能(如上一个示例的第一个终结点所示)是 C# 10 的一项新功能。

If we now try to call each of these methods using Swagger, we’ll get a 401 Unauthorized response, which should look as follows:
如果我们现在尝试使用 Swagger 调用这些方法中的每一个,我们将得到一个 401 未授权的响应,它应该如下所示:

Figure 8.1 – Unauthorized response in Swagger
图 8.1 – Swagger 中未经授权的响应

Note that the message contains a header indicating that the expected authentication scheme is Bearer, as we have declared in the code.
请注意,该消息包含一个标头,指示预期的身份验证方案是 Bearer,正如我们在代码中声明的那样。

So, now we know how to restrict access to our endpoints to authenticated users. But our work isn’t finished: we need to generate a JWT bearer, validate it, and find a way to pass such a token to Swagger so that we can test our protected endpoints.
因此,现在我们知道如何将对终端节点的访问限制为经过身份验证的用户。但我们的工作还没有完成:我们需要生成一个 JWT bearer,验证它,并找到一种方法将这样的令牌传递给 Swagger,以便我们可以测试受保护的端点。

Generating a JWT bearer
生成 JWT 持有者

We have said that a JWT bearer is generated by the server as a response to a login request. ASP.NET Core provides all the APIs we need to create it, so let’s see how to perform this task.
我们已经说过,JWT bearer 是由服务器生成的,作为对登录请求的响应。ASP.NET Core 提供了创建它所需的所有 API,让我们看看如何执行此任务。

The first thing to do is to define the login request endpoint to authenticate the user with their username and password:
首先要做的是定义登录请求端点,以使用用户的用户名和密码对用户进行身份验证:

app.MapPost("/api/auth/login", (LoginRequest request) =>
{
    if (request.Username == "marco" && request.Password == 
        "P@$$w0rd")
    {
        // Generate the JWT bearer...
    }
    return Results.BadRequest();
});

For the sake of simplicity, in the preceding example, we have used hardcoded values, but in a real application, we’d use, for example, ASP.NET Core Identity, the part of ASP.NET Core that is responsible for user management. More information on this topic is available in the official documentation at https://docs.microsoft.com/aspnet/core/security/authentication/identity.
为简单起见,在前面的示例中,我们使用了硬编码值,但在实际应用程序中,我们会使用例如 ASP.NET Core Identity,它是 ASP.NET Core 中负责用户管理的部分。有关此主题的更多信息,请参阅 https://docs.microsoft.com/aspnet/core/security/authentication/identity 的官方文档。

In a typical login workflow, if the credentials are invalid, we return a 400 Bad Request response to the client. If, instead, the username and password are correct, we can effectively generate a JWT bearer, using the classes available in ASP.NET Core:
在典型的登录工作流程中,如果凭证无效,我们会向客户端返回 400 Bad Request 响应。相反,如果用户名和密码正确,我们可以使用 ASP.NET Core 中可用的类有效地生成 JWT bearer:

var claims = new List<Claim>()
{
    new(ClaimTypes.Name, request.Username)
};
var securityKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes("mysecuritystring"));
var credentials = new SigningCredentials(securityKey, SecurityAlgorithms.HmacSha256);
var jwtSecurityToken = new JwtSecurityToken(
    issuer: "https://www.packtpub.com",
    audience: "Minimal APIs Client",
    claims: claims, expires: DateTime.UtcNow.AddHours(1), 
      signingCredentials: credentials);
var accessToken = new JwtSecurityTokenHandler()
  .WriteToken(jwtSecurityToken);
return Results.Ok(new { AccessToken = accessToken });

JWT bearer creation involves many different concepts, but through the preceding code example, we’ll focus on the basic ones. This kind of bearer contains information that allows verifying the user identity, along with other declarations that describe the properties of the user. These properties are called claims and are expressed as string key-value pairs. In the preceding code, we created a list with a single claim that contains the username. We can add as many claims as we need, and we can also have claims with the same name. In the next sections, we’ll see how to use claims, for example, to enforce authorization.
JWT bearer 创建涉及许多不同的概念,但通过前面的代码示例,我们将重点介绍基本概念。这种类型的 bearer 包含允许验证用户身份的信息,以及描述用户属性的其他声明。这些属性称为声明,表示为字符串键值对。在前面的代码中,我们创建了一个列表,其中包含一个包含用户名的声明。我们可以根据需要添加任意数量的声明,也可以拥有具有相同名称的声明。在接下来的部分中,我们将了解如何使用声明,例如,强制实施授权。

Next in the preceding code, we defined the credentials (SigningCredentials) to sign the JWT bearer. The signature depends on the actual token content and is used to check that the token hasn’t been tampered with. In fact, if we change anything in the token, such as a claim value, the signature will consequentially change. As the key to sign the bearer is known only by the server, it is impossible for a third party to modify the token and sustain its validity. In the preceding code, we used SymmetricSecurityKey, which is never shared with clients.
接下来,在前面的代码中,我们定义了凭证 (SigningCredentials) 来对 JWT 持有者进行签名。签名取决于实际的 Token 内容,用于检查 Token 是否未被篡改。事实上,如果我们更改 Token 中的任何内容,例如声明值,签名也会随之更改。由于对 bearer 进行签名的密钥只有服务器知道,因此第三方无法修改 Token 并维持其有效性。在上面的代码中,我们使用了 SymmetricSecurityKey,它永远不会与客户端共享。

We used a short string to create the credentials, but the only requirement is that the key should be at least 32 bytes or 16 characters long. In .NET, strings are Unicode and therefore, each character takes 2 bytes. We also needed to set the algorithm that the credentials will use to sign the token. To this end, we have specified the Hash-Based Message Authentication Code (HMAC) and the hash function, SHA256, specifying the SecurityAlgorithms.HmacSha256 value. This algorithm is quite a common choice in these kinds of scenarios.
我们使用了一个短字符串来创建凭证,但唯一的要求是密钥应至少为 32 字节或 16 个字符长。在 .NET 中,字符串是 Unicode,因此每个字符占用 2 个字节。我们还需要设置凭证将用于对令牌进行签名的算法。为此,我们指定了基于哈希的消息身份验证代码 (HMAC) 和哈希函数 SHA256,并指定了 SecurityAlgorithms.HmacSha256 值。在这类场景中,这种算法是一个非常常见的选择。

Note : You can find more information about the HMAC and the SHA256 hash function at https://docs.microsoft.com/dotnet/api/system.security.cryptography.hmacsha256#remarks.
注意 : 您可以在 https://docs.microsoft.com/dotnet/api/system.security.cryptography.hmacsha256#remarks 中找到有关 HMAC 和 SHA256 哈希函数的更多信息。

By this point in the preceding code, we finally have all the information to create the token, so we can instantiate a JwtSecurityToken object. This class can use many parameters to build the token, but for the sake of simplicity, we have specified only the minimum set for a working example:
在前面的代码中,到这一点时,我们终于拥有了创建令牌的所有信息,因此我们可以实例化 JwtSecurityToken 对象。这个类可以使用许多参数来构建令牌,但为了简单起见,我们只为工作示例指定了最小集:

• Issuer: A string (typically a URI) that identifies the name of the entity that is creating the token
颁发者:一个字符串(通常是 URI),用于标识创建令牌的实体的名称

• Audience: The recipient that the JWT is intended for, that is, who can consume the token
受众:JWT 的目标接收者,即可以使用令牌的用户

• The list of claims
声明列表

• The expiration time of the token (in UTC)
令牌的过期时间(以 UTC 表示)

• The signing credentials
签名凭证

Tip : In the preceding code example, values used to build the token are hardcoded, but in a real-life application, we should place them in an external source, for example, in the appsettings.json configuration file.
提示 : 在前面的代码示例中,用于构建令牌的值是硬编码的,但在实际应用程序中,我们应该将它们放在外部源中,例如,在 appsettings.json 配置文件中。

You can find further information on creating a token at https://docs.microsoft.com/dotnet/api/system.identitymodel.tokens.jwt.jwtsecuritytoken.
您可以在 https://docs.microsoft.com/dotnet/api/system.identitymodel.tokens.jwt.jwtsecuritytoken 中找到有关创建令牌的更多信息。

After all the preceding steps, we could create JwtSecurityTokenHandler, which is responsible for actually generating the bearer token and returning it to the caller with a 200 OK response.
完成上述所有步骤后,我们可以创建 JwtSecurityTokenHandler,它负责实际生成不记名令牌并将其返回给调用方,并给出 200 OK 响应。

So, now we can try the login endpoint in Swagger. After inserting the correct username and password and clicking the Execute button, we will get the following response:
所以,现在我们可以尝试 Swagger 中的登录端点。在插入正确的用户名和密码并单击 Execute 按钮后,我们将得到以下响应:

Figure 8.2 – The JWT bearer as a result of the login request in Swagger
图 8.2 – Swagger 中登录请求的结果 JWT 持有者

We can copy the token value and insert it in the URL of the site https://jwt.ms to see what it contains. We’ll get something like this:
我们可以复制 token 值并将其插入到站点的 URL 中 https://jwt.ms 以查看它包含的内容。我们将得到如下结果:

{
  "alg": "HS256",
  "typ": "JWT"
}.{
  "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": "marco",
  "exp": 1644431527,
  "iss": "https://www.packtpub.com",
  "aud": "Minimal APIs Client"
}.[Signature]

In particular, we see the claims that have been configured:
具体而言,我们会看到已配置的声明:

• name: The name of the logged user
name:已登录用户的名称

• exp: The token expiration time, expressed in Unix epoch
exp:Token 过期时间,以 Unix 纪元表示

• iss: The issuer of the token
iss:令牌的发行者

• aud: The audience (receiver) of the token
aud:令牌的受众(接收者)
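Since exp is expressed in seconds since the Unix epoch, it can be turned back into a readable instant; for example, with the value shown above:
由于 exp 以 Unix 纪元以来的秒数表示,因此可以将其转换回可读的时间;例如,使用上面显示的值:

```csharp
// Convert the exp claim (Unix epoch seconds) back to a UTC instant.
var expires = DateTimeOffset.FromUnixTimeSeconds(1644431527);
Console.WriteLine(expires.UtcDateTime);  // the moment the token stops being valid, in UTC
```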

This is the raw view, but we can switch to the Claims tab to see the decoded list of all the claims, with a description of their meaning, where available.
这是原始视图,但我们可以切换到 Claims 选项卡,查看所有声明的解码列表,以及其含义的描述(如果可用)。

There is one important point that requires attention: by default, the JWT bearer isn’t encrypted (it’s just a Base64-encoded string), so everyone can read its content. Token security does not depend on the inability to be decoded, but on the fact that it is signed. Even if the token’s content is clear, it is impossible to modify it because in this case, the signature (which uses a key that is known only by the server) will become invalid.
有一点需要注意:默认情况下,JWT bearer 未加密(它只是一个 Base64 编码的字符串),因此每个人都可以读取其内容。令牌安全性不取决于无法解码,而是取决于它是否已签名。即使 Token 的内容很清楚,也无法修改它,因为在这种情况下,签名(使用只有服务器知道的密钥)将失效。
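To see this for ourselves, we can decode the payload without any key at all; a minimal sketch (the token variable is assumed to hold the bearer obtained from the login endpoint):
为了亲自验证这一点,我们可以在完全不需要密钥的情况下解码有效负载;以下是一个最小示例(假设 token 变量保存从登录端点获取的不记名令牌):

```csharp
// The payload is the middle segment of the token, Base64Url-encoded JSON.
var payload = token.Split('.')[1];

// Base64Url uses '-' and '_' and drops padding, so normalize before decoding.
payload = payload.Replace('-', '+').Replace('_', '/');
payload = payload.PadRight(payload.Length + (4 - payload.Length % 4) % 4, '=');

var json = Encoding.UTF8.GetString(Convert.FromBase64String(payload));
Console.WriteLine(json);  // the claims, readable by anyone holding the token
```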

So, it’s important not to insert sensitive data in the token; claims such as usernames, user IDs, and roles are usually fine, but, for example, we should not insert information related to privacy. To give a deliberately exaggerated example, we mustn’t insert a credit card number in the token! In any case, keep in mind that even Microsoft’s Azure Active Directory uses JWTs without encryption, so we can trust this security system.
因此,不要在令牌中插入敏感数据非常重要;用户名、用户 ID 和角色等声明通常没问题,但例如,我们不应插入与隐私相关的信息。举一个故意夸大的例子,我们不能在令牌中插入信用卡号!无论如何,请记住,即使是 Microsoft 的 Azure Active Directory 也使用未加密的 JWT,因此我们可以信任这种安全机制。

In conclusion, we have described how to obtain a valid JWT. The next steps are to pass the token to our protected endpoints and instruct our minimal API on how to validate it.
总之,我们已经描述了如何获取有效的 JWT。接下来的步骤是将令牌传递给我们受保护的终端节点,并指示我们的最小 API 如何验证它。

Validating a JWT bearer
验证 JWT 持有者

After creating the JWT bearer, we need to pass it in every HTTP request, inside the Authorization HTTP header, so that ASP.NET Core can verify its validity and allow us to invoke the protected endpoints. So, we have to complete the AddJwtBearer() method invocation that we showed earlier with the description of the rules to validate the bearer:
创建 JWT 不记名后,我们需要在 Authorization HTTP 标头内的每个 HTTP 请求中传递它,以便 ASP.NET Core 可以验证其有效性并允许我们调用受保护的端点。因此,我们必须完成之前展示的 AddJwtBearer() 方法调用,其中包含验证 bearer 的规则说明:

builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
.AddJwtBearer(options =>
{
    options.TokenValidationParameters = new TokenValidationParameters
    {
        ValidateIssuerSigningKey = true,
        IssuerSigningKey = new SymmetricSecurityKey(
          Encoding.UTF8.GetBytes("mysecuritystring")),
        ValidIssuer = "https://www.packtpub.com",
        ValidAudience = "Minimal APIs Client"
    };
});

In the preceding code, we added a lambda expression with which we defined the TokenValidationParameters object that contains the token validation rules. First of all, we checked the issuer signing key, that is, the signature of the token, as shown in the Generating a JWT bearer section, to verify that the JWT has not been tampered with. The security string that has been used to sign the token is required to perform this check, so we specify the same value (mysecuritystring) that we inserted during the login request.
在前面的代码中,我们添加了一个 lambda 表达式,我们用该表达式定义了包含令牌验证规则的 TokenValidationParameters 对象。首先,我们检查了颁发者的签名密钥,即 Token 的签名,如 生成 JWT bearer 部分所示,以验证 JWT 是否未被篡改。执行此检查需要用于对令牌进行签名的安全字符串,因此我们指定了在登录请求期间插入的相同值 (mysecuritystring)。

Then, we specify what valid values for the issuer and the audience of the token are. If the token has been emitted from a different issuer, or was intended for another audience, the validation fails. This is an important security check; we should be sure that the bearer has been issued by someone we expected to issue it and for the audience we want.
然后,我们指定令牌的颁发者和受众的有效值。如果令牌是从其他颁发者发出的,或者是针对其他受众的,则验证将失败。这是一项重要的安全检查;我们应该确保 Bearer 是由我们预期会颁发它的人签发的,并且是针对我们想要的受众。

Tip : As already pointed out, we should place the information used to work with the token in an external source, so that we can reference the correct values during token generation and validation, avoiding hardcoding them or writing their values twice.
提示 : 如前所述,我们应该将用于处理令牌的信息放在外部源中,以便我们可以在令牌生成和验证期间引用正确的值,避免对它们进行硬编码或重复写入它们的值。

We don’t need to specify that we also want to validate the token expiration because this check is automatically enabled. A clock skew is applied when validating the time to compensate for slight differences in clock time or to handle delays between the client request and the instant at which it is processed by the server. The default value is 5 minutes, which means that an expired token is considered valid for a 5-minute timeframe after its actual expiration. We can reduce the clock skew, or disable it, using the ClockSkew property of the TokenValidationParameters class.
我们不需要指定我们还要验证令牌过期,因为此检查是自动启用的。在验证时间时会应用时钟偏差 (clock skew),以补偿时钟时间的微小差异,或处理客户端请求与服务器处理请求的时刻之间的延迟。默认值为 5 分钟,这意味着过期的令牌在实际过期后的 5 分钟内仍被视为有效。我们可以使用 TokenValidationParameters 类的 ClockSkew 属性来减少或禁用时钟偏差。
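For example, to reject tokens the moment they expire, we could extend the configuration above like this (a sketch, not the chapter’s final code):
例如,要在令牌过期的那一刻立即拒绝它,我们可以像这样扩展上面的配置(这只是一个示意,并非本章的最终代码):

```csharp
options.TokenValidationParameters = new TokenValidationParameters
{
    // ...the same signing key, issuer, and audience as above...
    ClockSkew = TimeSpan.Zero  // no tolerance window after the exp instant
};
```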

Now, the minimal API has all the information to check the bearer token validity. In order to test whether everything works as expected, we need a way to tell Swagger how to send the token within a request, as we’ll see in the next section.
现在,最小 API 拥有检查持有者令牌有效性的所有信息。为了测试一切是否按预期工作,我们需要一种方法来告诉 Swagger 如何在请求中发送令牌,我们将在下一节中看到。

Adding JWT support to Swagger
向 Swagger 添加 JWT 支持

We have said that the bearer token is sent in the Authorization HTTP header of a request. If we want to use Swagger to verify the authentication system and test our protected endpoints, we need to update the configuration so that it will be able to include this header in the requests.
我们已经说过,持有者令牌是在请求的 Authorization HTTP 标头中发送的。如果我们想使用 Swagger 来验证身份验证系统并测试受保护的端点,我们需要更新配置,以便它能够在请求中包含此标头。

To perform this task, it is necessary to add a bit of code to the AddSwaggerGen() method:
要执行此任务,必须向 AddSwaggerGen() 方法添加一些代码:

var builder = WebApplication.CreateBuilder(args);
//...
builder.Services.AddSwaggerGen(options =>
{
    options.AddSecurityDefinition(JwtBearerDefaults.AuthenticationScheme, new OpenApiSecurityScheme
    {
        Type = SecuritySchemeType.ApiKey,
        In = ParameterLocation.Header,
        Name = HeaderNames.Authorization,
        Description = "Insert the token with the 'Bearer ' 
                       prefix"
    });
    options.AddSecurityRequirement(new
      OpenApiSecurityRequirement
    {
        {
            new OpenApiSecurityScheme
            {
                Reference = new OpenApiReference
                {
                    Type = ReferenceType.SecurityScheme,
                    Id = 
                     JwtBearerDefaults.AuthenticationScheme
                }
            },
            Array.Empty<string>()
        }
    });
});

In the preceding code, we defined how Swagger handles authentication. Using the AddSecurityDefinition() method, we described how our API is protected; we used an API key, which is the bearer token, in the header with the name Authorization. Then, with AddSecurityRequirement(), we specified that we have a security requirement for our endpoints, which means that the security information must be sent for every request.
在上面的代码中,我们定义了 Swagger 如何处理身份验证。使用 AddSecurityDefinition() 方法,我们描述了如何保护我们的 API;我们在标头中使用了名为 Authorization 的 API 密钥,即不记名令牌。然后,使用 AddSecurityRequirement(),我们指定了端点的安全要求,这意味着必须为每个请求发送安全信息。

After adding the preceding code, if we now run our application, the Swagger UI will contain something new.
添加上述代码后,如果我们现在运行应用程序,Swagger UI 将包含一些新内容。

Figure 8.3 – Swagger showing the authentication features
图 8.3 – Swagger 显示身份验证功能

Upon clicking the Authorize button or any of the padlock icons at the right of the endpoints, the following window will show up, allowing us to insert the bearer token:
单击 Authorize 按钮或端点右侧的任何挂锁图标后,将显示以下窗口,允许我们插入不记名令牌:

Figure 8.4 – The window that allows setting the bearer token
图 8.4 – 允许设置 bearer token 的窗口

The last thing to do is to insert the token in the Value textbox and confirm by clicking on Authorize. From now on, the specified bearer will be sent along with every request made with Swagger.
最后要做的是将令牌插入 Value 文本框中,然后单击 Authorize 进行确认。从现在开始,指定的 bearer 将与使用 Swagger 发出的每个请求一起发送。

We have finally completed all the required steps to add authentication support to minimal APIs. Now, it’s time to verify that everything works as expected. In the next section, we’ll perform some tests.
我们终于完成了向最小 API 添加身份验证支持所需的所有步骤。现在,是时候验证一切是否按预期工作了。在下一节中,我们将执行一些测试。

Testing authentication
测试身份验证

As described in the previous sections, if we call one of the protected endpoints, we get a 401 Unauthorized response. To verify that token authentication works, let’s call the login endpoint to get a token. After that, click on the Authorize button in Swagger and insert the obtained token, remembering the Bearer prefix. Now, we’ll get a 200 OK response, meaning that we are able to correctly invoke the endpoints that require authentication. We can also try changing a single character in the token to again get the 401 Unauthorized response, because in this case, the signature will not be the expected one, as described before. In the same way, if the token is formally valid but has expired, we will obtain a 401 response.
如前面部分所述,如果我们调用其中一个受保护的终端节点,则会收到 401 Unauthorized 响应。要验证令牌身份验证是否有效,让我们调用登录终端节点以获取令牌。之后,点击 Swagger 中的 Authorize 按钮并插入获取的令牌,记住 Bearer 前缀。现在,我们将收到 200 OK 响应,这意味着我们能够正确调用需要身份验证的终端节点。我们还可以尝试更改令牌中的单个字符以再次获得 401 Unauthorized 响应,因为在这种情况下,签名将不是预期的签名,如前所述。同理,如果 Token 形式上有效但已过期,我们将获得 401 响应。

As we have defined endpoints that can be reached only by authenticated users, a common requirement is to access user information within the corresponding route handlers. In Chapter 2, Exploring Minimal APIs and Their Advantages, we showed that minimal APIs provide a special binding that directly provides a ClaimsPrincipal object representing the logged user:
由于我们已经定义了只有经过身份验证的用户才能访问的端点,因此一个常见的要求是访问相应路由处理程序中的用户信息。在第 2 章 探索最小 API 及其优势中,我们展示了最小 API 提供了一个特殊的绑定,该绑定直接提供表示已记录用户的 ClaimsPrincipal 对象:

app.MapGet("/api/me", [Authorize] (ClaimsPrincipal user) => $"Logged username: {user.Identity.Name}");

The user parameter of the route handler is automatically filled with user information. In this example, we just get the name, which in turn is read from the token claims, but the object exposes many properties that allow us to work with authentication data. We can refer to the official documentation at https://docs.microsoft.com/dotnet/api/system.security.claims.claimsprincipal.identity for further details.
路由处理程序的 user 参数会自动填充用户信息。在此示例中,我们只获取 name,而 name 又是从 token 声明中读取的,但该对象公开了许多允许我们处理身份验证数据的属性。有关详细信息,请参阅 https://docs.microsoft.com/dotnet/api/system.security.claims.claimsprincipal.identity 上的官方文档。
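As a sketch of what else the object exposes, a hypothetical endpoint could list every claim carried by the token:
为了说明该对象还公开了哪些内容,一个假设的端点可以列出令牌携带的每个声明:

```csharp
// Hypothetical endpoint: dump all claims of the authenticated user.
app.MapGet("/api/me/claims", [Authorize] (ClaimsPrincipal user) =>
    user.Claims.Select(c => new { c.Type, c.Value }));
```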

This ends our overview of authentication. In the next section, we’ll see how to handle authorization.
我们对身份验证的概述到此结束。在下一节中,我们将了解如何处理授权。

Handling authorization – roles and policies
处理授权 – 角色和策略

Right after the authentication, there is the authorization step, which grants an authenticated user permission to do something. Minimal APIs provide the same authorization features as controller-based projects, based on the concepts of roles and policies.
在身份验证之后,立即执行授权步骤,该步骤授予经过身份验证的用户执行某些操作的权限。Minimal API 基于角色和策略的概念,提供与基于控制器的项目相同的授权功能。

When an identity is created, it may belong to one or more roles. For example, a user can belong to the Administrator role, while another can be part of two roles: User and Stakeholder. Typically, each user can perform only the operations that are allowed by their roles. Roles are just claims that are inserted in the JWT bearer upon authentication. As we’ll see in a moment, ASP.NET Core provides built-in support to verify whether a user belongs to a role.
创建身份时,它可能属于一个或多个角色。例如,一个用户可以属于 Administrator 角色,而另一个用户可以属于两个角色:User 和 Stakeholder。通常,每个用户只能执行其角色允许的操作。角色只是在身份验证时插入到 JWT 持有者中的声明。正如我们稍后将看到的,ASP.NET Core 提供了内置支持来验证用户是否属于某个角色。

While role-based authorization covers many scenarios, there are cases in which this kind of security isn’t enough because we need to apply more specific rules to check whether the user has the right to perform some activities. In such a situation, we can create custom policies that allow us to specify more detailed authorization requirements and even completely define the authorization logic based on our algorithms.
虽然基于角色的授权涵盖了许多场景,但在某些情况下,这种安全性是不够的,因为我们需要应用更具体的规则来检查用户是否有权执行某些活动。在这种情况下,我们可以创建自定义策略,允许我们指定更详细的授权要求,甚至根据我们的算法完全定义授权逻辑。

In the next sections, we’ll see how to manage both role-based and policy-based authorization in our APIs, so that we can cover all our requirements, that is, allowing access to certain endpoints only to users with specific roles or claims, or based on our custom logic.
在接下来的部分中,我们将了解如何在 API 中管理基于角色和基于策略的授权,以便我们可以满足所有要求,即仅允许具有特定角色或声明的用户访问某些终端节点,或者允许基于我们的自定义逻辑访问某些终端节点。

Handling role-based authorization
处理基于角色的授权

As already introduced, roles are claims. This means that they must be inserted in the JWT bearer token upon authentication, just like any other claims:
如前所述,角色是声明。这意味着,在身份验证时,必须将它们插入到 JWT 不记名令牌中,就像任何其他声明一样:

app.MapPost("/api/auth/login", (LoginRequest request) =>
{
    if (request.Username == "marco" && request.Password == "P@$$w0rd")
    {
        var claims = new List<Claim>()
        {
            new(ClaimTypes.Name, request.Username),
            new(ClaimTypes.Role, "Administrator"),
            new(ClaimTypes.Role, "User")
        };

    //...
}

In this example, we statically add two claims with name ClaimTypes.Role: Administrator and User. As said in the previous sections, in a real-world application, these values typically come from a complete user management system built, for example, with ASP.NET Core Identity.
在此示例中,我们静态添加两个名称为 ClaimTypes.Role 的声明:Administrator 和 User。如前几节所述,在实际应用程序中,这些值通常来自一个完整的用户管理系统,例如,使用 ASP.NET Core Identity 构建。
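The elided part of the handler builds the JWT from these claims. A possible completion is sketched below, assuming the same issuer, audience, and symmetric key registered in the bearer options (the key string here is purely illustrative):

处理程序中省略的部分使用这些声明构建 JWT。下面是一个可能的补全示意,假设使用与 bearer 选项中注册的相同的颁发者、受众和对称密钥(此处的密钥字符串仅作示意):

```csharp
var securityKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes("mysecuritystring"));
var credentials = new SigningCredentials(securityKey, SecurityAlgorithms.HmacSha256);

var jwt = new JwtSecurityToken(
    issuer: "https://www.packtpub.com",
    audience: "Minimal APIs Client",
    claims: claims,
    expires: DateTime.UtcNow.AddHours(1),
    signingCredentials: credentials);

return Results.Ok(new { AccessToken = new JwtSecurityTokenHandler().WriteToken(jwt) });
```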

As in all the other claims, roles are inserted in the JWT bearer. If now we try to invoke the login endpoint, we’ll notice that the token is longer because it contains a lot of information, which we can verify using the https://jwt.ms site again, as follows:
与所有其他声明一样,角色也插入到 JWT 持有者中。如果现在我们尝试调用登录端点,我们会注意到令牌更长,因为它包含大量信息,我们可以再次使用 https://jwt.ms 站点验证这些信息,如下所示:

{
  "alg": "HS256",
  "typ": "JWT"
}.{
  "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": "marco",
  "http://schemas.microsoft.com/ws/2008/06/identity/claims/role": [
    "Administrator",
    "User"
  ],
  "exp": 1644755166,
  "iss": "https://www.packtpub.com",
  "aud": "Minimal APIs Client"
}.[Signature]

In order to restrict access to a particular endpoint only for users that belong to a given role, we need to specify this role as an argument in the Authorize attribute or the RequireAuthorization() method:
为了限制仅属于给定角色的用户访问特定端点,我们需要将此角色指定为 Authorize 属性或 RequireAuthorization() 方法中的参数:

app.MapGet("/api/admin-attribute-protected", [Authorize(Roles = "Administrator")] () => { });
app.MapGet("/api/admin-method-protected", () => { })
.RequireAuthorization(new AuthorizeAttribute { Roles = "Administrator" });

In this way, only users who are assigned the Administrator role can access the endpoints. We can also specify multiple roles, separating them with a comma: the user will be authorized if they have at least one of the specified roles.
这样,只有分配了 Administrator 角色的用户才能访问终端节点。我们还可以指定多个角色,用逗号分隔:如果用户至少拥有一个指定的角色,则用户将被授权。

Important Note : Role names are case sensitive.
重要提示 : 角色名称区分大小写。
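For example, to allow users with either of the two roles we added at login (the route names here are illustrative):

例如,要允许拥有我们在登录时添加的两个角色之一的用户访问(此处的路由名称仅作示意):

```csharp
app.MapGet("/api/admin-or-user", [Authorize(Roles = "Administrator,User")] () => { });

// The same rule with the fluent API:
app.MapGet("/api/admin-or-user-method", () => { })
    .RequireAuthorization(new AuthorizeAttribute { Roles = "Administrator,User" });
```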

Now suppose we have the following endpoint:
现在假设我们有以下端点:

app.MapGet("/api/stakeholder-protected", [Authorize(Roles = "Stakeholder")] () => { });

This method can only be consumed by a user who is assigned the Stakeholder role. However, in our example, this role isn’t assigned. So, if we use the previous bearer token and try to invoke this endpoint, of course, we’ll get an error. But in this case, it won’t be 401 Unauthorized, but rather 403 Forbidden. We see this behavior because the user is actually authenticated (meaning the token is valid, so no 401 error), but they don’t have the authorization to execute the method, so access is forbidden. In other words, authentication errors and authorization errors lead to different HTTP status codes.
此方法只能由分配了 Stakeholder 角色的用户使用。但是,在我们的示例中,未分配此角色。因此,如果我们使用以前的 bearer token 并尝试调用此 endpoint,我们当然会收到错误。但在这种情况下,它不会是 401 Unauthorized,而是 403 Forbidden。我们看到这种行为是因为用户实际上已经过身份验证(意味着令牌有效,因此没有 401 错误),但他们没有执行该方法的授权,因此禁止访问。换句话说,身份验证错误和授权错误会导致不同的 HTTP 状态代码。

There is another important scenario that involves roles. Sometimes, we don’t need to restrict endpoint access at all but need to adapt the behavior of the handler according to the specific user role, such as when retrieving only a certain type of information. In this case, we can use the IsInRole() method, which is available on the ClaimsPrincipal object:
还有另一个涉及角色的重要方案。有时,我们根本不需要限制端点访问,但需要根据特定的用户角色来调整处理程序的行为,例如当只检索某种类型的信息时。在这种情况下,我们可以使用 IsInRole() 方法,该方法在 ClaimsPrincipal 对象上可用:

app.MapGet("/api/role-check", [Authorize] (ClaimsPrincipal user) =>
{
    if (user.IsInRole("Administrator"))
    {
        return "User is an Administrator";
    }
    return "This is a normal user";
});

In this endpoint, we only use the Authorize attribute to check whether the user is authenticated or not. Then, in the route handler, we check whether the user has the Administrator role. If yes, we just return a message, but we can imagine that administrators can retrieve all the available information, while normal users get only a subset, based on the values of the information itself.
在此终端节点中,我们只使用 Authorize 属性来检查用户是否经过身份验证。然后,在路由处理程序中,我们检查用户是否具有 Administrator 角色。如果是,我们只返回一条消息,但我们可以想象管理员可以检索所有可用信息,而普通用户只能根据信息本身的值获得一个子集。

As we have seen, with role-based authorization, we can perform different types of authorization checks in our endpoints, to cover many scenarios. However, this approach cannot handle all situations. If roles aren’t enough, we need to use authorization based on policies, which we will discuss in the next section.
正如我们所看到的,通过基于角色的授权,我们可以在端点中执行不同类型的授权检查,以涵盖许多场景。但是,此方法无法处理所有情况。如果角色还不够,我们需要使用基于策略的授权,我们将在下一节中讨论。

Applying policy-based authorization
应用基于策略的授权
Policies are a more general way to define authorization rules. Role-based authorization can be considered a specific policy authorization that involves a roles check. We typically use policies when we need to handle more complex scenarios.
策略是定义授权规则的更通用方法。基于角色的授权可被视为涉及角色检查的特定策略授权。当我们需要处理更复杂的场景时,我们通常会使用策略。

This kind of authorization requires two steps:
这种授权需要两个步骤:

  1. Defining a policy with a rule set
    使用规则集定义策略
  2. Applying a certain policy on the endpoints
    在端点上应用特定策略

Policies are added in the context of the AddAuthorization() method, which we saw in the previous section, Protecting a minimal API. Each policy has a unique name, which is used to later reference it, and a set of rules, which are typically described in a fluent manner.
策略是在 AddAuthorization() 方法的上下文中添加的,我们在上一节 保护最小 API 中看到了。每个策略都有一个唯一的名称(用于以后引用它)和一组规则,这些规则通常以流畅的方式进行描述。

We can use policies when role authorization is not enough. Suppose that the bearer token also contains the ID of the tenant to which the user belongs:
当角色授权不足时,我们可以使用策略。假设 bearer token 还包含用户所属租户的 ID:

var claims = new List<Claim>()
{
    // ...
    new("tenant-id", "42")
};

Again, in a real-world scenario, this value could come from a database that stores the properties of the user. Suppose that we want to only allow users who belong to a particular tenant to reach an endpoint. As tenant-id is a custom claim, ASP.NET Core doesn’t know how to use it to enforce authorization. So, we can’t use the solutions shown earlier. We need to define a custom policy with the corresponding rule:
同样,在实际方案中,此值可能来自存储用户属性的数据库。假设我们只想允许属于特定租户的用户访问终端节点。由于 tenant-id 是一个自定义声明,因此 ASP.NET Core 不知道如何使用它来强制实施授权。因此,我们不能使用前面显示的解决方案。我们需要定义一个带有相应规则的自定义策略:

builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("Tenant42", policy =>
    {
        policy.RequireClaim("tenant-id", "42");
    });
});

In the preceding code, we created a policy named Tenant42, which requires that the token contains the tenant-id claim with the value 42. The policy variable is an instance of AuthorizationPolicyBuilder and exposes methods that allow us to fluently specify the authorization rules; we can specify that a policy requires certain users, roles, and claims to be satisfied. We can also chain multiple requirements in the same policy, writing, for example, something such as policy.RequireRole("Administrator").RequireClaim("tenant-id"). The full list of methods is available on the documentation page at https://docs.microsoft.com/dotnet/api/microsoft.aspnetcore.authorization.authorizationpolicybuilder.
在上面的代码中,我们创建了一个名为 Tenant42 的策略,该策略要求令牌包含值为 42 的 tenant-id 声明。policy 变量是 AuthorizationPolicyBuilder 的一个实例,它公开了允许我们流畅地指定授权规则的方法;我们可以指定策略要求满足某些用户、角色和声明。我们还可以在同一个策略中链接多个要求,例如编写 policy.RequireRole("Administrator").RequireClaim("tenant-id") 这样的代码。完整的方法列表可在 https://docs.microsoft.com/dotnet/api/microsoft.aspnetcore.authorization.authorizationpolicybuilder 的文档页面上找到。
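As a sketch of the chaining mentioned above, a second policy (the name AdminTenant is our own) could combine a role check and a claim check:

作为上面提到的链式写法的示意,第二个策略(AdminTenant 这个名称是我们自己假设的)可以将角色检查和声明检查组合在一起:

```csharp
builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("AdminTenant", policy =>
    {
        // Requires the Administrator role AND the presence of a tenant-id claim.
        policy.RequireRole("Administrator").RequireClaim("tenant-id");
    });
});
```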

Then, in the method we want to protect, we have to specify the policy name, as usual with the Authorize attribute or the RequireAuthorization() method:
然后,在我们想要保护的方法中,我们必须指定策略名称,就像通常使用 Authorize 属性或 RequireAuthorization() 方法一样:

app.MapGet("/api/policy-attribute-protected", [Authorize(Policy = "Tenant42")] () => { });
app.MapGet("/api/policy-method-protected", () => { })
.RequireAuthorization("Tenant42");

If we try to execute these preceding endpoints with a token that doesn’t have the tenant-id claim, or its value isn’t 42, we get a 403 Forbidden result, as happened with the role check.
如果我们尝试使用没有 tenant-id 声明或其值不是 42 的令牌执行这些前面的终结点,则会收到 403 Forbidden 结果,就像角色检查一样。

There are scenarios in which declaring a list of allowed roles and claims isn’t enough: for example, we would need to perform more complex checks or verify authorization based on dynamic parameters. In these cases, we can use the so-called policy requirements, which comprise a collection of authorization rules for which we can provide custom verification logic.
在某些情况下,声明允许的角色和声明列表是不够的:例如,我们需要执行更复杂的检查或根据动态参数验证授权。在这些情况下,我们可以使用所谓的策略要求,它包含一组授权规则,我们可以为其提供自定义验证逻辑。

To adopt this solution, we need two objects:
要采用此解决方案,我们需要两个对象:

• A requirement class that implements the IAuthorizationRequirement interface and defines the requirement we want to manage
实现 IAuthorizationRequirement 接口并定义我们要管理的要求的要求类

• A handler class that inherits from AuthorizationHandler and contains the logic to verify the requirement
一个从 AuthorizationHandler 继承并包含验证要求的逻辑的处理程序类

Let’s suppose we don’t want users who don’t belong to the Administrator role to access certain endpoints during a maintenance time window. This is a perfectly valid authorization rule, but we cannot implement it using the solutions we have seen so far. The rule involves a condition that considers the current time, so the policy cannot be statically defined.
假设我们不希望不属于 Administrator 角色的用户在维护时段内访问某些终端节点。这是一个完全有效的授权规则,但使用我们目前看到的解决方案无法实现它。该规则涉及考虑当前时间的条件,因此不能静态定义策略。

So, we start by creating a custom requirement:
因此,我们首先创建自定义需求:

public class MaintenanceTimeRequirement : IAuthorizationRequirement
{
    public TimeOnly StartTime { get; init; }
    public TimeOnly EndTime { get; init; }
}

The requirement contains the start and end times of the maintenance window. During this interval, we only want administrators to be able to operate.
该要求包含维护时段的开始和结束时间。在此间隔期间,我们只希望管理员能够进行操作。

Note : TimeOnly is a new data type that was introduced with .NET 6 and allows us to store only the time of the day (and not the date). More information is available at https://docs.microsoft.com/dotnet/api/system.timeonly.
注意 : TimeOnly 是 .NET 6 中引入的一种新数据类型,它允许我们只存储一天中的时间(而不是日期)。有关更多信息,请访问 https://docs.microsoft.com/dotnet/api/system.timeonly

Note that the IAuthorizationRequirement interface is just a placeholder. It doesn’t contain any method or property to be implemented; it serves only to identify that the class is a requirement. In other words, if we don’t need any additional information for the requirement, we can create a class that implements IAuthorizationRequirement but actually has no content at all.
请注意,IAuthorizationRequirement 接口只是一个占位符。它不包含任何需要实现的方法或属性;它仅用于标识该类是一个要求(requirement)。换句话说,如果该要求不需要任何附加信息,我们可以创建一个实现 IAuthorizationRequirement 但实际上没有任何内容的类。
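So, a requirement that carries no extra data (the name MaintenanceModeRequirement is hypothetical) can be reduced to an empty marker class:

因此,一个不携带任何附加数据的要求(MaintenanceModeRequirement 这个名称是假设的)可以简化为一个空的标记类:

```csharp
// An empty marker: no members, it only identifies the requirement.
public class MaintenanceModeRequirement : IAuthorizationRequirement
{
}
```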

This requirement must be enforced, so it is necessary to create the corresponding handler:
必须强制执行此要求,因此必须创建相应的处理程序:

public class MaintenanceTimeAuthorizationHandler
    : AuthorizationHandler<MaintenanceTimeRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        MaintenanceTimeRequirement requirement)
    {
        var isAuthorized = true;
        if (!context.User.IsInRole("Administrator"))
        {
            var time = TimeOnly.FromDateTime(DateTime.Now);
            if (time >= requirement.StartTime && time < requirement.EndTime)
            {
                isAuthorized = false;
            }
        }
        if (isAuthorized)
        {
            context.Succeed(requirement);
        }
        return Task.CompletedTask;
    }
}

Our handler inherits from AuthorizationHandler<MaintenanceTimeRequirement>, so we need to override the HandleRequirementAsync() method to verify the requirement, using the AuthorizationHandlerContext parameter, which contains a reference to the current user. As said at the beginning, if the user is not assigned the Administrator role, we check whether the current time falls in the maintenance window. If so, the user doesn’t have the right to access.
我们的处理程序继承自 AuthorizationHandler<MaintenanceTimeRequirement>,因此我们需要使用 AuthorizationHandlerContext 参数(包含对当前用户的引用)重写 HandleRequirementAsync() 方法来验证需求。如开头所述,如果未为用户分配 Administrator 角色,我们将检查当前时间是否在维护时段内。如果是这样,则用户无权访问。

At the end, if the isAuthorized variable is true, it means that the authorization can be granted, so we call the Succeed() method on the context object, passing the requirement that we want to validate. Otherwise, we don’t invoke any method on the context, meaning that the requirement hasn’t been verified.
最后,如果 isAuthorized 变量为 true,则表示可以授予授权,因此我们在上下文对象上调用 Succeed() 方法,传递我们要验证的要求。否则,我们不会在上下文中调用任何方法,这意味着需求尚未经过验证。
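Note that AuthorizationHandlerContext also exposes a Fail() method, which makes the whole policy evaluation fail regardless of other handlers. As a sketch, the final part of the handler could make the failure explicit:

请注意,AuthorizationHandlerContext 还公开了一个 Fail() 方法,调用它会使整个策略评估失败,而不管其他处理程序的结果如何。作为示意,处理程序的最后部分可以显式地表达失败:

```csharp
if (isAuthorized)
{
    context.Succeed(requirement);
}
else
{
    // Optional: explicitly fail the evaluation, overriding any other
    // handler that might succeed for the same requirement.
    context.Fail();
}
```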

We haven’t yet finished implementing the custom policy. We still have to define the policy and register the handler in the service provider:
我们尚未完成自定义策略的实施。我们仍然需要定义策略并在服务提供者中注册处理程序:

builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("TimedAccessPolicy", policy =>
    {
        policy.Requirements.Add(new MaintenanceTimeRequirement
        {
            StartTime = new TimeOnly(0, 0, 0),
            EndTime = new TimeOnly(4, 0, 0)
        });
    });
});
builder.Services.AddScoped<IAuthorizationHandler, MaintenanceTimeAuthorizationHandler>();

In the preceding code, we defined a maintenance time window from midnight till 4:00 in the morning. Then, we registered the handler as an implementation of the IAuthorizationHandler interface, which in turn is implemented by the AuthorizationHandler class.
在上面的代码中,我们定义了从午夜到凌晨 4:00 的维护时间窗口。然后,我们将处理程序注册为 IAuthorizationHandler 接口的实现,而该接口又由 AuthorizationHandler 类实现。

Now that we have everything in place, we can apply the policy to our endpoints:
现在我们已经准备好了一切,我们可以将策略应用于我们的端点:

app.MapGet("/api/custom-policy-protected", [Authorize(Policy = "TimedAccessPolicy")] () => { });

When we try to reach this endpoint, ASP.NET Core will check the corresponding policy, find that it contains a requirement, and scan all the registrations of the IAuthorizationHandler interface to see whether one of them is able to handle the requirement. Then, the handler will be invoked, and the result will be used to determine whether the user has the right to access the route. If the policy isn’t verified, we’ll get a 403 Forbidden response.
当我们尝试访问此终端节点时,ASP.NET Core 将检查相应的策略,发现它包含一个要求,并扫描 IAuthorizationHandler 接口的所有注册,以查看是否有能够处理该要求的处理程序。然后,将调用该处理程序,其结果将用于确定用户是否有权访问该路由。如果策略未通过验证,我们将收到 403 Forbidden 响应。

We have shown how powerful policies are, but there is more. We can also use them to define global rules that are automatically applied to all endpoints, using the concepts of default and fallback policies, as we’ll see in the next section.
我们已经展示了政策的强大之处,但还有更多。我们还可以使用 default 和 fallback 策略的概念,使用它们来定义自动应用于所有端点的全局规则,我们将在下一节中看到。

Using default and fallback policies
使用 default 和 fallback 策略

Default and fallback policies are useful when we want to define global rules that must be automatically applied. In fact, when we use the Authorize attribute or the RequireAuthorization() method, without any other parameter, we implicitly refer to the default policy defined by ASP.NET Core, which is set to require an authenticated user.
当我们想要定义必须自动应用的全局规则时,Default 和 fallback 策略非常有用。事实上,当我们使用 Authorize 属性或 RequireAuthorization() 方法时,如果没有任何其他参数,我们隐式引用了 ASP.NET Core 定义的默认策略,该策略设置为需要经过身份验证的用户。

If we want to use different conditions by default, we just need to redefine the DefaultPolicy property, which is available in the context of the AddAuthorization() method:
如果我们想默认使用不同的条件,我们只需要重新定义 DefaultPolicy 属性,该属性在 AddAuthorization() 方法的上下文中可用:

builder.Services.AddAuthorization(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .RequireClaim("tenant-id")
        .Build();

    options.DefaultPolicy = policy;
});

We use AuthorizationPolicyBuilder to define all the security requirements, then we set it as a default policy. In this way, even if we don’t specify a custom policy in the Authorize attribute or the RequireAuthorization() method, the system will always verify whether the user is authenticated, and the bearer contains the tenant-id claim. Of course, we can override this default behavior by just specifying roles or policy names in the authorization attribute or method.
我们使用 AuthorizationPolicyBuilder 定义所有安全要求,然后将其设置为默认策略。这样,即使我们没有在 Authorize 属性或 RequireAuthorization() 方法中指定自定义策略,系统也将始终验证用户是否经过身份验证,并且持有者包含 tenant-id 声明。当然,我们可以通过在 authorization 属性或方法中指定角色或策略名称来覆盖此默认行为。
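With this configuration in place, a bare RequireAuthorization() call (the route name below is illustrative) now enforces both conditions:

有了这个配置后,一个不带参数的 RequireAuthorization() 调用(下面的路由名称仅作示意)现在会同时强制执行这两个条件:

```csharp
// Requires an authenticated user whose token also contains a tenant-id claim,
// because that is what the default policy defined above demands.
app.MapGet("/api/default-policy", () => { })
    .RequireAuthorization();
```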

A fallback policy, on the other hand, is the policy that is applied when there is no authorization information on the endpoints. It is useful, for example, when we want all our endpoints to be automatically protected, even if we forget to specify the Authorize attribute or just don’t want to repeat the attribute for each handler. Let us try and understand this using the following code:
另一方面,回退策略是在终端节点上没有授权信息时应用的策略。例如,当我们希望自动保护所有端点时,即使我们忘记指定 Authorize 属性或只是不想为每个处理程序重复该属性,它也很有用。让我们尝试使用以下代码来理解这一点:

builder.Services.AddAuthorization(options =>
{
    options.FallbackPolicy = options.DefaultPolicy;
});

In the preceding code, FallbackPolicy becomes equal to DefaultPolicy. We have said that the default policy requires that the user be authenticated, so the result of this code is that now, all the endpoints automatically need authentication, even if we don’t explicitly protect them.
在上面的代码中,FallbackPolicy 等于 DefaultPolicy。我们已经说过,默认策略要求对用户进行身份验证,因此此代码的结果是,现在,所有端点都自动需要身份验证,即使我们没有明确保护它们。

This is a typical solution to adopt when most of our endpoints have restricted access. We don’t need to specify the Authorize attribute or use the RequireAuthorization() method anymore. In other words, now all our endpoints are protected by default.
当我们的大多数端点都限制访问时,这是一种典型的解决方案。我们不再需要指定 Authorize 属性或使用 RequireAuthorization() 方法。换句话说,现在我们所有的端点都默认受到保护。

If we decide to use this approach, but a bunch of endpoints need public access, such as the login endpoint, which everyone should be able to invoke, we can use the AllowAnonymous attribute or the AllowAnonymous() method:
如果我们决定使用这种方法,但有一些端点需要公开访问,例如每个人都应该能够调用的登录端点,我们可以使用 AllowAnonymous 属性或 AllowAnonymous() 方法:

app.MapPost("/api/auth/login", [AllowAnonymous] (LoginRequest request) => { });
// OR
app.MapPost("/api/auth/login", (LoginRequest request) => { })
.AllowAnonymous();

As the name implies, the preceding code will bypass all authorization checks for the endpoint, including the default and fallback authorization policies.
顾名思义,前面的代码将绕过终端节点的所有授权检查,包括默认和回退授权策略。

To deepen our knowledge of policy-based authorization, we can refer to the official documentation at https://docs.microsoft.com/aspnet/core/security/authorization/policies.
为了加深我们对基于策略的授权的了解,我们可以参考 https://docs.microsoft.com/aspnet/core/security/authorization/policies 的官方文档。

Summary
总结

Knowing how authentication and authorization work in minimal APIs is fundamental to developing secure applications. Using JWT bearer authentication, roles, and policies, we can define even complex authorization scenarios, with the ability to use both standard and custom rules.
了解身份验证和授权在最小 API 中的工作原理是开发安全应用程序的基础。使用 JWT 不记名身份验证、角色和策略,我们甚至可以定义复杂的授权场景,并能够使用标准和自定义规则。

In this chapter, we have introduced basic concepts to make a service secure, but there is much more to talk about, especially regarding ASP.NET Core Identity: an API that supports login functionality and allows managing users, passwords, profile data, roles, claims, and more. We can look further into this topic by checking out the official documentation, which is available at https://docs.microsoft.com/aspnet/core/security/authentication/identity.
在本章中,我们介绍了确保服务安全的基本概念,但还有更多内容要讨论,尤其是关于 ASP.NET Core Identity:一个支持登录功能并允许管理用户、密码、配置文件数据、角色、声明等的 API。我们可以通过查看官方文档来进一步了解这个主题,该文档可在 https://docs.microsoft.com/aspnet/core/security/authentication/identity 上获得。

In the next chapter, we will see how to add multilanguage support to our minimal APIs and how to correctly handle applications that work with different date formats, time zones, and so on.
在下一章中,我们将了解如何为我们的最小 API 添加多语言支持,以及如何正确处理使用不同日期格式、时区等的应用程序。

9 Leveraging Globalization and Localization

9 利用全球化和本地化

When developing an application, it is important to think about multi-language support; a multilingual application allows for a wider audience reach. This is also true for web APIs: messages returned by endpoints (for example, validation errors) should be localized, and the service should be able to handle different cultures and deal with time zones. In this chapter of the book, we will talk about globalization and localization, and we will explain what features are available in minimal APIs to work with these concepts. The information and samples that will be provided will guide us when adding multi-language support to our services and correctly handling all the related behaviors so that we will be able to develop global applications.
在开发应用程序时,考虑多语言支持非常重要;多语言应用程序允许更广泛的受众范围。Web API 也是如此:端点返回的消息(例如,验证错误)应该本地化,并且服务应该能够处理不同的区域性并处理时区。在本书的这一章中,我们将讨论全球化和本地化,并将解释最小 API 中有哪些功能可用于处理这些概念。将提供的信息和示例将指导我们向我们的服务添加多语言支持并正确处理所有相关行为,以便我们能够开发全球应用程序。

In this chapter, we will be covering the following topics:
在本章中,我们将介绍以下主题:

• Introducing globalization and localization
全球化和本地化简介

• Localizing a minimal API application
本地化最小 API 应用程序

• Using resource files
使用资源文件

• Integrating localization in validation frameworks
将本地化集成到验证框架中

• Adding UTC support to a globalized minimal API
向全球化的最小 API 添加 UTC 支持

Technical requirements
技术要求

To follow the descriptions in this chapter, you will need to create an ASP.NET Core 6.0 Web API application. Refer to the Technical requirements section in Chapter 1, Introduction to Minimal APIs, for instructions on how to do so.
要按照本章中的描述进行操作,您需要创建一个 ASP.NET Core 6.0 Web API 应用程序。有关如何执行此操作的说明,请参阅第 1 章 最小 API 简介中的技术要求部分。

If you’re using your console, shell, or Bash terminal to create the API, remember to change your working directory to the current chapter number (Chapter09).
如果您使用控制台、shell 或 Bash 终端创建 API,请记住将工作目录更改为当前章节编号 (Chapter09)。

All the code samples in this chapter can be found in the GitHub repository for this book at https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter09.
本章中的所有代码示例都可以在本书的 GitHub 存储库中找到,网址为 https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter09

Introducing globalization and localization
全球化和本地化简介

When thinking about internationalization, we must deal with globalization and localization, two terms that seem to refer to the same concepts but actually involve different areas. Globalization is the task of designing applications that can manage and support different cultures. Localization is the process of adapting an application to a particular culture, for example, by providing translated resources for each culture that will be supported.
在考虑国际化时,我们必须处理全球化和本地化,这两个术语似乎指的是相同的概念,但实际上涉及不同的领域。全球化的任务是设计能够管理和支持不同区域性的应用程序。本地化是使应用程序适应特定区域性的过程,例如,为将要支持的每种区域性提供翻译资源。

Note : The terms internationalization, globalization, and localization are often abbreviated to I18N, G11N, and L10N, respectively.
注意 : 术语国际化、全球化和本地化通常分别缩写为 I18N、G11N 和 L10N。

As with all the other features that we have already introduced in the previous chapters, globalization and localization can be handled by the corresponding middleware and services that ASP.NET Core provides and work in the same way in minimal APIs and controller-based projects.
与我们在前几章中介绍的所有其他功能一样,全球化和本地化可以由 ASP.NET Core 提供的相应中间件和服务处理,并且在最小的 API 和基于控制器的项目中以相同的方式工作。

You can find a great introduction to globalization and localization in the official documentation available at https://docs.microsoft.com/dotnet/core/extensions/globalization and https://docs.microsoft.com/dotnet/core/extensions/localization, respectively. In the rest of the chapter, we will focus on how to add support for these features in a minimal API project; in this way, we’ll introduce some important concepts and explain how to leverage globalization and localization in ASP.NET Core.
您可以分别在 https://docs.microsoft.com/dotnet/core/extensions/globalizationhttps://docs.microsoft.com/dotnet/core/extensions/localization 上提供的官方文档中找到有关全球化和本地化的精彩介绍。在本章的其余部分,我们将重点介绍如何在最小 API 项目中添加对这些功能的支持;通过这种方式,我们将介绍一些重要的概念,并解释如何在 ASP.NET Core 中利用全球化和本地化。

Localizing a minimal API application
本地化最小 API 应用程序

To enable localization within a minimal API application, let us go through the following steps:
要在最小 API 应用程序中启用本地化,让我们执行以下步骤:

  1. The first step to making an application localizable is to specify the supported cultures by setting the corresponding options, as follows:
    使应用程序可本地化的第一步是通过设置相应的选项来指定受支持的区域性,如下所示:

    var builder = WebApplication.CreateBuilder(args);
    //...
    var supportedCultures = new CultureInfo[] { new("en"), new("it"), new("fr") };

    builder.Services.Configure<RequestLocalizationOptions>(options =>
    {
        options.SupportedCultures = supportedCultures;
        options.SupportedUICultures = supportedCultures;
        options.DefaultRequestCulture = new RequestCulture(supportedCultures.First());
    });

In our example, we want to support three cultures – English, Italian, and French – so, we create an array of CultureInfo objects.
在我们的示例中,我们希望支持三种区域性 – 英语、意大利语和法语 – 因此,我们创建了一个 CultureInfo 对象数组。

We’re defining neutral cultures, that is, cultures that have a language but are not associated with a country or region. We could also use specific cultures, such as en-US or en-GB, to represent the cultures of a particular region: for example, en-US would refer to the English culture prevalent in the United States, while en-GB would refer to the English culture prevalent in the United Kingdom. This difference is important because, depending on the scenario, we may need to use country-specific information to correctly implement localization. For example, if we want to show a date, we have to know that the date format in the United States is M/d/yyyy, while in the United Kingdom, it is dd/MM/yyyy. So, in this case, it becomes fundamental to work with specific cultures. We also use specific cultures if we need to support language differences across cultures. For example, a particular word may have different spellings depending on the country (e.g., color in the US versus colour in the UK). That said, for our scenario of minimal APIs, working with neutral cultures is just fine.
我们定义的是非特定区域性(neutral cultures),即具有某种语言但不与特定国家或地区关联的区域性。我们还可以使用特定区域性(如 en-US 或 en-GB)来表示特定地区的区域性:例如,en-US 表示美国使用的英语区域性,而 en-GB 表示英国使用的英语区域性。这种差异很重要,因为根据具体情况,我们可能需要使用特定于国家/地区的信息来正确实现本地化。例如,如果我们想显示一个日期,我们必须知道美国的日期格式是 M/d/yyyy,而在英国是 dd/MM/yyyy。因此,在这种情况下,使用特定区域性就变得至关重要。如果我们需要支持不同区域性之间的语言差异,我们也会使用特定区域性。例如,特定单词在不同国家/地区可能有不同的拼写(例如,美国的 color 与英国的 colour)。也就是说,对于我们的最小 API 场景,使用非特定区域性就足够了。
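The date-format difference mentioned above is easy to verify in isolation (this snippet only needs System.Globalization):

上面提到的日期格式差异很容易单独验证(此代码段只需要 System.Globalization):

```csharp
using System.Globalization;

var date = new DateTime(2022, 3, 14);
Console.WriteLine(date.ToString("d", new CultureInfo("en-US")));  // 3/14/2022
Console.WriteLine(date.ToString("d", new CultureInfo("en-GB")));  // 14/03/2022
```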

  2. Next, we configure RequestLocalizationOptions, setting the cultures and specifying the default one to use if no information about the culture is provided. We specify both the supported cultures and the supported UI cultures:
    接下来,我们配置 RequestLocalizationOptions,设置区域性并指定在未提供有关区域性的信息时要使用的默认区域性。我们指定了受支持的区域性和受支持的 UI 区域性:

• The supported cultures control the output of culture-dependent functions, such as date, time, and number format.
支持的区域性控制依赖于区域性的函数(如日期、时间和数字格式)的输出。

• The supported UI cultures are used to choose which translated strings (from .resx files) are searched for. We will talk about .resx files later in this chapter.
支持的 UI 区域性用于选择要搜索的已翻译字符串(从 .resx 文件)。我们将在本章后面讨论 .resx 文件。

In a typical application, cultures and UI cultures are set to the same values, but of course, we can use different options if needed.
在典型的应用程序中,区域性和 UI 区域性设置为相同的值,但当然,如果需要,我们可以使用不同的选项。

  3. Now that we have configured our service to support globalization, we need to add the localization middleware to the ASP.NET Core pipeline so it will be able to automatically set the culture of the request. Let us do so using the following code:
    现在我们已经将服务配置为支持全球化,我们需要将本地化中间件添加到 ASP.NET Core 管道中,以便它能够自动设置请求的区域性。让我们使用以下代码来做到这一点:

    var app = builder.Build();
    //...
    app.UseRequestLocalization();
    //...
    app.Run();

In the preceding code, with UseRequestLocalization(), we’re adding RequestLocalizationMiddleware to the ASP.NET Core pipeline to set the current culture of each request. This task is performed using a list of RequestCultureProvider that can read information about the culture from various sources. Default providers comprise the following:
在前面的代码中,我们使用 UseRequestLocalization() 将 RequestLocalizationMiddleware 添加到 ASP.NET Core 管道,以设置每个请求的当前区域性。此任务是使用 RequestCultureProvider 列表执行的,该列表可以从各种源读取有关区域性的信息。默认提供程序包括以下内容:

• QueryStringRequestCultureProvider: Searches for the culture and ui-culture query string parameters
• QueryStringRequestCultureProvider:搜索 culture 和 ui-culture 查询字符串参数

• CookieRequestCultureProvider: Uses the ASP.NET Core culture cookie (named .AspNetCore.Culture by default)
• CookieRequestCultureProvider:使用 ASP.NET Core 区域性 Cookie(默认名为 .AspNetCore.Culture)

• AcceptLanguageHeaderRequestCultureProvider: Reads the requested culture from the Accept-Language HTTP header
• AcceptLanguageHeaderRequestCultureProvider:从 Accept-Language HTTP 标头中读取请求的区域性

For each request, the system will try to use these providers in this exact order, until it finds the first one that can determine the culture. If the culture cannot be set, the one specified in the DefaultRequestCulture property of RequestLocalizationOptions will be used.
对于每个请求,系统将尝试按此确切顺序使用这些提供程序,直到找到可以确定区域性的第一个提供程序。如果无法设置区域性,则将使用 RequestLocalizationOptions 的 DefaultRequestCulture 属性中指定的区域性。

If necessary, it is also possible to change the order of the request culture providers or even define a custom provider to implement our own logic to determine the culture. More information on this topic is available at :
如有必要,还可以更改请求文化提供者的顺序,甚至定义自定义提供者来实现我们自己的逻辑来确定文化。有关此主题的更多信息,请访问:
https://docs.microsoft.com/aspnet/core/fundamentals/localization#use-a-custom-provider.
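As a sketch of that customization, we could insert a custom provider at the top of the list, for example one that reads a hypothetical X-Culture header (this header name is our own, not an ASP.NET Core standard):

作为这种自定义的示意,我们可以在列表开头插入一个自定义提供程序,例如一个读取假设的 X-Culture 标头的提供程序(此标头名称是我们自己假设的,并非 ASP.NET Core 标准):

```csharp
builder.Services.Configure<RequestLocalizationOptions>(options =>
{
    // ...supported cultures configured as shown earlier...
    options.RequestCultureProviders.Insert(0, new CustomRequestCultureProvider(context =>
    {
        var culture = context.Request.Headers["X-Culture"].FirstOrDefault();
        return Task.FromResult(culture is null ? null : new ProviderCultureResult(culture));
    }));
});
```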

Important note : The localization middleware must be inserted before any other middleware that might use the request culture.
重要提示 : 本地化中间件必须插入到可能使用请求区域性的任何其他中间件之前。

In the case of web APIs, whether using controller-based or minimal APIs, we usually set the request culture through the Accept-Language HTTP header. In the following section, we will see how to extend Swagger with the ability to add this header when trying to invoke methods.
对于 Web API,无论是使用基于控制器的 API 还是最小的 API,我们通常通过 Accept-Language HTTP 标头来设置请求文化。在下一节中,我们将看到如何扩展 Swagger,使其能够在尝试调用方法时添加此标头。

Adding globalization support to Swagger
向 Swagger 添加全球化支持

We want Swagger to provide us with a way to specify the Accept-Language HTTP header for each request so that we can test our globalized endpoints. Technically speaking, this means adding an operation filter to Swagger that will be able to automatically insert the language header, using the following code:
我们希望 Swagger 为我们提供一种方法来为每个请求指定 Accept-Language HTTP 标头,以便我们可以测试我们的全球化端点。从技术上讲,这意味着向 Swagger 添加一个操作过滤器,该过滤器将能够使用以下代码自动插入语言标头:

public class AcceptLanguageHeaderOperationFilter : IOperationFilter
{
    private readonly List<IOpenApiAny>? supportedLanguages;

    public AcceptLanguageHeaderOperationFilter(IOptions<RequestLocalizationOptions> requestLocalizationOptions)
    {
        supportedLanguages = requestLocalizationOptions.Value.SupportedCultures?
            .Select(c => new OpenApiString(c.TwoLetterISOLanguageName))
            .Cast<IOpenApiAny>()
            .ToList();
    }

    public void Apply(OpenApiOperation operation, OperationFilterContext context)
    {
        if (supportedLanguages?.Any() ?? false)
        {
            operation.Parameters ??= new List<OpenApiParameter>();

            operation.Parameters.Add(new OpenApiParameter
            {
                Name = HeaderNames.AcceptLanguage,
                In = ParameterLocation.Header,
                Required = false,
                Schema = new OpenApiSchema
                {
                    Type = "string",
                    Enum = supportedLanguages,
                    Default = supportedLanguages.First()
                }
            });
        }
    }
}

In the preceding code, AcceptLanguageHeaderOperationFilter takes the RequestLocalizationOptions object via dependency injection that we have defined at startup and extracts the supported languages in the format that Swagger expects from it. Then, in the Apply() method, we add a new OpenApiParameter that corresponds to the Accept-Language header. In particular, with the Schema.Enum property, we provide the list of supported languages using the values we have extracted in the constructor. This method is invoked for every operation (that is, every endpoint), meaning that the parameter will be automatically added to each of them.
在前面的代码中,AcceptLanguageHeaderOperationFilter 通过我们在启动时定义的依赖项注入获取 RequestLocalizationOptions 对象,并以 Swagger 期望的格式提取支持的语言。然后,在 Apply() 方法中,我们添加一个对应于 Accept-Language 标头的新 OpenApiParameter。具体而言,对于 Schema.Enum 属性,我们使用在构造函数中提取的值提供支持的语言列表。每个操作(即每个端点)都会调用此方法,这意味着参数将自动添加到每个操作中。

Now, we need to add the new filter to Swagger:
现在,我们需要将新过滤器添加到 Swagger:

var builder = WebApplication.CreateBuilder(args);
//...
builder.Services.AddSwaggerGen(options =>
{
     options.OperationFilter<AcceptLanguageHeaderOperationFilter>();
});

As we did with the preceding code, for every operation, Swagger will execute the filter, which in turn will add a parameter to specify the language of the request.
正如我们对前面的代码所做的那样,对于每个操作,Swagger 将执行过滤器,而过滤器又会添加一个参数来指定请求的语言。

So, let’s suppose we have the following endpoint:
因此,假设我们有以下端点:

app.MapGet("/culture", () => Thread.CurrentThread.CurrentCulture.DisplayName);

In the preceding handler, we just return the culture of the thread. This method takes no parameter; however, after adding the preceding filter, the Swagger UI will show the following:
在前面的处理程序中,我们只返回线程的区域性。此方法不带参数;但是,在添加上述筛选器后,Swagger UI 将显示以下内容:

Figure 9.1 – The Accept-Language header added to Swagger
图 9.1 – 添加到 Swagger 的 Accept-Language 标头

The operation filter has added a new parameter to the endpoint, allowing us to select the language from a dropdown. We can click the Try it out button to choose a value from the list and then click Execute to invoke the endpoint:
操作筛选器已向终端节点添加了一个新参数,允许我们从下拉列表中选择语言。我们可以单击 Try it out 按钮从列表中选择一个值,然后单击 Execute 以调用终端节点:

Figure 9.2 – The result of the execution with the Accept-Language HTTP header
图 9.2 – 使用 Accept-Language HTTP 标头执行的结果

This is the result of selecting it (Italian) as the request language: Swagger has added the Accept-Language HTTP header, which, in turn, has been used by ASP.NET Core to set the current culture. Then, in the end, we get and return the culture display name in the route handler.
这是选择 it(意大利语)作为请求语言的结果:Swagger 添加了 Accept-Language HTTP 标头,而 ASP.NET Core 又使用该标头来设置当前区域性。最后,我们在路由处理程序中获取并返回区域性显示名称。

This example shows us that we have correctly added globalization support to our minimal API. In the next section, we’ll go further and work with localization, starting by providing translated resources to callers based on the corresponding languages.
此示例向我们展示了我们已正确地将全球化支持添加到我们的最小 API 中。在下一节中,我们将进一步讨论本地化,首先根据相应的语言向调用者提供翻译后的资源。

Using resource files
使用资源文件

Our minimal API now supports globalization, so it can switch cultures based on the request. This means that we can provide localized messages to callers, for example, when communicating validation errors. This feature is based on the so-called resource files (.resx), a particular kind of XML file that contains key-value string pairs representing messages that must be localized.
我们的最小 API 现在支持全球化,因此它可以根据请求切换区域性。这意味着我们可以向调用者提供本地化消息,例如,在传达验证错误时。此功能基于所谓的资源文件 (.resx),这是一种特殊类型的 XML 文件,其中包含表示必须本地化的消息的键值字符串对。

Note : These resource files are exactly the same as they have been since the early versions of .NET.
注意 : 这些资源文件与自 .NET 早期版本以来完全相同。

Creating and working with resource files
创建和使用资源文件

With resource files, we can easily separate strings from code and group them by culture. Typically, resource files are put in a folder called Resources. To create a file of this kind using Visual Studio, let us go through the following steps:
使用资源文件,我们可以轻松地将字符串与代码分离,并按区域性对它们进行分组。通常,资源文件放在名为 Resources 的文件夹中。要使用 Visual Studio 创建此类文件,让我们执行以下步骤:

Important note : Unfortunately, Visual Studio Code does not provide support for handling .resx files. More information about this topic is available at https://github.com/dotnet/AspNetCore.Docs/issues/2501.
重要提示 : 遗憾的是,Visual Studio Code 不支持处理 .resx 文件。有关此主题的更多信息,请访问 https://github.com/dotnet/AspNetCore.Docs/issues/2501

  1. Right-click on the folder in Solution Explorer and then choose Add | New Item.
    右键单击“解决方案资源管理器”中的文件夹,然后选择“添加”|“新建项”。

  2. In the Add New Item dialog window, search for Resources, select the corresponding template, and name the file, for example, Messages.resx:
    在 Add New Item 对话框窗口中,搜索 Resources,选择相应的模板,然后将文件命名为 Messages.resx:

Figure 9.3 – Adding a resource file to the project
图 9.3 – 将资源文件添加到项目中

The new file will immediately open in the Visual Studio editor.
新文件将立即在 Visual Studio 编辑器中打开。

  3. The first thing to do in the new file is to select Internal or Public (based on the code visibility we want to achieve) from the Access Modifier option so that Visual Studio will create a C# file that exposes the properties to access the resources:
    在新文件中要做的第一件事是从 Access Modifier 选项中选择 Internal 或 Public (基于我们想要实现的代码可见性),以便 Visual Studio 创建一个 C# 文件,该文件公开属性以访问资源:

Figure 9.4 – Changing the Access Modifier of the resource file
图 9.4 – 更改资源文件的访问修饰符

As soon as we change this value, Visual Studio will add a Messages.Designer.cs file to the project and automatically create properties that correspond to the strings we insert in the resource file.
一旦我们更改了此值,Visual Studio 就会将 Messages.Designer.cs 文件添加到项目中,并自动创建与我们插入到资源文件中的字符串相对应的属性。

Resource files must follow a precise naming convention. The file that contains default culture messages can have any name (such as Messages.resx, as in our example), but the other .resx files that provide the corresponding translations must have the same name, with the specification of the culture (neutral or specific) to which they refer. So, we have Messages.resx, which will store default (English) messages.
资源文件必须遵循精确的命名约定。包含默认区域性消息的文件可以具有任何名称(如 Messages.resx,如本例中所示),但提供相应翻译的其他 .resx 文件必须具有相同的名称,并具有它们所引用的区域性(非特定或特定)的规范。因此,我们有 Messages.resx,它将存储默认(英文)消息。

  4. Since we also want to localize our messages in Italian, we need to create another file with the name Messages.it.resx.
    由于我们还希望将消息本地化为 Italian,因此需要创建另一个名为 Messages.it.resx 的文件。

Note : We purposely don’t create a resource file for the French culture because this way, we’ll see how ASP.NET Core looks up the localized messages in practice.
注意 : 我们特意不为法语区域性创建资源文件,因为这样,我们将看到 ASP.NET Core 在实践中如何查找本地化的消息。

  5. Now, we can start experimenting with resource files. Let’s open the Messages.resx file and set Name to HelloWorld and Value to Hello World!.
    现在,我们可以开始试验资源文件。让我们打开 Messages.resx 文件,并将 Name 设置为 HelloWorld,将 Value 设置为 Hello World!。

In this way, Visual Studio will add a static HelloWorld property in the Messages autogenerated class that allows us to access values based on the current culture.
通过这种方式,Visual Studio 将在 Messages 自动生成的类中添加一个静态 HelloWorld 属性,该属性允许我们访问基于当前区域性的值。

  6. To demonstrate this behavior, also open the Messages.it.resx file and add an item with the same Name, HelloWorld, but now set Value to the translation Ciao mondo!.
    为了演示此行为,还请打开 Messages.it.resx 文件并添加具有相同名称的项 HelloWorld,但现在将 Value 设置为翻译 Ciao mondo!。

  7. Finally, we can add a new endpoint to showcase the usage of the resource files:
    最后,我们可以添加新的端点来展示资源文件的使用情况:

// using Chapter09.Resources;
app.MapGet("/helloworld", () => Messages.HelloWorld);

In the preceding route handler, we simply access the static Messages.HelloWorld property that, as discussed before, has been automatically created while editing the Messages.resx file.
在前面的路由处理程序中,我们只需访问静态 Messages.HelloWorld 属性,如前所述,该属性是在编辑 Messages.resx 文件时自动创建的。

If we now run the minimal API and try to execute this endpoint, we’ll get the following responses based on the request language that we select in Swagger:
如果我们现在运行最小 API 并尝试执行此终端节点,我们将根据我们在 Swagger 中选择的请求语言获得以下响应:

Table 9.1 – Responses based on the request language
表 9.1 – 基于请求语言的响应

When accessing a property such as HelloWorld, the autogenerated Messages class internally uses ResourceManager to look up the corresponding localized string. First of all, it looks for a resource file whose name contains the requested culture. If it is not found, it reverts to the parent culture of that culture. This means that, if the requested culture is specific, ResourceManager searches for the neutral culture. If no resource file is still found, then the default one is used.
当访问诸如 HelloWorld 之类的属性时,自动生成的 Messages 类在内部使用 ResourceManager 来查找相应的本地化字符串。首先,它查找其名称包含所请求区域性的资源文件。如果未找到,它将还原为该区域性的父区域性。这意味着,如果请求的区域性是特定的,则 ResourceManager 会搜索非特定区域性。如果仍未找到资源文件,则使用默认资源文件。

In our case, using Swagger, we can select only English, Italian, or French as a neutral culture. But what happens if a client sends other values? We can have situations such as the following:
在我们的示例中,使用 Swagger,我们只能选择英语、意大利语或法语作为非特定区域性。但是,如果客户端发送其他值,会发生什么情况呢?我们可能会遇到以下情况:

• The request culture is it-IT: the system searches for Messages.it-IT.resx and then finds and uses Messages.it.resx.
请求区域性是 it-IT:系统搜索 Messages.it-IT.resx,然后查找并使用 Messages.it.resx。

• The request culture is fr-FR: the system searches for Messages.fr-FR.resx, then Messages.fr.resx, and (because neither are available) finally uses the default, Messages.resx.
请求区域性是 fr-FR:系统搜索 Messages.fr-FR.resx,然后搜索 Messages.fr.resx,最后(因为两者都不可用)使用默认的 Messages.resx。

• The request culture is de (German): because this isn’t a supported culture at all, the default request culture will be automatically selected, so strings will be searched for in the Messages.resx file.
请求区域性为 de (德语) :由于这根本不是受支持的区域性,因此将自动选择默认请求区域性,因此将在 Messages.resx 文件中搜索字符串。
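The parent-culture fallback described above can be observed directly on CultureInfo, whose Parent property walks the same chain that ResourceManager probes:
上述父区域性回退可以直接在 CultureInfo 上观察到,其 Parent 属性遍历的正是 ResourceManager 探测的链:

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;

// Walk the fallback chain for a specific culture: it-IT -> it -> invariant.
var chain = new List<string>();
var culture = new CultureInfo("it-IT");
while (!culture.Equals(CultureInfo.InvariantCulture))
{
    chain.Add(culture.Name);
    culture = culture.Parent;
}

// This is the same order in which ResourceManager probes
// Messages.it-IT.resx, Messages.it.resx, and finally Messages.resx.
Console.WriteLine(string.Join(" -> ", chain));  // it-IT -> it
```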

Note : If a localized resource file exists, but it doesn’t contain the specified key, then the value of the default file will be used.
注意 : 如果本地化资源文件存在,但不包含指定的键,则将使用默认文件的值。

Formatting localized messages using resource files
使用资源文件设置本地化消息的格式

We can also use resource files to format localized messages. For example, we can add the following strings to the resource files of the project:
我们还可以使用 resource 文件来格式化本地化的消息。例如,我们可以将以下字符串添加到项目的资源文件中:

Table 9.2 – A custom localized message
表 9.2 – 自定义本地化消息

Now, let’s define this endpoint:
现在,让我们定义这个端点:

// using Chapter09.Resources;
app.MapGet("/hello", (string name) =>
{
    var message = string.Format(Messages.GreetingMessage, name);
    return message;
});

As in the preceding code example, we get a string from a resource file according to the culture of the request. But, in this case, the message contains a placeholder, so we can use it to create a custom localized message using the name that is passed to the route handler. If we try to execute the endpoint, we will get results such as these:
与前面的代码示例一样,我们根据请求的区域性从资源文件中获取字符串。但是,在这种情况下,消息包含一个占位符,因此我们可以使用它来使用传递给路由处理程序的名称创建自定义本地化消息。如果我们尝试执行端点,我们将得到如下结果:

Table 9.3 – Responses with custom localized messages based on the request language
表 9.3 – 使用基于请求语言的自定义本地化消息的响应

The possibility to create localized messages with placeholders that are replaced at runtime using different values is a key point for creating truly localizable services.
创建带有占位符的本地化消息的可能性,这些占位符在运行时使用不同的值替换,这是创建真正可本地化服务的关键点。
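The placeholder mechanics can be sketched in plain C#; here a dictionary stands in for the resource files, and the template strings are assumptions for illustration:
占位符机制可以用纯 C# 来演示;这里用字典代替资源文件,模板字符串仅为示例假设:

```csharp
using System;
using System.Collections.Generic;

// Stand-in for the GreetingMessage resource entries; the actual .resx values are assumptions.
var greetings = new Dictionary<string, string>
{
    [""] = "Hello, {0}!",     // default culture (Messages.resx)
    ["it"] = "Ciao, {0}!",    // Messages.it.resx
};

// Pick the template the way the autogenerated class would, then fill the placeholder.
string Greet(string cultureName, string name)
{
    var template = greetings.TryGetValue(cultureName, out var t) ? t : greetings[""];
    return string.Format(template, name);
}

Console.WriteLine(Greet("it", "Marco"));  // Ciao, Marco!
Console.WriteLine(Greet("fr", "Marco"));  // falls back to the default: Hello, Marco!
```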

In the beginning, we said that a typical use case of localization in web APIs is when we need to provide localized error messages upon validation. In the next section, we’ll see how to add this feature to our minimal API.
一开始,我们说过 Web API 中本地化的一个典型用例是我们需要在验证时提供本地化的错误消息。在下一节中,我们将了解如何将此功能添加到我们的最小 API 中。

Integrating localization in validation frameworks
将本地化集成到验证框架中

In Chapter 6, Exploring Validation and Mapping, we talked about how to integrate validation into a minimal API project. We learned how to use the MiniValidation library, rather than FluentValidation, to validate our models and provide validation messages to the callers. We also said that FluentValidation already provides translations for standard error messages.
在 第 6 章 探索验证和映射 中,我们讨论了如何将验证集成到一个最小的 API 项目中。我们学习了如何使用 MiniValidation 库(而不是 FluentValidation)来验证我们的模型并向调用者提供验证消息。我们还说过,FluentValidation 已经为标准错误消息提供了翻译。

However, with both libraries, we can leverage the localization support we have just added to our project to support localized and custom validation messages.
但是,对于这两个库,我们可以利用刚刚添加到项目中的本地化支持来支持本地化和自定义验证消息。

Localizing validation messages with MiniValidation
使用 MiniValidation 本地化验证消息

Using the MiniValidation library, we can use validation based on Data Annotations with minimal APIs. Refer to Chapter 6, Exploring Validation and Mapping, for instructions on how to add this library to the project.
使用 MiniValidation 库,我们可以在最小 API 中使用基于数据注释的验证。有关如何将此库添加到项目中的说明,请参阅第 6 章 探索验证和映射。

Then, recreate the same Person class:
然后,重新创建相同的 Person 类:

public class Person
{
    [Required]
    [MaxLength(30)]
    public string FirstName { get; set; }

    [Required]
    [MaxLength(30)]
    public string LastName { get; set; }

    [EmailAddress]
    [StringLength(100, MinimumLength = 6)]
    public string Email { get; set; }
}

Every validation attribute allows us to specify an error message, which can be a static string or a reference to a resource file. Let’s see how to correctly handle the localization for the Required attribute. Add the following values in resource files:
每个 validation 属性都允许我们指定一条错误消息,它可以是静态字符串或对资源文件的引用。让我们看看如何正确处理 Required 属性的本地化。在资源文件中添加以下值:

Table 9.4 – Localized validation error messages used by Data Annotations
表 9.4 – 数据注释使用的本地化验证错误消息

We want it so that when a required validation rule fails, the localized message that corresponds to FieldRequiredAnnotation is returned. Moreover, this message contains a placeholder, because we want to use it for every required field, so we also need the translation of property names.
我们希望,当必需的验证规则失败时,将返回与 FieldRequiredAnnotation 对应的本地化消息。此外,此消息包含一个占位符,因为我们希望将其用于每个必填字段,因此我们还需要属性名称的翻译。

With these resources, we can update the Person class with the following declarations:
有了这些资源,我们可以使用以下声明更新 Person 类:

public class Person
{
    [Display(Name = "FirstName", ResourceType = typeof(Messages))]
    [Required(ErrorMessageResourceName = "FieldRequiredAnnotation",
        ErrorMessageResourceType = typeof(Messages))]
    public string FirstName { get; set; }
    //...
}

Each validation attribute, such as Required (as used in this example), exposes properties that allow us to specify the name of the resource to use and the type of class that contains the corresponding definition. Keep in mind that the name is a simple string, with no check at compile time, so if we write an incorrect value, we’ll only get an error at runtime.
每个验证属性(如 Required(如本例中所示))都公开了允许我们指定要使用的资源的名称以及包含相应定义的类类型的属性。请记住,名称是一个简单的字符串,在编译时没有检查,因此如果我们写入了不正确的值,我们只会在运行时收到错误。

Next, we can use the Display attribute to also specify the name of the field that must be inserted in the validation message.
接下来,我们还可以使用 Display 属性来指定必须插入到验证消息中的字段的名称。

Note : You can find the complete declaration of the Person class with localized data annotations on the GitHub repository at https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/blob/main/Chapter09/Program.cs#L97.
注意 : 您可以在 GitHub 存储库的 https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/blob/main/Chapter09/Program.cs#L97 上找到带有本地化数据注释的 Person 类的完整声明。

Now we can re-add the validation code shown in Chapter 6, Exploring Validation and Mapping. The difference is that now the validation messages will be localized:
现在我们可以重新添加第 6 章 探索验证和映射 中所示的验证代码。不同之处在于,现在验证消息将被本地化:

app.MapPost("/people", (Person person) =>
{
    var isValid = MiniValidator.TryValidate(person, out var errors);
    if (!isValid)
    {
        return Results.ValidationProblem(errors, title: Messages.ValidationErrors);
    }

    return Results.NoContent();
});

In the preceding code, the messages contained in the errors dictionary that is returned by the MiniValidator.TryValidate() method will be localized according to the request culture, as described in the previous sections. We also specify the title parameter in the Results.ValidationProblem() invocation because we want to localize this value too (otherwise, it will always be the default One or more validation errors occurred).
在上面的代码中,MiniValidator.TryValidate() 方法返回的 errors 字典中包含的消息将根据请求区域性进行本地化,如前面的部分所述。我们还在 Results.ValidationProblem() 调用中指定了 title 参数,因为我们也希望本地化此值(否则,它将始终为默认的 One or more validation errors occurred)。

If instead of data annotations, we prefer using FluentValidation, we know that it supports localization of standard error messages by default from Chapter 6, Exploring Validation and Mapping. However, with this library, we can also provide our translations. In the next section, we’ll talk about implementing this solution.
如果我们更喜欢使用 FluentValidation 而不是数据注释,那么我们知道它默认支持第 6 章 探索验证和映射 中的标准错误消息的本地化。但是,有了这个库,我们也可以提供我们的翻译。在下一节中,我们将讨论如何实现此解决方案。

Localizing validation messages with FluentValidation
使用 FluentValidation 本地化验证消息

With FluentValidation, we can totally decouple the validation rules from our models. As said before, refer to Chapter 6, Exploring Validation and Mapping, for instructions on how to add this library to the project and how to configure it.
使用 FluentValidation,我们可以将验证规则与我们的模型完全解耦。如前所述,请参阅 第 6 章 探索验证和映射 ,以获取有关如何将此库添加到项目以及如何配置它的说明。

Next, let us recreate the PersonValidator class:
接下来,让我们重新创建 PersonValidator 类:

public class PersonValidator : AbstractValidator<Person>
{
    public PersonValidator()
    {
        RuleFor(p => p.FirstName).NotEmpty().MaximumLength(30);
        RuleFor(p => p.LastName).NotEmpty().MaximumLength(30);
        RuleFor(p => p.Email).EmailAddress().Length(6, 100);
    }
}

If we don’t specify any messages, the default ones will be used. Let’s add the following resource to customize the NotEmpty validation rule:
如果我们没有指定任何消息,则将使用默认消息。让我们添加以下资源来自定义 NotEmpty 验证规则:

Table 9.5 – The localized validation error messages used by FluentValidation
表 9.5 – FluentValidation 使用的本地化验证错误消息

Note that, in this case, we also have a placeholder that will be replaced by the property name. However, unlike data annotations, FluentValidation uses a named placeholder, which makes its meaning easier to identify.
请注意,在本例中,我们同样有一个将被属性名称替换的占位符。但是,与数据注释不同,FluentValidation 使用带名称的占位符,以便更好地识别其含义。
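A simplified mimic of this named-placeholder substitution looks as follows; FluentValidation’s real message formatter is more sophisticated, and the Italian template here is an assumed resource value:
以下是这种命名占位符替换的简化模拟;FluentValidation 真正的消息格式化程序更为复杂,其中的意大利语模板是假设的资源值:

```csharp
using System;

// Simplified mimic of named-placeholder substitution; it only illustrates the idea
// of {PropertyName} being replaced by the (possibly translated) property name.
string FormatMessage(string template, string propertyName)
    => template.Replace("{PropertyName}", propertyName);

var italianTemplate = "{PropertyName} non può essere vuoto";  // assumed resource value
Console.WriteLine(FormatMessage(italianTemplate, "Nome"));    // Nome non può essere vuoto
```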

Now, we can add this message in the validator, for example, for the FirstName property:
现在,我们可以在验证器中添加以下消息,例如,对于 FirstName 属性:

RuleFor(p => p.FirstName).NotEmpty()
    .WithMessage(Messages.NotEmptyMessage)
    .WithName(Messages.FirstName);

We use WithMessage() to specify the message that must be used when the preceding rule fails, following which we add the WithName() invocation to overwrite the default property name used for the {PropertyName} placeholder of the message.
我们使用 WithMessage() 指定在前面的规则失败时必须使用的消息,然后我们添加 WithName() 调用以覆盖用于消息的 {PropertyName} 占位符的默认属性名称。

Note : You can find the complete implementation of the PersonValidator class with localized messages in the GitHub repository at https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/blob/main/Chapter09/Program.cs#L129.
注意 : 您可以在 GitHub 存储库中找到 PersonValidator 类的完整实现以及本地化消息,网址为 https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/blob/main/Chapter09/Program.cs#L129

Finally, we can leverage the localized validator in our endpoint, as we did in Chapter 6, Exploring Validation and Mapping:
最后,我们可以在端点中利用本地化的验证器,就像我们在第 6 章 探索验证和映射中所做的那样:

app.MapPost("/people", async (Person person, IValidator<Person> validator) =>
{
    var validationResult = await validator.ValidateAsync(person);
    if (!validationResult.IsValid)
    {
        var errors = validationResult.ToDictionary();
        return Results.ValidationProblem(errors, title: Messages.ValidationErrors);
    }

    return Results.NoContent();
});

As in the case of data annotations, the validationResult variable will contain localized error messages that we return to the caller using the Results.ValidationProblem() method (again, with the definition of the title property).
与数据注释一样,validationResult 变量将包含本地化的错误消息,我们使用 Results.ValidationProblem() 方法(同样,使用 title 属性的定义)将这些错误消息返回给调用者。

Tip : In our example, we have seen how to explicitly assign translations for each property using the WithMessage() method. FluentValidation also provides a way to replace all (or some) of its default messages. You can find more information in the official documentation at https://docs.fluentvalidation.net/en/latest/localization.xhtml#default-messages.
提示 : 在我们的示例中,我们已经看到了如何使用 WithMessage() 方法为每个属性显式分配翻译。FluentValidation 还提供了一种替换其所有(或部分)默认消息的方法。您可以在 https://docs.fluentvalidation.net/en/latest/localization.xhtml#default-messages 的官方文档中找到更多信息。

This ends our overview of localization using resource files. Next, we’ll talk about an important topic when dealing with services that are meant to be used worldwide: the correct handling of different time zones.
我们对使用资源文件的本地化的概述到此结束。接下来,我们将讨论在处理旨在在全球范围内使用的服务时的一个重要话题:正确处理不同的时区。

Adding UTC support to a globalized minimal API
向全球化的最小 API 添加 UTC 支持

So far, we have added globalization and localization support to our minimal API because we want it to be used by the widest audience possible, irrespective of culture. But, if we think about being accessible to a worldwide audience, we should consider several aspects related to globalization. Globalization does not only pertain to language support; there are important factors we need to consider, for example, geographic locations, as well as time zones.
到目前为止,我们已经在我们的最小 API 中添加了全球化和本地化支持,因为我们希望它被尽可能广泛的受众使用,而不受文化影响。但是,如果我们考虑让全世界的受众都能接触到,我们应该考虑与全球化相关的几个方面。全球化不仅与语言支持有关;我们需要考虑一些重要因素,例如地理位置和时区。

So, for example, we can have our minimal API running in Italy, which follows Central European Time (CET) (GMT+1), while our clients can use browsers that execute a single-page application, rather than mobile apps, all over the world. We could also have a database server that contains our data, and this could be in another time zone. Moreover, at a certain point, it may be necessary to provide better support for worldwide users, so we’ll have to move our service to another location, which could have a new time zone. In conclusion, our system could deal with data in different time zones, and, potentially, the same services could switch time zones during their lives.
因此,例如,我们可以在意大利运行我们的最小 API,它遵循中欧时间 (CET) (GMT+1),而我们的客户可以使用执行单页应用程序的浏览器,而不是世界各地的移动应用程序。我们还可以有一个包含我们数据的数据库服务器,它可以在另一个时区。此外,在某个时候,可能需要为全球用户提供更好的支持,因此我们将不得不将我们的服务转移到另一个位置,该位置可能具有新的时区。总之,我们的系统可以处理不同时区的数据,并且相同的服务在其生命周期中可能会切换时区。

In these situations, the ideal solution is working with DateTimeOffset, a data type that includes time zones and that JsonSerializer fully supports, preserving time zone information during serialization and deserialization. If we could always use it, we’d automatically solve any problem related to globalization, because converting a DateTimeOffset value to a different time zone is straightforward. However, there are cases in which we can’t handle the DateTimeOffset type, for example:
在这些情况下,理想的解决方案是使用 DateTimeOffset,这是一种包含时区的数据类型,并且 JsonSerializer 完全支持,在序列化和反序列化期间保留时区信息。如果我们始终可以使用它,我们就会自动解决与全球化相关的任何问题,因为将 DateTimeOffset 值转换为不同的时区非常简单。但是,在某些情况下,我们无法处理 DateTimeOffset 类型,例如:

• When we’re working on a legacy system that relies on DateTime everywhere, updating the code to use DateTimeOffset isn’t an option because it requires too many changes and breaks the compatibility with the old data.
当我们在无处不在都依赖 DateTime 的旧系统上工作时,更新代码以使用 DateTimeOffset 不是一个选项,因为它需要太多更改并破坏与旧数据的兼容性。

• We have a database server such as MySQL that doesn’t have a column type for storing DateTimeOffset directly, so handling it requires extra effort, for example, using two separate columns, increasing the complexity of the domain.
我们有一个数据库服务器,例如 MySQL,它没有用于直接存储 DateTimeOffset 的列类型,因此处理它需要额外的工作,例如,使用两个单独的列,这增加了域的复杂性。

• In some cases, we simply aren’t interested in sending, receiving, and saving time zones – we just want to handle time in a “universal” way.
在某些情况下,我们只是对发送、接收和保存时区不感兴趣——我们只想以 “通用” 的方式处理时间。

So, in all the scenarios where we can’t or don’t want to use the DateTimeOffset data type, one of the best and simplest ways to deal with different time zones is to handle all dates using Coordinated Universal Time (UTC): the service must assume that the dates it receives are in the UTC format and, on the other hand, all the dates returned by the API must be in UTC.
因此,在我们不能或不想使用 DateTimeOffset 数据类型的所有情况下,处理不同时区的最佳和最简单的方法之一是使用协调世界时 (UTC) 处理所有日期:服务必须假定它收到的日期是 UTC 格式,另一方面, API 返回的所有日期都必须采用 UTC 格式。

Of course, we must handle this behavior in a centralized way; we don’t want to have to remember to apply the conversion to and from the UTC format every time we receive or send a date. The well-known JSON.NET library provides an option to specify how to treat the time value when working with a DateTime property, allowing it to automatically handle all dates as UTC and convert them to that format if they represent a local time. However, the current version of Microsoft JsonSerializer used in minimal APIs doesn’t include such a feature. From Chapter 2, Exploring Minimal APIs and Their Advantages, we know that we cannot change the default JSON serializer in minimal APIs, but we can overcome this lack of UTC support by creating a simple JsonConverter:
当然,我们必须以集中的方式处理这种行为;我们不想记住在每次接收或发送日期时都要应用与 UTC 格式之间的转换。众所周知的 JSON.NET 库提供了一个选项,用于指定在使用 DateTime 属性时如何处理时间值,从而允许它自动将所有日期作为 UTC 处理,并在它们表示本地时间时将其转换为该格式。但是,最小 API 中使用的 Microsoft JsonSerializer 的当前版本不包含此类功能。从第 2 章 探索最小 API 及其优势中,我们知道我们无法在最小 API 中更改默认的 JSON 序列化器,但是我们可以通过创建一个简单的 JsonConverter 来克服缺乏 UTC 支持的问题:

public class UtcDateTimeConverter : JsonConverter<DateTime>
{
    public override DateTime Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
        => reader.GetDateTime().ToUniversalTime();

    public override void Write(Utf8JsonWriter writer, DateTime value, JsonSerializerOptions options)
        => writer.WriteStringValue((value.Kind == DateTimeKind.Local ? value.ToUniversalTime() : value)
            .ToString("yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fffffff'Z'"));
}

With this converter, we tell JsonSerializer how to treat DateTime properties:
通过这个转换器,我们告诉 JsonSerializer 如何处理 DateTime 属性:

• When DateTime is read from JSON, the value is converted to UTC using the ToUniversalTime() method.
从 JSON 中读取 DateTime 时,将使用 ToUniversalTime() 方法将该值转换为 UTC。

• When DateTime must be written to JSON, if it represents a local time (DateTimeKind.Local), it is converted to UTC before serialization – then, it is serialized using the Z suffix, which indicates that the time is UTC.
当必须将 DateTime 写入 JSON 时,如果它表示本地时间 (DateTimeKind.Local),则会在序列化之前将其转换为 UTC – 然后,它将使用 Z 后缀进行序列化,这表示时间为 UTC。

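The converter’s two steps can be reproduced in isolation with a plain DateTime; the serializer wiring is omitted here:
转换器的两个步骤可以用普通的 DateTime 单独重现;此处省略了序列化器的连接部分:

```csharp
using System;
using System.Globalization;

// Parsing an offset-bearing string yields a Local-kind DateTime in the machine's time zone...
var local = DateTime.Parse("2022-03-06T16:42:37-05:00", CultureInfo.InvariantCulture);
Console.WriteLine(local.Kind);  // Local

// ...which the converter's Write path normalizes to UTC and formats with the 'Z' suffix.
var utc = local.ToUniversalTime();
var json = utc.ToString("yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fffffff'Z'");
Console.WriteLine(json);  // 2022-03-06T21:42:37.0000000Z
```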
Now, before using this converter, let’s add the following endpoint definition:
现在,在使用此转换器之前,让我们添加以下端点定义:

app.MapPost("/date", (DateInput date) =>
{
    return Results.Ok(new
    {
        Input = date.Value,
        DateKind = date.Value.Kind.ToString(),
        ServerDate = DateTime.Now
    });
});

public record DateInput(DateTime Value);

Let’s try to call it, for example, with a date formatted as 2022-03-06T16:42:37-05:00. We’ll obtain something similar to the following:
例如,让我们尝试使用格式为 2022-03-06T16:42:37-05:00 的日期来调用它。我们将获得类似于以下内容的内容:

{
  "input": "2022-03-06T22:42:37+01:00",
  "dateKind": "Local",
  "serverDate": "2022-03-07T18:33:17.0288535+01:00"
}

The input date, containing a time zone, has automatically been converted to the local time of the server (in this case, the server is running in Italy, as stated at the beginning), as also demonstrated by the dateKind field. Moreover, serverDate contains a date that is relative to the server time zone.
包含时区的输入日期已自动转换为服务器的本地时间(在本例中,服务器在意大利运行,如开头所述),dateKind 字段也演示了该日期。此外, serverDate 包含相对于服务器时区的日期。

Now, let’s add UtcDateTimeConverter to JsonSerializer:
现在,让我们将 UtcDateTimeConverter 添加到 JsonSerializer 中:

var builder = WebApplication.CreateBuilder(args);
//...
builder.Services.Configure<Microsoft.AspNetCore.Http.Json.JsonOptions>(options =>
{
    options.SerializerOptions.Converters.Add(new UtcDateTimeConverter());
});

With this configuration, every DateTime property will be processed using our custom converters. Now, execute the endpoint again, using the same input as before. This time, the result will be as follows:
使用此配置,每个 DateTime 属性都将使用我们的自定义转换器进行处理。现在,使用与之前相同的输入再次执行终端节点。这一次,结果将如下所示:

{
  "input": "2022-03-06T21:42:37.0000000Z",
  "dateKind": "Utc",
  "serverDate": "2022-03-06T17:40:08.1472051Z"
}

The input is the same, but our UtcDateTimeConverter has now converted the date to UTC and, on the other hand, has serialized the server date as UTC; now, our API, in a centralized way, can automatically handle all dates as UTC, no matter its time zone or the time zones of the callers.
输入是相同的,但是我们的 UtcDateTimeConverter 现在已经将日期转换为 UTC,另一方面,已将服务器日期序列化为 UTC;现在,我们的 API 以集中的方式自动将所有日期处理为 UTC,无论其时区或调用者的时区如何。

Finally, there are two other points to make all the systems correctly work with UTC:
最后,还有另外两点可以使所有系统正确地使用 UTC:

• When we need to retrieve the current date in the code, we always have to use DateTime.UtcNow instead of DateTime.Now
当我们需要在代码中检索当前日期时,我们始终必须使用 DateTime.UtcNow 而不是 DateTime.Now

• Client applications must know that they will receive the date in UTC format and act accordingly, for example, invoking the ToLocalTime() method
客户端应用程序必须知道它们将收到 UTC 格式的日期并采取相应的措施,例如,调用 ToLocalTime() 方法
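The Kind property makes both rules easy to verify:
Kind 属性使这两条规则都很容易验证:

```csharp
using System;

// Rule 1: generate timestamps with DateTime.UtcNow, so Kind is Utc from the start.
var stamp = DateTime.UtcNow;
Console.WriteLine(stamp.Kind);    // Utc

// Rule 2: a client that receives a UTC value converts it only for display.
var display = stamp.ToLocalTime();
Console.WriteLine(display.Kind);  // Local
```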

In this way, the minimal API is truly globalized and can work with any time zone; without having to worry about explicit conversion, all times input or output will be always in UTC, so it will be much easier to handle them.
通过这种方式,最小的 API 是真正全球化的,并且可以在任何时区工作;无需担心显式转换,所有时间 input 或 output 都将始终为 UTC,因此处理它们会容易得多。

Summary
总结

Developing minimal APIs with globalization and localization support in mind is fundamental in an interconnected world. ASP.NET Core includes all the features needed to create services that can react to the culture of the user and provide translations based on the request language: the usage of localization middleware, resource files, and custom validation messages allows the creation of services that can support virtually every culture. We have also talked about the globalization-related problems that could arise when working with different time zones and shown how to solve it using the centralized UTC date time format so that our APIs can seamlessly work irrespective of the geographic location and time zone of clients.
在考虑全球化和本地化支持的情况下开发最少的 API 是互联世界的基础。ASP.NET Core 包括创建服务所需的所有功能,这些服务可以响应用户文化并根据请求语言提供翻译:使用本地化中间件、资源文件和自定义验证消息,可以创建几乎可以支持所有文化的服务。我们还讨论了使用不同时区时可能出现的全球化相关问题,并展示了如何使用集中式 UTC 日期时间格式来解决这个问题,以便我们的 API 可以无缝工作,而不受客户的地理位置和时区的影响。

In Chapter 10, Evaluating and Benchmarking the Performance of Minimal APIs, we will talk about why minimal APIs were created and analyze the performance benefits of using minimal APIs over the classic controller-based approach.
在第 10 章 评估最小 API 的性能并对其进行基准测试中,我们将讨论创建最小 API 的原因,并分析使用最小 API 相对于基于控制器的经典方法的性能优势。

10 Evaluating and Benchmarking the Performance of Minimal APIs

评估最小 API 的性能并对其进行基准测试

The purpose of this chapter is to understand one of the motivations for which the minimal APIs framework was created.
本章的目的是了解创建最小 API 框架的动机之一。

This chapter will provide some obvious data and examples of how you can measure the performance of an ASP.NET 6 application using the traditional approach as well as how you can measure the performance of an ASP.NET application using the minimal API approach.
本章将提供一些明显的数据和示例,说明如何使用传统方法测量 ASP.NET 6 应用程序的性能,以及如何使用最小 API 方法测量 ASP.NET 应用程序的性能。

Performance is key to any functioning application; however, very often it takes a back seat.
性能是任何正常运行的应用程序的关键;然而,它经常退居二线。

A performant and scalable application depends not only on our code but also on the development stack. Today, we have moved on from the .NET full framework and .NET Core to .NET and can start to appreciate the performance that the new .NET has achieved, version after version – not only with the introduction of new features and the clarity of the framework but also primarily because the framework has been completely rewritten and improved with many features that have made it fast and very competitive compared to other languages.
高性能和可扩展的应用程序不仅取决于我们的代码,还取决于开发堆栈。今天,我们已经从 .NET 完整框架和 .NET Core 转向 .NET,并且可以开始欣赏新 .NET 所实现的性能,一个版本又一个版本 - 不仅引入了新功能和框架的清晰度,而且主要是因为该框架已被完全重写和改进,具有许多功能,与其他语言相比,这些功能使其速度更快且非常有竞争力。

In this chapter, we will evaluate the performance of the minimal API by comparing its code with identical code that has been developed traditionally. We’ll understand how to evaluate the performance of a web application, taking advantage of the BenchmarkDotNet framework, which can be useful in other application scenarios.
在本章中,我们将通过将最小 API 的代码与传统开发的相同代码进行比较来评估最小 API 的性能。我们将了解如何利用 BenchmarkDotNet 框架评估 Web 应用程序的性能,该框架在其他应用程序场景中可能很有用。

With minimal APIs, we have a new simplified framework that helps improve performance by leaving out some components that we take for granted with ASP.NET.
通过最少的 API,我们有一个新的简化框架,它通过省略一些我们认为理所当然的组件来帮助提高性能 ASP.NET。

The themes we will touch on in this chapter are as follows:
我们将在本章中讨论的主题如下:

• Improvements with minimal APIs
使用最少的 API 进行改进

• Exploring performance with load tests
通过负载测试探索性能

• Benchmarking minimal APIs with BenchmarkDotNet
使用 BenchmarkDotNet 对最小 API 进行基准测试

Technical requirements
技术要求

Many systems can help us test the performance of a framework.
许多系统可以帮助我们测试框架的性能。

We can measure how many requests per second one application can handle compared to another, assuming equal application load. In this case, we are talking about load testing.
我们可以测量一个应用程序每秒可以处理多少个请求,假设应用程序负载相同。在本例中,我们谈论的是负载测试。

To put the minimal APIs on the test bench, we need to install k6, the framework we will use for conducting our tests.
要将最小的 API 放在测试台上,我们需要安装 k6,我们将用于执行测试的框架。

We will launch load testing on a Windows machine with only .NET applications running.
我们将在仅运行 .NET 应用程序的 Windows 计算机上启动负载测试。

To install k6, you can do either one of the following:
要安装 k6,您可以执行以下任一操作:

• If you’re using the Chocolatey package manager (https://chocolatey.org/), you can install the unofficial k6 package with the following command:
如果您使用的是 Chocolatey 包管理器 (https://chocolatey.org/),您可以使用以下命令安装非官方的 k6 包:

choco install k6

• If you’re using Windows Package Manager (https://github.com/microsoft/winget-cli), you can install the official package from the k6 manifests with this command:
如果您使用的是 Windows Package Manager (https://github.com/microsoft/winget-cli),则可以使用以下命令从 k6 清单安装官方软件包:

winget install k6

• You can also test your application published on the internet with Docker:
您还可以使用 Docker 测试在 Internet 上发布的应用程序:

docker pull loadimpact/k6

• Or as we did, we installed k6 on the Windows machine and launched everything from the command line. You can download k6 from this link: https://dl.k6.io/msi/k6-latest-amd64.msi.
或者,我们在 Windows 计算机上安装了 k6 并从命令行启动所有内容。您可以从以下链接下载 k6:https://dl.k6.io/msi/k6-latest-amd64.msi

In the final part of the chapter, we’ll measure the duration of the HTTP method for making calls to the API.
在本章的最后一部分,我们将测量 HTTP 方法调用 API 的持续时间。

We’ll stand at the end of the system as if the API were a black box and measure the reaction time. BenchmarkDotNet is the tool we’ll be using – to include it in our project, we need to reference its NuGet package:
我们将站在系统的末端,就好像 API 是一个黑匣子一样,并测量反应时间。BenchmarkDotNet 是我们将要使用的工具 - 要将其包含在我们的项目中,我们需要引用其 NuGet 包:

dotnet add package BenchmarkDotNet

All the code samples in this chapter can be found in the GitHub repository for this book at the following link:
本章中的所有代码示例都可以在本书的 GitHub 存储库中找到,链接如下:
https://github.com/PacktPublishing/Minimal-APIs-in-ASP.NET-Core-6/tree/main/Chapter10

Improvements with minimal APIs
使用最少的 API 进行改进

Minimal APIs were designed not only to improve the performance of APIs but also for better code convenience and similarity to other languages to bring developers from other platforms closer. Performance has increased both from the point of view of the .NET framework, as each version has incredible improvements, as well as from the point of view of the simplification of the application pipeline. Let’s see in detail what has not been ported and what improves the performance of this framework.
Minimal API 的设计不仅是为了提高 API 的性能,也是为了更好的代码便利性和与其他语言的相似性,从而拉近来自其他平台的开发人员的距离。从 .NET Framework 的角度来看,性能都有所提高,因为每个版本都有令人难以置信的改进,而且从应用程序管道的简化的角度来看也是如此。让我们详细看看哪些内容尚未移植,哪些内容提高了此框架的性能。

The minimal APIs execution pipeline omits the following features, which makes the framework lighter:
最小 API 执行管道省略了以下功能,这使得框架更轻量级:

• Filters, such as IAsyncAuthorizationFilter, IAsyncActionFilter, IAsyncExceptionFilter, IAsyncResultFilter, and IAsyncResourceFilter
• Model binding
• Binding for forms, such as IFormFile
• Built-in validation
• Formatters
• Content negotiations
• Some middleware
• View rendering
• JsonPatch
• OData
• API versioning
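
Since built-in validation is among the omitted features, a minimal API endpoint that needs validation must run it explicitly. The following is a hedged sketch using the standard `System.ComponentModel.DataAnnotations` `Validator`; the endpoint and model mirror the validations example used later in this chapter, but the exact wiring here is an illustration, not the book's code:

```csharp
using System.ComponentModel.DataAnnotations;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Manually validate the bound model, since minimal APIs skip built-in validation.
app.MapPost("validations", (ValidationData data) =>
{
    var results = new List<ValidationResult>();
    var isValid = Validator.TryValidateObject(
        data, new ValidationContext(data), results, validateAllProperties: true);

    return isValid
        ? Results.Ok(data)
        : Results.ValidationProblem(results
            .GroupBy(r => r.MemberNames.FirstOrDefault() ?? string.Empty)
            .ToDictionary(g => g.Key,
                          g => g.Select(r => r.ErrorMessage ?? string.Empty).ToArray()));
});

app.Run();

public class ValidationData
{
    [Required]
    public int Id { get; set; }

    [Required]
    [StringLength(100)]
    public string Description { get; set; }
}
```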

Performance Improvements in .NET 6
.NET 6 中的性能改进

Version after version, .NET improves its performance. In the latest version of the framework, improvements made over previous versions have been reported. Here’s where you can find a complete summary of what’s new in .NET 6:
一个又一个版本,.NET 提高了其性能。在最新版本的框架中,报告了对以前版本所做的改进。您可以在此处找到 .NET 6 中新增功能的完整摘要:
https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-6/

Exploring performance with load tests
通过负载测试探索性能

How to estimate the performance of minimal APIs? There are many points of view to consider and in this chapter, we will try to address them from the point of view of the load they can support. We decided to adopt a tool – k6 – that performs load tests on a web application and tells us how many requests per second can a minimal API handle.
如何估算最小 API 的性能?有许多观点需要考虑,在本章中,我们将尝试从它们可以支持的负载的角度来解决这些问题。我们决定采用一种工具 k6,它在 Web 应用程序上执行负载测试,并告诉我们最小 API 每秒可以处理多少个请求。

As described by its creators, k6 is an open source load testing tool that makes performance testing easy and productive for engineering teams. The tool is free, developer-centric, and extensible. Using k6, you can test the reliability and performance of your systems and catch performance regressions and problems earlier. This tool will help you to build resilient and performant applications that scale.
正如其创建者所描述的那样,k6 是一种开源负载测试工具,它使工程团队的性能测试变得简单而高效。该工具是免费的、以开发人员为中心且可扩展的。使用 k6,您可以测试系统的可靠性和性能,并更早地捕获性能回归和问题。此工具将帮助您构建可扩展的弹性和高性能应用程序。

In our case, we would like to use the tool for performance evaluation and not for load testing. Many parameters should be considered during load testing, but we will only focus on the http_reqs index, which indicates how many requests have been handled correctly by the system.
在我们的例子中,我们希望使用该工具进行性能评估,而不是进行负载测试。在负载测试期间应考虑许多参数,但我们只关注 http_reqs 指数,它表示系统正确处理了多少个请求。

We agree with the creators of k6 about the purpose of our test, namely performance and synthetic monitoring.
我们同意 k6 的创建者关于我们测试的目的,即性能和综合监控。

Use cases
使用案例

k6 users are typically developers, QA engineers, SDETs, and SREs. They use k6 for testing the performance and reliability of APIs, microservices, and websites. Common k6 use cases include the following:
k6 用户通常是开发人员、QA 工程师、SDET 和 SRE。他们使用 k6 来测试 API、微服务和网站的性能和可靠性。常见的 k6 使用案例包括:

• Load testing: k6 is optimized for minimal resource consumption and designed for running high load tests (spike, stress, and soak tests).
负载测试:k6 针对最小资源消耗进行了优化,专为运行高负载测试(峰值、压力和浸泡测试)而设计。

• Performance and synthetic monitoring: With k6, you can run tests with a small load to continuously validate the performance and availability of your production environment.
性能和综合监控:使用 k6,您可以运行小负载测试,以持续验证生产环境的性能和可用性。

• Chaos and reliability testing: k6 provides an extensible architecture. You can use k6 to simulate traffic as part of your chaos experiments or trigger them from your k6 tests.
混沌和可靠性测试:k6 提供可扩展的架构。您可以使用 k6 在混沌实验中模拟流量,也可以从 k6 测试中触发流量。

However, we have to make several assumptions if we want to evaluate the application from the point of view just described. When a load test is performed, it is usually much more complex than the ones we will perform in this section. When an application is bombarded with requests, not all of them will be successful. We can say that the test passed successfully if only a very small percentage of the responses failed. In particular, we usually take the 95th or 98th percentile of outcomes as the statistic from which to derive the test numbers.
但是,如果我们想从刚才描述的角度评估应用程序,我们必须做出几个假设。执行负载测试时,它通常比我们将在本节中执行的要复杂得多。当应用程序被请求轰炸时,并非所有请求都会成功。如果只有极小比例的响应失败,我们可以说测试成功通过。特别是,我们通常将结果的第 95 或第 98 百分位数作为得出测试数字的统计量。
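
As a concrete illustration of this percentile statistic (a simple nearest-rank sketch, not k6's internal algorithm, with made-up sample data), the 95th percentile of a set of response times can be computed like this:

```csharp
using System;
using System.Linq;

// Hypothetical response times in milliseconds collected during a test run.
double[] durationsMs = { 120, 95, 210, 130, 480, 101, 99, 150, 175, 160 };

// Nearest-rank 95th percentile: sort, then take the value at ceil(0.95 * n) - 1.
var sorted = durationsMs.OrderBy(d => d).ToArray();
int rank = (int)Math.Ceiling(0.95 * sorted.Length) - 1;
Console.WriteLine(sorted[rank]);  // 480
```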

With this background, we can perform stepwise load testing as follows: during ramp-up, the system increases the virtual user (VU) load from 0 to 50 over about 10 seconds. Then, we keep the number of users stable for 60 seconds, and finally, ramp the load back down to zero virtual users over another 15 seconds.
在此背景下,我们可以按如下方式执行逐步负载测试:在加速阶段,系统将在约 10 秒内把虚拟用户 (VU) 负载从 0 增加到 50。然后,我们将保持用户数量稳定 60 秒,最后,在另外 15 秒内将负载降低到零个虚拟用户。

Each newly written stage of the test is expressed in the JavaScript file in the stages section. Testing is therefore conducted under a simple empirical evaluation.
测试的每个新编写阶段都表示在 JavaScript 文件的 stages 部分中。因此,测试是在简单的实证评估下进行的。

First, we create three types of responses, both for the ASP.NET Web API and minimal API:
首先,我们为 ASP.NET Web API 和最小 API 创建三种类型的响应:

• Plain text.
纯文本。
• Very small JSON data against a call – the data is static and always the same.
针对调用的非常小的 JSON 数据 – 数据是静态的,并且始终相同。

• In the third response, we send JSON data with an HTTP POST method to the API. For the Web API, we check the validation of the object, and for the minimal API, since there is no validation, we return the object as received.
在第三个响应中,我们使用 HTTP POST 方法将 JSON 数据发送到 API。对于 Web API,我们检查对象的验证,对于最小的 API,由于没有验证,我们返回接收的对象。

The following code will be used to compare the performance between the minimal API and the traditional approach:
以下代码将用于比较最小 API 和传统方法之间的性能:

Minimal API
最小 API

app.MapGet("text-plain", () => Results.Content("response"))
   .WithName("GetTextPlain");

app.MapPost("validations", (ValidationData validation) => Results.Ok(validation))
   .WithName("PostValidationData");

app.MapGet("jsons", () =>
{
    var response = new[]
    {
        new PersonData { Name = "Andrea", Surname = "Tosato", BirthDate = new DateTime(2022, 01, 01) },
        new PersonData { Name = "Emanuele", Surname = "Bartolesi", BirthDate = new DateTime(2022, 01, 01) },
        new PersonData { Name = "Marco", Surname = "Minerva", BirthDate = new DateTime(2022, 01, 01) }
    };
    return Results.Ok(response);
})
.WithName("GetJsonData");

Traditional Approach
传统方法

For the traditional approach, three distinct controllers have been designed as shown here:
对于传统方法,设计了三个不同的控制器,如下所示:

[Route("text-plain")]
[ApiController]
public class TextPlainController : ControllerBase
{
    [HttpGet]
    public IActionResult Get()
    {
        return Content("response");
    }
}

[Route("validations")]
[ApiController]
public class ValidationsController : ControllerBase
{
    [HttpPost]
    public IActionResult Post(ValidationData data)
    {
        return Ok(data);
    }
}

public class ValidationData
{
    [Required]
    public int Id { get; set; }

    [Required]
    [StringLength(100)]
    public string Description { get; set; }
}

[Route("jsons")]
[ApiController]
public class JsonsController : ControllerBase
{
    [HttpGet]
    public IActionResult Get()
    {
        var response = new[]
        {
            new PersonData { Name = "Andrea", Surname = "Tosato", BirthDate = new DateTime(2022, 01, 01) },
            new PersonData { Name = "Emanuele", Surname = "Bartolesi", BirthDate = new DateTime(2022, 01, 01) },
            new PersonData { Name = "Marco", Surname = "Minerva", BirthDate = new DateTime(2022, 01, 01) }
        };
        return Ok(response);
    }
}

public class PersonData
{
    public string Name { get; set; }
    public string Surname { get; set; }
    public DateTime BirthDate { get; set; }
}

In the next section, we will define an options object, where we are going to define the execution ramp described here. We define all clauses to consider the test satisfied. As the last step, we write the real test, which does nothing but call the HTTP endpoint using GET or POST, depending on the test.
在下一节中,我们将定义一个 options 对象,我们将在其中定义此处描述的执行斜坡。我们定义所有子句以认为满足测试。作为最后一步,我们编写真正的测试,它只使用 GET 或 POST 调用 HTTP 终端节点,具体取决于测试。

Writing k6 tests
编写 k6 测试

Let’s create a test for each case scenario that we described in the previous section:
让我们为上一节中描述的每个 case 场景创建一个测试:

import http from "k6/http";
import { check } from "k6";

export let options = {
    summaryTrendStats: ["avg", "p(95)"],
    stages: [
        // Linearly ramp up from 1 to 50 VUs during 10 seconds
        { target: 50, duration: "10s" },
        // Hold at 50 VUs for the next 1 minute
        { target: 50, duration: "1m" },
        // Linearly ramp down from 50 to 0 VUs over the last 15 seconds
        { target: 0, duration: "15s" }
    ],
    thresholds: {
        // We want the 95th percentile of all HTTP request durations to be less than 500ms
        "http_req_duration": ["p(95)<500"],
        // Thresholds based on the custom metric we defined and use to track application failures
        "check_failure_rate": [
            // Global failure rate should be less than 1%
            "rate<0.01",
            // Abort the test early if it climbs over 5%
            { threshold: "rate<=0.05", abortOnFail: true },
        ],
    },
};

export default function () {
    // Execute the HTTP GET call
    let response = http.get("http://localhost:7060/jsons");
    // check() returns false if any of the specified conditions fail
    check(response, {
        "status is 200": (r) => r.status === 200,
    });
}

In the preceding JavaScript file, we wrote the test using k6 syntax. We have defined the options, such as the evaluation threshold of the test, the parameters to be measured, and the stages that the test should simulate. Once we have defined the options of the test, we just have to write the code to call the APIs that interest us – in our case, we have defined three tests to call the three endpoints that we want to evaluate.
在上面的 JavaScript 文件中,我们使用 k6 语法编写了测试。我们已经定义了选项,例如测试的评估阈值、要测量的参数以及测试应模拟的阶段。定义测试选项后,我们只需编写代码来调用我们感兴趣的 API – 在我们的例子中,我们已经定义了三个测试来调用我们想要评估的三个端点。

Running a k6 performance test
运行 k6 性能测试

Now that we have written the code to test the performance, let’s run the test and generate the statistics of the tests.
现在我们已经编写了代码来测试性能,让我们运行测试并生成测试的统计信息。

We will report all the general statistics of the collected tests:
我们将报告所收集测试的所有一般统计数据:

  1. First, we need to start the web applications to run the load test. Let’s start with both the ASP.NET Web API application and the minimal API application. We expose the URLs, both the HTTPS and HTTP protocols.
    首先,我们需要启动 Web 应用程序以运行负载测试。让我们从 ASP.NET Web API 应用程序和最小 API 应用程序开始。我们公开 URL,包括 HTTPS 和 HTTP 协议。

  2. Move the shell to the root folder and run the following two commands in two different shells:
    将 shell 移动到根文件夹,并在两个不同的 shell 中运行以下两个命令:

    dotnet .\MinimalAPI.Sample\bin\Release\net6.0\MinimalAPI.Sample.dll --urls="https://localhost:7059/;http://localhost:7060/"
    dotnet .\ControllerAPI.Sample\bin\Release\net6.0\ControllerAPI.Sample.dll --urls="https://localhost:7149/;http://localhost:7150/"
  3. Now, we just have to run the three test files for each project.
    现在,我们只需要为每个项目运行三个测试文件。

• This one is for the controller-based Web API:
此命令用于基于控制器的 Web API:
k6 run .\K6\Controllers\json.js --summary-export=.\K6\results\controller-json.json

• This one is for the minimal API:
此命令用于最小 API:
k6 run .\K6\Minimal\json.js --summary-export=.\K6\results\minimal-json.json

Here are the results.
以下是结果。

For the test in traditional development mode with a plain-text content type, the number of requests served per second is 1,547:
对于纯文本内容类型的传统开发模式下的测试,每秒提供的请求数为 1547:

Figure 10.1 – The load test for a controller-based API and plain text
图 10.1 – 基于控制器的 API 和纯文本的负载测试

For the test in traditional development mode with a json content type, the number of requests served per second is 1,614:
对于传统开发模式下的 json 内容类型的测试,每秒提供的请求数为 1614:

Figure 10.2 – The load test for a controller-based API and JSON result
图 10.2 – 基于控制器的 API 和 JSON 结果的负载测试

For the test in traditional development mode with a json content type and model validation, the number of requests served per second is 1,602:
对于传统开发模式下的 json 内容类型和模型验证的测试,每秒提供的请求数为 1602:

Figure 10.3 – The load test for a controller-based API and validation payload
图 10.3 – 基于控制器的 API 和验证有效负载的负载测试

For the test in minimal API development mode with a plain-text content type, the number of requests served per second is 2,285:
对于在纯文本内容类型的最小 API 开发模式下的测试,每秒提供的请求数为 2285:

Figure 10.4 – The load test for a minimal API and plain text
图 10.4 – 最小 API 和纯文本的负载测试

For the test in minimal API development mode with a json content type, the number of requests served per second is 2,030:
对于在 json 内容类型的最小 API 开发模式下的测试,每秒提供的请求数为 2030:

Figure 10.5 – The load test for a minimal API and JSON result
图 10.5 – 最小 API 和 JSON 结果的负载测试

For the test in minimal API development mode with a json content type with model validation, the number of requests served per second is 2,070:
对于在最小 API 开发模式下使用具有模型验证的 json 内容类型的测试,每秒提供的请求数为 2070:

Figure 10.6 – The load test for a minimal API and no validation payload
图 10.6 – 最小 API 且无验证有效负载的负载测试

In the following image, we show a comparison of the three tested functionalities, reporting the number of requests served with the same functionality:
在下图中,我们显示了三个测试功能的比较,报告了使用相同功能提供的请求数:

Figure 10.7 – The performance results
图 10.7 – 性能结果

As we might have expected, minimal APIs are much faster than controller-based web APIs.
正如我们所料,最小的 API 比基于控制器的 Web API 快得多。

The difference is approximately 30%, and that’s no small feat.
差异约为 30%,这可不是一件小事。

Obviously, as previously mentioned, minimal APIs have features missing in order to optimize performance, the most striking being data validation.
显然,如前所述,为了优化性能,最小的 API 缺少一些功能,最引人注目的是数据验证。

In the example, the payload is very small, and the differences are not very noticeable.
在此示例中,有效负载非常小,差异不是很明显。

As the payload and validation rules grow, the difference in speed between the two frameworks will only increase.
随着有效负载和验证规则的增长,两个框架之间的速度差异只会增加。

We have seen how to measure performance with a load testing tool and then evaluate how many requests it can serve per second with the same number of machines and users connected.
我们已经了解了如何使用负载测试工具测量性能,然后评估在连接相同数量的机器和用户的情况下,它每秒可以处理多少个请求。

We can also use other tools to understand how minimal APIs have had a strong positive impact on performance.
我们还可以使用其他工具来了解最少的 API 如何对性能产生强大的积极影响。

Benchmarking minimal APIs with BenchmarkDotNet
使用 BenchmarkDotNet 对最小 API 进行基准测试

BenchmarkDotNet is a framework that allows you to measure written code and compare performance between libraries written in different versions or compiled with different .NET frameworks.
BenchmarkDotNet 是一个框架,可用于测量编写的代码,并比较以不同版本编写或使用不同 .NET 框架编译的库之间的性能。

This tool is used for calculating the time taken for the execution of a task, the memory used, and many other parameters.
此工具用于计算执行任务所花费的时间、使用的内存和许多其他参数。

Our case is a very simple scenario. We want to compare the response times of two applications written to the same version of the .NET Framework.
我们的情况非常简单。我们想要比较写入同一版本的 .NET Framework 的两个应用程序的响应时间。

How do we perform this comparison? We take an HttpClient object and start calling the methods that we have also defined for the load testing case.
我们如何进行这种比较?我们获取一个 HttpClient 对象,并开始调用我们也为负载测试案例定义的方法。

We will therefore obtain a comparison between two methods that exploit the same HttpClient object and recall methods with the same functionality, but one is written with the ASP.NET Web API and the traditional controllers, while the other is written using minimal APIs.
因此,我们将比较两种利用相同 HttpClient 对象和调用具有相同功能的方法,但一种是使用 ASP.NET Web API 和传统控制器编写的,而另一种是使用最少的 API 编写的。

BenchmarkDotNet helps you to transform methods into benchmarks, track their performance, and share reproducible measurement experiments.
BenchmarkDotNet 可帮助您将方法转换为基准测试,跟踪其性能,并共享可重现的测量实验。

Under the hood, it performs a lot of magic that guarantees reliable and precise results thanks to the perfolizer statistical engine. BenchmarkDotNet protects you from popular benchmarking mistakes and warns you if something is wrong with your benchmark design or obtained measurements. The library has been adopted by over 6,800 projects, including .NET Runtime, and is supported by the .NET Foundation (https://benchmarkdotnet.org/).
在引擎盖下,它执行了很多魔力,由于 perfolizer 统计引擎,保证了可靠和精确的结果。BenchmarkDotNet 可保护您免受常见的基准测试错误的影响,并在基准测试设计或获得的测量值出现问题时向您发出警告。该库已被 6,800 多个项目采用,包括 .NET Runtime,并得到 .NET Foundation (https://benchmarkdotnet.org/) 的支持。

Running BenchmarkDotNet
运行 BenchmarkDotNet

We will write a class that represents all the methods for calling the APIs of the two web applications. Let’s make the most of the startup feature and prepare the objects we will send via POST. The function marked as [GlobalSetup] is not computed during runtime, and this helps us calculate exactly how long it takes between the call and the response from the web application:
我们将编写一个类,该类表示用于调用两个 Web 应用程序的 API 的所有方法。让我们充分利用启动功能并准备将通过 POST 发送的对象。标记为 [GlobalSetup] 的函数在运行时不会计算,这有助于我们准确计算调用和 Web 应用程序的响应之间需要多长时间:

  1. Register all the classes in Program.cs that implement BenchmarkDotNet:
    在 Program.cs 中注册所有实现 BenchmarkDotNet 的类:

    BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args);

In the preceding snippet, we have registered the current assembly that implements all the functions that will be needed to be evaluated in the performance calculation. The methods marked with [Benchmark] will be executed over and over again to establish the average execution time.
在前面的代码段中,我们注册了当前程序集,该程序集实现了在性能计算中需要评估的所有函数。标有 [Benchmark] 的方法将一遍又一遍地执行,以确定平均执行时间。

  2. The application must be compiled in Release mode, ideally in the production environment:
    应用程序必须以 Release 模式编译,最好是在生产环境中编译:

    namespace DotNetBenchmarkRunners
    {
        [SimpleJob(RuntimeMoniker.Net60, baseline: true)]
        [JsonExporter]
        public class Performances
        {
            private readonly HttpClient clientMinimal = new HttpClient();
            private readonly HttpClient clientControllers = new HttpClient();
            private readonly ValidationData data = new ValidationData()
            {
                Id = 1,
                Description = "Performance"
            };

            [GlobalSetup]
            public void Setup()
            {
                clientMinimal.BaseAddress = new Uri("https://localhost:7059");
                clientControllers.BaseAddress = new Uri("https://localhost:7149");
            }

            [Benchmark]
            public async Task Minimal_Json_Get() => await clientMinimal.GetAsync("/jsons");

            [Benchmark]
            public async Task Controller_Json_Get() => await clientControllers.GetAsync("/jsons");

            [Benchmark]
            public async Task Minimal_TextPlain_Get() => await clientMinimal.GetAsync("/text-plain");

            [Benchmark]
            public async Task Controller_TextPlain_Get() => await clientControllers.GetAsync("/text-plain");

            [Benchmark]
            public async Task Minimal_Validation_Post() => await clientMinimal.PostAsJsonAsync("/validations", data);

            [Benchmark]
            public async Task Controller_Validation_Post() => await clientControllers.PostAsJsonAsync("/validations", data);
        }

        public class ValidationData
        {
            public int Id { get; set; }
            public string Description { get; set; }
        }
    }
  3. Before launching the benchmark application, launch the web applications:
    在启动基准测试应用程序之前,请启动 Web 应用程序:

Minimal API application
最小 API 应用程序

dotnet .\MinimalAPI.Sample\bin\Release\net6.0\MinimalAPI.Sample.dll --urls="https://localhost:7059/;http://localhost:7060/"

Controller-based application
基于控制器的应用程序

dotnet .\ControllerAPI.Sample\bin\Release\net6.0\ControllerAPI.Sample.dll --urls="https://localhost:7149/;http://localhost:7150/"

By launching these applications, various steps will be performed and a summary report will be extracted with the timelines that we report here:
通过启动这些应用程序,将执行各种步骤,并提取一份摘要报告,其中包含我们在此处报告的时间表:

dotnet .\DotNetBenchmarkRunners\bin\Release\net6.0\DotNetBenchmarkRunners.dll --filter *

For each method performed, the average value or the average execution time is reported.
对于执行的每种方法,都会报告平均值或平均执行时间。

Table 10.1 – Benchmark HTTP requests for minimal APIs and controllers
表 10.1 – 针对最小 API 和控制器的 HTTP 请求进行基准测试

In the following table, Error denotes how much the average value may vary due to measurement error. Finally, the standard deviation (StdDev) indicates the deviation from the mean value. The times are given in μs and are therefore far too small to measure empirically without tooling such as the one just described.
在下表中,Error 表示平均值可能因测量误差而变化的程度。最后,标准差 (StdDev) 表示与平均值的偏差。时间以 μs 为单位,因此如果没有刚才介绍的这类工具,很难通过经验直接测量。

Summary
总结

In the chapter, we compared the performance of minimal APIs with that of the traditional approach by using two very different methods.
在本章中,我们使用两种截然不同的方法比较了最小 API 的性能与传统方法的性能。

Minimal APIs were not designed for performance alone and evaluating them solely on that basis is a poor starting point.
最小的 API 不仅仅是为了性能而设计的,仅根据该基础评估它们是一个糟糕的起点。

Table 10.1 indicates that there are a lot of differences between the responses of minimal APIs and that of traditional ASP.NET Web API applications.
表 10.1 表明,最小 API 的响应与传统的 ASP.NET Web API 应用程序的响应之间存在很多差异。

The tests were conducted on the same machine with the same resources. We found that minimal APIs performed about 30% better than the traditional framework.
测试是在同一台机器上以相同的资源进行的。我们发现,minimal API 的性能比传统框架高出约 30%。

We have learned about how to measure the speed of our applications – this can be useful for understanding whether the application will hold the load and what response time it can offer. We can also leverage this on small portions of critical code.
我们已经了解了如何测量应用程序的速度 – 这对于了解应用程序是否能够承受负载以及它可以提供多少响应时间非常有用。我们还可以将它用于关键代码的一小部分。

As a final note, the applications tested were practically bare bones. The validation part that should be evaluated in the ASP.NET Web API application is almost irrelevant since there are only two fields to consider. The gap between the two frameworks increases as the number of components that have been eliminated in the minimal APIs that we have already described increases.
最后要注意的是,测试的应用程序几乎是裸露的。应在 ASP.NET Web API 应用程序中评估的验证部分几乎无关紧要,因为只有两个字段需要考虑。随着我们已经描述的最小 API 中已删除的组件数量的增加,这两个框架之间的差距也会增加。


C# Database Repository Patterns

When designing database access in C#, the following patterns can be used to improve the maintainability, extensibility, and performance of your code:

1. Entity Framework

Entity Framework (EF) is an object-relational mapping (ORM) framework that lets developers represent database data with .NET objects. With EF, database tables are mapped to C# classes, which simplifies database operations.

// A C# class that maps to a database table
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

// An Entity Framework context class that manages database operations
public class MyDbContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlServer("YourConnectionStringHere");
    }
}

2. Repository Pattern

The repository pattern is a design pattern that separates data access logic from application code. With a repository, you can change how data is stored without modifying the application code that uses it.

// Define a repository interface
public interface IProductRepository
{
    IEnumerable<Product> GetAll();
    Product GetById(int id);
    void Add(Product product);
    void Update(Product product);
    void Delete(int id);
}

// Implement the repository interface
public class ProductRepository : IProductRepository
{
    private readonly MyDbContext _context;

    public ProductRepository(MyDbContext context)
    {
        _context = context;
    }

    public IEnumerable<Product> GetAll()
    {
        return _context.Products.ToList();
    }

    // Implementations of the remaining methods...
}

3. Unit of Work Pattern

The unit of work pattern manages transactions, ensuring that a group of operations either all succeed or all fail. Using it makes database transactions easier to handle.

public class UnitOfWork : IDisposable
{
    private readonly MyDbContext _context;
    private IProductRepository _productRepository;

    public UnitOfWork(MyDbContext context)
    {
        _context = context;
    }

    public IProductRepository ProductRepository
    {
        get
        {
            if (_productRepository == null)
            {
                _productRepository = new ProductRepository(_context);
            }
            return _productRepository;
        }
    }

    public void Save()
    {
        _context.SaveChanges();
    }

    // 实现IDisposable接口...
}
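A short usage sketch of the classes above may help; the product values here are invented for illustration.

```csharp
// Sketch: grouping two inserts through the UnitOfWork defined above,
// so they are persisted together by a single Save() call.
using (var context = new MyDbContext())
{
    var unitOfWork = new UnitOfWork(context);

    unitOfWork.ProductRepository.Add(new Product { Name = "Widget", Price = 9.99m });
    unitOfWork.ProductRepository.Add(new Product { Name = "Gadget", Price = 19.99m });

    // Both inserts go to the database in one SaveChanges call
    unitOfWork.Save();
}
```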

4. Service Layer Pattern

The service layer pattern separates business logic from data access code, making the business logic easier to test and maintain.

public class ProductService
{
    private readonly IProductRepository _productRepository;

    public ProductService(IProductRepository productRepository)
    {
        _productRepository = productRepository;
    }

    public IEnumerable<Product> GetAllProducts()
    {
        return _productRepository.GetAll();
    }

    // Implementations of the remaining business-logic methods...
}

By applying these design patterns, you can better organize and manage database access in C# and improve the maintainability and extensibility of your code.

C# database design patterns play an important role in software development and database management. Some of their main uses are:

  1. Code reuse: Design patterns provide a reusable solution framework that helps developers avoid writing the same code over and over. Using them, developers can build applications faster and reduce maintenance costs.
  2. Higher code quality: Design patterns follow established programming conventions and best practices, helping developers write more robust, readable, and maintainable code. This improves the quality and stability of the whole system.
  3. Reduced system complexity: Database design patterns offer a structured way to organize and manage the data in a database. With them, developers can better understand and handle complex database structures, lowering the system's complexity.
  4. Better extensibility: Design patterns usually take extensibility into account, letting developers add new features or modify existing ones easily in the future. This helps the software adapt to changing requirements and business scenarios.
  5. Better team collaboration: Design patterns provide a common vocabulary and framework that helps developers communicate and collaborate. When every team member follows the same patterns and conventions, it becomes easier to understand each other's code and to avoid misunderstandings and conflicts.

In C#, common database-related design patterns include the singleton, factory, and observer patterns. They can be applied in different scenarios, for example creating database connections, managing database transactions, or implementing data binding.

Note that although design patterns provide a useful framework and guiding principles, not every situation calls for them. Developers should decide whether and how to apply them based on the project's specific requirements and the team's conventions.
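As a small illustration of the singleton pattern mentioned above, here is a hedged sketch of a thread-safe singleton holding shared connection settings; the class and property names are invented for this example.

```csharp
using System;

// Sketch: a thread-safe singleton for shared connection settings.
public sealed class ConnectionSettings
{
    // Lazy<T> guarantees the instance is created exactly once, thread-safely
    private static readonly Lazy<ConnectionSettings> _instance =
        new Lazy<ConnectionSettings>(() => new ConnectionSettings());

    public static ConnectionSettings Instance => _instance.Value;

    public string ConnectionString { get; } = "YourConnectionStringHere";

    private ConnectionSettings() { } // private constructor prevents outside creation
}
```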

When designing database access in C#, choosing the right design patterns is essential for maintainable, extensible, and performant code. The following suggestions can help:

  1. Single Responsibility Principle (SRP): Each class should have only one reason to change. For database schemas, this means each table should represent a single logical entity and contain only data directly related to that entity.
  2. Open/Closed Principle (OCP): Software entities (classes, modules, functions, and so on) should be open for extension but closed for modification. When new functionality is needed, add new code rather than changing existing code. In database design this can mean extending behavior through views, stored procedures, or triggers instead of modifying existing table structures.
  3. Dependency Inversion Principle (DIP): High-level modules should not depend on low-level modules; both should depend on abstractions. In data access code, define the access logic behind interfaces or abstract classes rather than depending directly on a concrete database implementation.
  4. Entity-Relationship model (ER model): The ER model describes real-world entities and the relationships between them. In C#, ORM tools such as Entity Framework can map an ER model onto the database.
  5. Normalization: Normalization reduces data redundancy and improves consistency. Follow normalization principles and split data into multiple related tables to avoid redundancy and insert, update, and delete anomalies.
  6. Index optimization: Indexes are key to query performance. Create and use them sensibly, based on your query patterns.
  7. Security: Consider data security, including access control, encryption, and backups.
  8. Maintainability and extensibility: Use clear, concise naming conventions, comments, and documentation so other developers can understand and maintain the code, and design with likely future requirements in mind so the schema can be extended.

In short, when choosing database design patterns, weigh the project's concrete requirements, performance needs, security considerations, and maintainability and extensibility. Following the principles above will help you design efficient, reliable, and maintainable database schemas.

Following a few best practices when designing database access in C# will help you build efficient, maintainable, and extensible systems:

  1. Normalization: Make sure your database design follows normalization principles to reduce redundancy and improve data integrity. This usually means splitting data into multiple related tables connected by primary and foreign keys.
  2. Use an ORM tool: ORM tools such as Entity Framework or Dapper map database tables onto C# objects, making database operations more intuitive and easier to manage. They also handle many routine tasks, such as CRUD operations and query generation.
  3. Layered architecture: Use a layered architecture (such as MVC, MVVM, or Clean Architecture) to organize your code. Separating concerns makes the code easier to test and maintain; the data access layer usually sits at the bottom and handles all interaction with the database.
  4. Stored procedures and functions: These can encapsulate complex database logic, improving reusability and performance, and can make database operations safer and more controllable.
  5. Follow design principles: Basic principles such as SRP, OCP, and DIP help you build more flexible, extensible, and maintainable systems.
  6. Optimize query performance: Keep queries efficient and avoid loading and processing unnecessary data. Use indexes, paging, and caching to improve performance.
  7. Use transactions: Transactions guarantee data integrity and consistency. When several related operations must run together, a transaction ensures they all succeed or all fail.
  8. Plan for growth: Design the database so new tables, columns, and relationships can be added easily as the business evolves.
  9. Document: Record table structures, relationships, stored procedures, and functions so other developers can understand and use the design.
  10. Test: Write unit and integration tests to verify the design is correct and reliable, catching problems early in development.

In short, precise database design takes many factors into account: normalization, ORM tooling, layered architecture, stored procedures and functions, design principles, query performance, transactions, extensibility, documentation, and testing. Following these best practices will help you build an efficient, maintainable, and extensible C# database system.
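To make the transaction point (item 7) concrete, here is a minimal sketch using an EF Core explicit transaction with the MyDbContext and Product types from the earlier examples; the product values are invented.

```csharp
// Sketch: two related writes either both succeed or both roll back.
using (var context = new MyDbContext())
using (var transaction = context.Database.BeginTransaction())
{
    try
    {
        context.Products.Add(new Product { Name = "Widget", Price = 9.99m });
        context.SaveChanges();

        context.Products.Add(new Product { Name = "Gadget", Price = 19.99m });
        context.SaveChanges();

        transaction.Commit(); // commit only if every operation succeeded
    }
    catch
    {
        transaction.Rollback(); // any failure undoes both inserts
        throw;
    }
}
```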

Understanding and Getting Started with Dependency Injection

Understanding and Getting Started with Dependency Injection

Overview:

  1. What is dependency injection?

  2. Pseudocode demo
    • Dependency injection favors interfaces over classes

  3. C# code demo
    • Version one: the most direct approach
    • Version two: a service started automatically with the Host

  4. Instantiation strategies for dependencies: transient / singleton / scoped
    • A C# demo of the scoped strategy

1. What is dependency injection?

A definition from the encyclopedia:
In software engineering, dependency injection (DI) is a software design pattern and one technique for implementing inversion of control. The pattern lets an object receive the other objects it depends on. A "dependency" is an object the receiver needs; "injection" is the process of passing that dependency to the receiver, and only after injection does the receiver call the dependency. The pattern ensures that an object wanting to use a given service does not need to know how to construct it; instead, external code that the receiver is unaware of supplies the services the receiver needs.

Breaking that down:

  1. Dependency injection is a software design pattern
  2. It makes inversion of control easy to implement (explained later)
  3. It is typically used in object-oriented languages: if object A's code calls object B, then B is a "dependency" of A
  4. "Injection" is the process of passing object B to object A
  5. The injection process relies on a piece of "external code", which we will call the dependency injection system

2. Pseudocode examples

Without dependency injection

Without dependency injection, you must actively create or obtain object B somewhere in your code.

public class ClassA
{
    private readonly ClassB _classB;

    public ClassA()
    {
        _classB = new ClassB(); // actively create object B
    }

    public void Process()
    {
        _classB.DoSomething();
        /// ...
    }
}

public class ClassB
{
    public void DoSomething()
    {
        /// ...
    }
}

With dependency injection (using C# as an example)

public class ClassA
{
    private readonly ClassB _classB;

    public ClassA(ClassB classB) // declare that the constructor needs a ClassB; the DI framework injects it automatically
    {
        _classB = classB;
    }

    public void Process()
    {
        _classB.DoSomething();
        // ...
    }
}

public class ClassB
{
    public void DoSomething()
    {
        // ...
    }
}

// Pseudocode: register ClassA and ClassB with the dependency injection system
DependencyInjectionSystem.AddType(ClassA);
DependencyInjectionSystem.AddType(ClassB);

• With dependency injection, you no longer create or obtain object B yourself; instead, you simply declare a ClassB parameter in the constructor. (In other languages' implementations, this may be done with annotations or special modifiers.)
• The pseudocode invents a DependencyInjectionSystem to stand in for the dependency injection system
• The relevant classes must be registered with the dependency injection system

A reader might ask: "So do I need to write new ClassA(new ClassB()) in my business code to instantiate ClassA?"

No. Once dependency injection is in place, you will hardly see new anywhere in your business code; both the ClassA and ClassB objects seem to appear out of thin air. In fact, the dependency injection system creates them automatically while the program runs.

Favor interfaces over classes

When using dependency injection, a "dependency" is more often best expressed as an interface. Let's improve the example above.

public class ClassA
{
    private readonly IInterfaceB _b;

    public ClassA(IInterfaceB b)
    {
        _b = b;
    }

    public void Process()
    {
        _b.DoSomething();
        //...
    }
}

public class ClassB : IInterfaceB
{
    public void DoSomething()
    {
        Console.WriteLine("class B is doing something ...");
    }
}

public interface IInterfaceB
{
    void DoSomething();
}

// Pseudocode: register ClassA and ClassB with the dependency injection system
DependencyInjectionSystem.AddType(ClassA);
DependencyInjectionSystem.AddType(IInterfaceB, ClassB);

• A new interface, IInterfaceB, is added
• ClassB implements this interface
• ClassA declares a reference to the interface IInterfaceB instead of referencing the class ClassB directly
• When registering ClassB, the corresponding interface IInterfaceB must be specified

Advantage: the interface is decoupled from its implementation. ClassA does not need to know which implementation it will eventually receive; it only needs to know that IInterfaceB has a method named DoSomething() and to call it. Today it may be given a ClassB instance; in the future it might be an improved ClassB2 instance or a brand-new ClassX instance, yet ClassA never needs to change.

Originally, ClassA would decide which ClassB instance (IInterfaceB implementation) to use. With dependency injection, that control (that decision) is reversed and handed to the outside; this is what "inversion of control" means.

Encouraging interface-based design also guides us toward code that satisfies the Liskov Substitution Principle (see the Baidu Baike entry on the Liskov Substitution Principle).
The Liskov Substitution Principle (LSP) is one of the basic principles of object-oriented design. It states that wherever a base class can appear, a subclass must also be able to appear. LSP is the cornerstone of reuse through inheritance: only when a derived class can replace its base class without affecting the behavior of the software can the base class truly be reused, and only then can the derived class add new behavior on top of it. LSP complements the open/closed principle: the key step in achieving open/closed is abstraction, the inheritance relationship between base and derived classes is a concrete realization of that abstraction, and LSP formalizes the steps for implementing it.
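As a quick, hypothetical illustration of substitution (the types below are invented for this example):

```csharp
using System;

// Sketch: code written against a base type must work unchanged
// with any well-behaved derived type.
public class Logger
{
    public virtual void Log(string message) => Console.WriteLine(message);
}

public class TimestampedLogger : Logger
{
    public override void Log(string message) =>
        Console.WriteLine($"{DateTime.UtcNow:O} {message}");
}

public static class Demo
{
    // This method only knows about the base type...
    public static void Report(Logger logger) => logger.Log("report generated");

    public static void Main()
    {
        Report(new Logger());            // works with the base class
        Report(new TimestampedLogger()); // ...and with any substitutable subclass
    }
}
```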

3. C# code demo

So much for theory; now let's look at code that actually runs (in C#).

//  .csproj
<Project Sdk="Microsoft.NET.Sdk">

	<PropertyGroup>
		<OutputType>Exe</OutputType>
		<TargetFramework>net6.0</TargetFramework>
		<ImplicitUsings>enable</ImplicitUsings>
		<Nullable>enable</Nullable>
	</PropertyGroup>

	<ItemGroup>
		<PackageReference Include="Microsoft.Extensions.Hosting" Version="6.0.1" />
		<PackageReference Include="Microsoft.Extensions.Hosting.Abstractions" Version="6.0.0" />
	</ItemGroup>

</Project>

ClassA.cs

namespace ConsoleDIApp.demo
{
    public class ClassA
    {
        private readonly IInterfaceB _b;

        public ClassA(IInterfaceB b)
        {
            _b = b;
        }

        public void Process()
        {
            Console.WriteLine("Class A start process");
            _b.DoSomething();
            Console.WriteLine("Class A finish process");
        }
    }
}

ClassB.cs

namespace ConsoleDIApp.demo
{
    public class ClassB : IInterfaceB
    {
        public void DoSomething()
        {
            Console.WriteLine("class B is doing something ...");
        }
    }
}

IInterfaceB.cs

namespace ConsoleDIApp.demo
{
    public interface IInterfaceB
    {
        void DoSomething();
    }
}

Program.cs

using ConsoleDIApp.demo;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace ConsoleDIApp
{
    internal class Program
    {
        public static void Main(string[] args)
        {
            IHost host = Host.CreateDefaultBuilder(args)
                                   .ConfigureServices((context, services) =>
                                   {
                                       services.AddSingleton<ClassA>();
                                       services.AddSingleton<IInterfaceB, ClassB>();
                                   })
                                   .Build();

            ClassA a = host.Services.GetRequiredService<ClassA>();
            a.Process();
        }
    }
}

Only about seven lines of this program actually relate to the dependency injection system. Running the code produces:

Class A start process
class B is doing something ...
Class A finish process

Notes:

• The Host and the Services inside it can be loosely understood as the dependency injection system that C# provides
• GetRequiredService returns the desired instance (here, a ClassA instance)
• The output of calling Process() on the ClassA instance a confirms that a ClassB instance was indeed created automatically

Problem: the logical entry point of this program is simply running Process() on a ClassA instance, which looks rather clumsy in the code above.

Is there a more elegant way? There is!

Runnable version two
Add a new class, HostedService.cs

using ConsoleDIApp.demo;
using Microsoft.Extensions.Hosting;

namespace ConsoleDIApp
{
    public class HostedService : IHostedService
    {
        private readonly ClassA _a;

        public HostedService(ClassA a)
        {
            _a = a;
        }

        public Task StartAsync(CancellationToken cancellationToken)
        {
            _a.Process();
            return Task.CompletedTask;
        }

        public Task StopAsync(CancellationToken cancellationToken)
        {
            return Task.CompletedTask;
        }
    }
}

Modify Program.cs

namespace ConsoleDIApp
{
    internal class Program
    {
        public static async Task Main(string[] args)
        {
            IHost host = Host.CreateDefaultBuilder(args)
                                   .ConfigureServices((context, services) =>
                                   {
                                       services.AddSingleton<ClassA>();
                                       services.AddSingleton<IInterfaceB, ClassB>();

                                       services.AddHostedService<HostedService>();
                                   })
                                   .Build();

            await host.RunAsync();
        }
    }
}

Running it gives the same result.

Notes:
• IHostedService is a special interface: a class that implements it and is registered with the Host services runs automatically when the Host runs
• services.AddHostedService<HostedService>(); registers the class that should run automatically
• await host.RunAsync(); runs the Host instance

The complete code above

using ConsoleDIApp.demo;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace ConsoleDIApp
{
    internal class Program
    {
        public static async Task Main(string[] args)
        {
            IHost host = Host.CreateDefaultBuilder(args)
                                   .ConfigureServices((context, services) =>
                                   {
                                       services.AddSingleton<ClassA>();
                                       services.AddSingleton<IInterfaceB, ClassB>();

                                       services.AddHostedService<HostedService>();
                                   })
                                   .Build();

            await host.RunAsync();
        }
    }
}

namespace ConsoleDIApp.demo
{
    public class ClassA
    {
        private readonly IInterfaceB _b;

        public ClassA(IInterfaceB b)
        {
            _b = b;
        }

        public void Process()
        {
            Console.WriteLine("Class A start process");
            _b.DoSomething();
            Console.WriteLine("Class A finish process");
        }
    }

}

namespace ConsoleDIApp.demo
{
    public class ClassB : IInterfaceB
    {
        public void DoSomething()
        {
            Console.WriteLine("class B is doing something ...");
        }
    }
}

namespace ConsoleDIApp.demo
{
    public interface IInterfaceB
    {
        void DoSomething();
    }
}

namespace ConsoleDIApp
{
    public class HostedService : IHostedService
    {
        private readonly ClassA _a;

        public HostedService(ClassA a)
        {
            _a = a;
        }

        public Task StartAsync(CancellationToken cancellationToken)
        {
            _a.Process();
            return Task.CompletedTask;
        }

        public Task StopAsync(CancellationToken cancellationToken)
        {
            return Task.CompletedTask;
        }
    }
}

4. Instantiation strategies for dependencies

The ClassB shown above is too simple: it has no internal state, so whether a fresh instance is created for every injection or the same instance is injected every time makes no difference to the program.

But for more complex classes, such as Config (holding the application's configuration) or UserPreference (holding the current user's preferences), you need to choose an appropriate instantiation strategy.

Clearly, for most programs, Config only needs to be instantiated once; creating it repeatedly just wastes resources. UserPreference, on the other hand, should be instantiated neither every time nor only once, but once per user: all activity of the same user should map to the same UserPreference instance.

Instantiation strategies for dependencies fall into three types:

• Create a brand-new instance before every injection
• Create only one instance per class and inject that same instance every time
• Create one instance per scope, where a scope is user-defined, for example a single HTTP request or all visits by the same user

C# has a corresponding API for each:

services.AddTransient<IInterfaceB, ClassB>(); // a new instance every time
services.AddSingleton<IInterfaceB, ClassB>(); // a single shared instance
services.AddScoped<IInterfaceB, ClassB>(); // one instance per scope

(services is of type Microsoft.Extensions.DependencyInjection.IServiceCollection)

How to obtain scoped instances
• IServiceProvider.CreateScope() creates an IServiceScope instance.
• serviceScope.ServiceProvider is itself an IServiceProvider instance.
• Scoped instances can only be obtained the way "runnable version one" did it: by explicitly calling IServiceProvider.GetRequiredService().
• Resolving an instance is usually a chained process: if a dependency along the chain is scoped, the scope's unique instance is returned; otherwise (AddTransient or AddSingleton), the dependency is instantiated according to its own strategy.

Putting these points together, the core code is:

IServiceProvider serviceProvider = host.Services;
IServiceScope serviceScope = serviceProvider.CreateScope();
IServiceProvider scopedServiceProvider = serviceScope.ServiceProvider;
ClassA a = scopedServiceProvider.GetRequiredService<ClassA>();

If this feels confusing, don't worry; run the sample code below and explore how it behaves.
ClassA.cs

namespace ConsoleDIApp.demo
{
    public class ClassA
    {
        private readonly IInterfaceB _b;
        private readonly long _id;

        public ClassA(IInterfaceB b)
        {
            _b = b;
            _id = DateTime.UtcNow.Ticks; // use the id to tell instances apart
        }

        public void Process()
        {
            Console.WriteLine($"[{_id}]: Class A start process");
            _b.DoSomething();
            Console.WriteLine("Class A finish process");
        }
    }
}

ClassB.cs

namespace ConsoleDIApp.demo
{
    public class ClassB : IInterfaceB
    {
        private readonly long _id;

        public ClassB()
        {
            _id = DateTime.UtcNow.Ticks; // use the id to tell instances apart
        }

        public void DoSomething()
        {
            Console.WriteLine($"[{_id}]: class B is doing something ...");
        }
    }
}

IInterfaceB.cs

namespace ConsoleDIApp.demo
{
    public interface IInterfaceB
    {
        void DoSomething();
    }
}

HostedService.cs

using ConsoleDIApp.demo;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace ConsoleDIApp
{
    public class HostedService : IHostedService
    {
        private readonly IServiceProvider _services;

        // a collection mapping user names to IServiceScope instances
        private Dictionary<string, IServiceScope> _scopes = new();

        public HostedService(IServiceProvider a)
        {
            _services = a;
        }

        public Task StartAsync(CancellationToken cancellationToken)
        {
            while (true)
            {
                Console.WriteLine("Input User Name: ");
                var user = Console.ReadLine();
                if (user == null || user == "q")
                {
                    break; // exit the loop
                }
                if (!_scopes.ContainsKey(user))
                {
                    // for a new user, create a new IServiceScope instance
                    IServiceScope newServiceScope = _services.CreateScope();
                    _scopes.Add(user, newServiceScope);
                }

                IServiceScope serviceScope = _scopes[user];
                IServiceProvider serviceProvider = serviceScope.ServiceProvider;
                ClassA a = serviceProvider.GetRequiredService<ClassA>();
                a.Process();
            }
            return Task.CompletedTask;
        }

        public Task StopAsync(CancellationToken cancellationToken)
        {
            return Task.CompletedTask;
        }
    }
}

Program.cs

        public static async Task Main(string[] args)
        {
            IHost host = Host.CreateDefaultBuilder(args)
                                   .ConfigureServices((context, services) =>
                                   {
                                       // a new ClassA instance each time
                                       services.AddTransient<ClassA>();

                                       // the same ClassB instance is shared within a scope
                                       services.AddScoped<IInterfaceB, ClassB>();
                                       
                                       services.AddHostedService<HostedService>();
                                   })
                                   .Build();



            await host.RunAsync();
        }

Sample run:

Input User Name:
tom
[638831071897299805]: Class A start process
[638831071897292850]: class B is doing something ...
Class A finish process
Input User Name:
jerry
[638831071933082335]: Class A start process
[638831071933082295]: class B is doing something ...
Class A finish process
Input User Name:

Things to notice:

• Each ClassA has a different id (because of AddTransient)
• tom's ClassB id is always the same, and jerry's ClassB id is always the same, showing that AddScoped behaves as expected

The complete code for this part

using ConsoleDIApp;
using ConsoleDIApp.demo;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace ConsoleDIApp.demo
{
    internal class Program
    {
        public static async Task Main(string[] args)
        {
            IHost host = Host.CreateDefaultBuilder(args)
                                   .ConfigureServices((context, services) =>
                                   {
                                       // a new ClassA instance each time
                                       services.AddTransient<ClassA>();

                                       // the same ClassB instance is shared within a scope
                                       services.AddScoped<IInterfaceB, ClassB>();

                                       services.AddHostedService<HostedService>();
                                   })
                                   .Build();



            await host.RunAsync();
        }
    }
}

namespace ConsoleDIApp.demo
{
    public class ClassA
    {
        private readonly IInterfaceB _b;
        private readonly long _id;

        public ClassA(IInterfaceB b)
        {
            _b = b;
            _id = DateTime.UtcNow.Ticks; // use the id to tell instances apart
        }

        public void Process()
        {
            Console.WriteLine($"[{_id}]: Class A start process");
            _b.DoSomething();
            Console.WriteLine("Class A finish process");
        }
    }
}

namespace ConsoleDIApp.demo
{
    public class ClassB : IInterfaceB
    {
        private readonly long _id;

        public ClassB()
        {
            _id = DateTime.UtcNow.Ticks; // use the id to tell instances apart
        }

        public void DoSomething()
        {
            Console.WriteLine($"[{_id}]: class B is doing something ...");
        }
    }
}

namespace ConsoleDIApp.demo
{
    public interface IInterfaceB
    {
        void DoSomething();
    }
}

namespace ConsoleDIApp
{
    public class HostedService : IHostedService
    {
        private readonly IServiceProvider _services;

        // a collection mapping user names to IServiceScope instances
        private Dictionary<string, IServiceScope> _scopes = new();

        public HostedService(IServiceProvider a)
        {
            _services = a;
        }

        public Task StartAsync(CancellationToken cancellationToken)
        {
            while (true)
            {
                Console.WriteLine("Input User Name: ");
                var user = Console.ReadLine();
                if (user == null || user == "q")
                {
                    break; // exit the loop
                }
                if (!_scopes.ContainsKey(user))
                {
                    // for a new user, create a new IServiceScope instance
                    IServiceScope newServiceScope = _services.CreateScope();
                    _scopes.Add(user, newServiceScope);
                }

                IServiceScope serviceScope = _scopes[user];
                IServiceProvider serviceProvider = serviceScope.ServiceProvider;
                ClassA a = serviceProvider.GetRequiredService<ClassA>();
                a.Process();
            }
            return Task.CompletedTask;
        }

        public Task StopAsync(CancellationToken cancellationToken)
        {
            return Task.CompletedTask;
        }
    }
}

This article only briefly introduces the relevant C# APIs; .NET developers can read the official documentation to gain a deeper understanding of the related classes and methods.

Summary

Once software makes heavy use of object-oriented style, dependency injection is usually a technique you cannot avoid.
The concrete APIs may differ, but the ideas and design philosophy behind them are much the same everywhere.
I hope this article inspires you!

https://zhuanlan.zhihu.com/p/592698341

C# Concurrency Asynchronous and multithreaded programming

C# Concurrency

Asynchronous and multithreaded programming

 

Nir Dobovizki

©2025 by Manning Publications Co. All rights reserved.

C# Data Structures

[C# Data Structures: Designing for Organizing, Storing and Accessing Information]

By Theophilus Edet

Ultimate ASP.NET Core Web API 2nd Premium Edition

Ultimate ASP.NET Core Web API 2nd Premium Edition

1 PROJECT CONFIGURATION

2 Creating the Required Projects

3 ONION ARCHITECTURE IMPLEMENTATION

4 HANDLING GET REQUESTS

5 GLOBAL ERROR HANDLING

6 GETTING ADDITIONAL RESOURCES

7 CONTENT NEGOTIATION

8 METHOD SAFETY AND METHOD IDEMPOTENCY

9 CREATING RESOURCES

10 WORKING WITH DELETE REQUESTS

11 WORKING WITH PUT REQUESTS

12 WORKING WITH PATCH REQUESTS

13 VALIDATION

14 ASYNCHRONOUS CODE

15 ACTION FILTERS

16 PAGING

17 FILTERING

18 SEARCHING

19 SORTING

20 DATA SHAPING

21 SUPPORTING HATEOAS

22 WORKING WITH OPTIONS AND HEAD REQUESTS

23 ROOT DOCUMENT

24 VERSIONING APIS

25 CACHING

26 RATE LIMITING AND THROTTLING

27 JWT, IDENTITY, AND REFRESH TOKEN

28 REFRESH TOKEN

29 BINDING CONFIGURATION AND OPTIONS PATTERN

30 DOCUMENTING API WITH SWAGGER

31 DEPLOYMENT TO IIS

32 BONUS 1 - RESPONSE PERFORMANCE IMPROVEMENTS

33 BONUS 2 - INTRODUCTION TO CQRS AND MEDIATR WITH ASP.NET CORE WEB API

1 Project configuration

Configuration in .NET Core is very different from what we’re used to in .NET Framework projects. We don’t use the web.config file anymore; instead, we use a built-in Configuration framework that comes out of the box in .NET Core.

To be able to develop good applications, we need to understand how to configure our application and its services first.

In this section, we’ll learn about configuration in the Program class and set up our application. We will also learn how to register different services and how to use extension methods to achieve this.

Of course, the first thing we need to do is to create a new project, so let’s dive right into it.

1.1 Creating a New Project

Let’s open Visual Studio (we are going to use VS 2022) and create a new ASP.NET Core Web API application:‌


Now let’s choose a name and location for our project:


Next, we want to choose .NET 6.0 from the dropdown list. Also, we don’t want to enable OpenAPI support right now; we’ll do that later in the book on our own. Now we can proceed by clicking the Create button, and the project will start initializing:


1.2 launchSettings.json File Configuration

After the project has been created, we are going to modify the launchSettings.json file, which can be found in the Properties section of the Solution Explorer window.

This configuration determines the launch behavior of the ASP.NET Core applications. As we can see, it contains both configurations to launch settings for IIS and self-hosted applications (Kestrel).

For now, let’s change the launchBrowser property to false to prevent the web browser from launching on application start.

{
  "$schema": "https://json.schemastore.org/launchsettings.json",
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:1629",
      "sslPort": 44370
    }
  },
  "profiles": {
    "CompanyEmployees": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": false,
      "launchUrl": "weatherforecast",
      "applicationUrl": "https://localhost:5001;http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": false,
      "launchUrl": "weatherforecast",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

This is convenient since we are developing a Web API project and we don’t need a browser to check our API out. We will use Postman (described later) for this purpose.

If you’ve checked Configure for HTTPS checkbox earlier in the setup phase, you will end up with two URLs in the applicationUrl section — one for HTTPS (localhost:5001), and one for HTTP (localhost:5000).

You’ll also notice the sslPort property which indicates that our application, when running in IISExpress, will be configured for HTTPS (port 44370), too.

NOTE: This HTTPS configuration is only valid in the local environment. You will have to configure a valid certificate and HTTPS redirection once you deploy the application.

There is one more useful property for developing applications locally, and that’s the launchUrl property. This property determines which URL the application will navigate to initially. For the launchUrl property to work, we need to set the launchBrowser property to true. So, for example, if we set the launchUrl property to weatherforecast, we will be redirected to https://localhost:5001/weatherforecast when we launch our application.

1.3 Program.cs Class Explanations

Program.cs is the entry point to our application and it looks like this:

var builder = WebApplication.CreateBuilder(args); 
// Add services to the container. 
builder.Services.AddControllers(); 
var app = builder.Build(); 
// Configure the HTTP request pipeline. 
app.UseHttpsRedirection(); 
app.UseAuthorization(); 
app.MapControllers(); 
app.Run();

Compared to the Program.cs class from .NET 5, there are some major changes. Some of the most obvious are:

• Top-level statements
• Implicit using directives
• No Startup class (on the project level)

“Top-level statements” means the compiler generates the namespace, class, and method elements for the main program in our application. We can see that we don’t have the class block in the code nor the Main method. All of that is generated for us by the compiler. Of course, we can add other functions to the Program class and those will be created as the local functions nested inside the generated Main method. Top-level statements are meant to simplify the entry point to the application and remove the extra “fluff” so we can focus on the important stuff instead.

“Implicit using directives” mean the compiler automatically adds a different set of using directives based on a project type, so we don’t have to do that manually. These using directives are stored in the obj/Debug/net6.0 folder of our project under the name CompanyEmployees.GlobalUsings.g.cs:

// <auto-generated/>
global using global::Microsoft.AspNetCore.Builder; 
global using global::Microsoft.AspNetCore.Hosting; 
global using global::Microsoft.AspNetCore.Http; 
global using global::Microsoft.AspNetCore.Routing;
global using global::Microsoft.Extensions.Configuration; 
global using global::Microsoft.Extensions.DependencyInjection; 
global using global::Microsoft.Extensions.Hosting;
global using global::Microsoft.Extensions.Logging; 
global using global::System;
global using global::System.Collections.Generic; 
global using global::System.IO;
global using global::System.Linq; 
global using global::System.Net.Http;
global using global::System.Net.Http.Json; 
global using global::System.Threading;

global using global::System.Threading.Tasks;

This means that we can use different classes from these namespaces in our project without adding using directives explicitly in our project files. Of course, if you don’t want this type of behavior, you can turn it off by visiting the project file and disabling the ImplicitUsings tag:

<ImplicitUsings>disable</ImplicitUsings>

By default, this is enabled in the .csproj file, and we are going to keep it like that.

Now, let’s take a look at the code inside the Program class. With this line of code:

var builder = WebApplication.CreateBuilder(args);

The application creates a builder variable of the type WebApplicationBuilder. The WebApplicationBuilder class is responsible for four main things:

• Adding Configuration to the project by using the builder.Configuration property
• Registering services in our app with the builder.Services property
• Logging configuration with the builder.Logging property
• Other IHostBuilder and IWebHostBuilder configuration

Compared to .NET 5, where we had the static CreateDefaultBuilder method, which returned the IHostBuilder type, we now have the static CreateBuilder method, which returns the WebApplicationBuilder type.

Of course, as we see it, we don’t have the Startup class with two familiar methods: ConfigureServices and Configure. Now, all this is replaced by the code inside the Program.cs file.

Since we don’t have the ConfigureServices method to configure our services, we can do that right below the builder variable declaration. In the new template, there’s even a comment section suggesting where we should start with service registration. A service is a reusable part of the code that adds some functionality to our application, but we’ll talk about services more later on.

In .NET 5, we would use the Configure method to add different middleware components to the application’s request pipeline. But since we don’t have that method anymore, we can use the section below the var app = builder.Build(); part to do that. Again, this is marked with the comment section as well:


NOTE: If you still want to create your application the .NET 5 way, with Program and Startup classes, you can; .NET 6 supports it as well. The easiest way is to create a .NET 5 project, copy the Startup and Program classes, and paste them into the .NET 6 project.

Since larger applications could potentially contain a lot of different services, we can end up with a lot of clutter and unreadable code in the Program class. To make it more readable for the next person and ourselves, we can structure the code into logical blocks and separate those blocks into extension methods.

1.4 Extension Methods and CORS Configuration

An extension method is inherently a static method. What makes it different from other static methods is that its first parameter is marked with the this keyword, and that parameter represents the type of the object that will be using the extension method. We’ll see what that means in a moment.

An extension method must be defined inside a static class. This kind of method extends the behavior of a type in .NET. Once we define an extension method, it can be chained multiple times on the same type of object.
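Before we touch the project code, here is a minimal, self-contained sketch of those mechanics; StringExtensions and Shout are invented names for illustration.

```csharp
using System;

// Sketch: an extension method must live in a static class,
// and 'this string input' makes Shout callable on any string.
public static class StringExtensions
{
    public static string Shout(this string input) => input.ToUpper() + "!";
}

public static class Demo
{
    public static void Main()
    {
        // Called as if it were an instance method of string
        Console.WriteLine("hello".Shout()); // prints "HELLO!"
    }
}
```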

So, let’s start writing some code to see how it all adds up.

We are going to create a new folder Extensions in the project and create a new class inside that folder named ServiceExtensions. The ServiceExtensions class should be static.

public static class ServiceExtensions
{
}

Let’s start by implementing something we need for our project immediately so we can see how extensions work.

The first thing we are going to do is to configure CORS in our application. CORS (Cross-Origin Resource Sharing) is a mechanism to give or restrict access rights to applications from different domains.

If we want to send requests from a different domain to our application, configuring CORS is mandatory. So, to start, we’ll add a code that allows all requests from all origins to be sent to our API:

public static void ConfigureCors(this IServiceCollection services) => 
    services.AddCors(options =>
    {
        options.AddPolicy("CorsPolicy", builder => 
        builder.AllowAnyOrigin()
            .AllowAnyMethod()
            .AllowAnyHeader());
    });

We are using basic CORS policy settings because allowing any origin, method, and header is okay for now. But we should be more restrictive with those settings in the production environment. More precisely, as restrictive as possible.

Instead of the AllowAnyOrigin() method which allows requests from any source, we can use the WithOrigins("https://example.com") which will allow requests only from that concrete source. Also, instead of AllowAnyMethod() that allows all HTTP methods, we can use WithMethods("POST", "GET") that will allow only specific HTTP methods. Furthermore, you can make the same changes for the AllowAnyHeader() method by using, for example, the WithHeaders("accept", "content-type") method to allow only specific headers.
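Putting those restrictive methods together, a production-oriented policy could look like this sketch (the https://example.com origin is just a placeholder for your real client application’s address):

```csharp
public static void ConfigureProductionCors(this IServiceCollection services) =>
    services.AddCors(options =>
    {
        options.AddPolicy("CorsPolicy", builder =>
            builder.WithOrigins("https://example.com")   // only this client may call the API
                .WithMethods("POST", "GET")              // only these HTTP methods
                .WithHeaders("accept", "content-type")); // only these headers
    });
```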

1.5 IIS Configuration

ASP.NET Core applications are by default self-hosted, and if we want to host our application on IIS, we need to configure an IIS integration which will eventually help us with the deployment to IIS. To do that, we need to add the following code to the ServiceExtensions class:‌

public static void ConfigureIISIntegration(this IServiceCollection services) => 
    services.Configure<IISOptions>(options =>
    {
    });

We do not initialize any of the properties inside the options because we are fine with the default values for now. But if you need to fine-tune the configuration right away, you might want to take a look at the possible options:

• AutomaticAuthentication (default: true): If true, the authentication middleware sets HttpContext.User and responds to generic challenges. If false, the authentication middleware only provides an identity (HttpContext.User) and responds to challenges when explicitly requested by the AuthenticationScheme. Windows Authentication must be enabled in IIS for AutomaticAuthentication to function.

• AuthenticationDisplayName (default: null): Sets the display name shown to users on login pages.

• ForwardClientCertificate (default: true): If true and the MS-ASPNETCORE-CLIENTCERT request header is present, HttpContext.Connection.ClientCertificate is populated.
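If we did need to deviate from the defaults, the options would be set inside the same lambda. For example (the values below are purely illustrative, not recommendations):

```csharp
public static void ConfigureIISIntegration(this IServiceCollection services) =>
    services.Configure<IISOptions>(options =>
    {
        // Illustrative only: opt out of automatic Windows Authentication
        // and set a display name for login pages.
        options.AutomaticAuthentication = false;
        options.AuthenticationDisplayName = "Windows Auth";
    });
```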

Now, we mentioned extension methods are great for organizing your code and extending functionalities. Let’s go back to our Program class and modify it to support CORS and IIS integration now that we’ve written extension methods for those functionalities. We are going to remove the first comment and write our code over it:

using CompanyEmployees.Extensions;

var builder = WebApplication.CreateBuilder(args);

builder.Services.ConfigureCors();
builder.Services.ConfigureIISIntegration();

builder.Services.AddControllers(); 

var app = builder.Build();

And let's add a few mandatory methods to the second part of the Program class (the one for the request pipeline configuration):

var app = builder.Build();

if (app.Environment.IsDevelopment())
    app.UseDeveloperExceptionPage();
else
    app.UseHsts();

app.UseHttpsRedirection();
app.UseStaticFiles();

app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.All
});

app.UseCors("CorsPolicy");
app.UseAuthorization();
app.MapControllers(); 
app.Run();

We’ve added the CORS and IIS configuration to the section where we need to configure our services. Furthermore, the CORS configuration has been added to the application’s pipeline inside the second part of the Program class. But as you can see, there are some additional methods unrelated to IIS configuration. Let’s go through those and learn what they do.

• app.UseForwardedHeaders() will forward proxy headers to the current request. This will help us during application deployment. Pay attention that we require the Microsoft.AspNetCore.HttpOverrides using directive to introduce the ForwardedHeaders enumeration.

• app.UseStaticFiles() enables using static files for the request. If we don’t set a path to the static files directory, it will use a wwwroot folder in our project by default.

• app.UseHsts() will add middleware for using HSTS, which adds the Strict-Transport-Security header.

1.6 Additional Code in the Program Class

We have to pay attention to the AddControllers() method. This method registers only the controllers in IServiceCollection and not Views or Pages because they are not required in the Web API project which we are building.‌

Right below the controller registration, we have this line of code:

var app = builder.Build();

With the Build method, we are creating the app variable of the type WebApplication. This class (WebApplication) is very important since it implements multiple interfaces like IHost that we can use to start and stop the host, IApplicationBuilder that we use to build the middleware pipeline (as you could’ve seen from our previous custom code), and IEndpointRouteBuilder used to add endpoints in our app.

The UseHttpsRedirection method is used to add the middleware for the redirection from HTTP to HTTPS. Also, we can see the UseAuthorization method that adds the authorization middleware to the specified IApplicationBuilder to enable authorization capabilities.

Finally, we can see the MapControllers method that adds the endpoints from controller actions to the IEndpointRouteBuilder and the Run method that runs the application and blocks the calling thread until the host shuts down.

Microsoft advises that the order in which we add different middleware components to the application builder is very important, and we are going to talk about that in the middleware section of this book.

1.7 Environment-Based Settings

While we develop our application, we use the “development” environment. But as soon as we publish our application, it goes to the “production” environment. Development and production environments should have different URLs, ports, connection strings, passwords, and other sensitive information.‌

Therefore, we need to have a separate configuration for each environment and that’s easy to accomplish by using .NET Core-provided mechanisms.

As soon as we create a project, we are going to see the appsettings.json file in the root, which is our main settings file, and when we expand it we are going to see the appsettings.Development.json file by default. These files are separate on the file system, but Visual Studio makes it obvious that they are connected somehow:


The appsettings.{EnvironmentSuffix}.json files are used to override the main appsettings.json file. When we repeat a key-value pair from the main file in an environment-specific file, the environment-specific value overrides it. We can also define additional values that exist only for a specific environment.

For the production environment, we should add another file: appsettings.Production.json:


The appsettings.Production.json file should contain the configuration for the production environment.
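To make the override mechanism concrete, suppose the main appsettings.json defined a hypothetical connection string pointing at a local development server; appsettings.Production.json would then redefine only the keys that differ, and everything else would be inherited from the main file (the server and key names below are placeholders for illustration):

```json
{
  "ConnectionStrings": {
    "sqlConnection": "server=prod-sql-server; database=CompanyEmployees; Integrated Security=true;"
  }
}
```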

To set which environment our application runs on, we need to set up the ASPNETCORE_ENVIRONMENT environment variable. For example, to run the application in production, we need to set it to the Production value on the machine we do the deployment to.

We can set the variable through the command prompt by typing set ASPNETCORE_ENVIRONMENT=Production in Windows or export ASPNETCORE_ENVIRONMENT=Production in Linux.

ASP.NET Core applications use the value of that environment variable to decide which appsettings file to use accordingly. In this case, that will be appsettings.Production.json.

If we take a look at our launchSettings.json file, we are going to see that this variable is currently set to Development.

Now, let’s talk a bit more about the middleware in ASP.NET Core applications.

1.8 ASP.NET Core Middleware

As we already used some middleware code to modify the application’s pipeline (CORS, Authorization...), and we are going to use the middleware throughout the rest of the book, we should be more familiar with the ASP.NET Core middleware.‌

ASP.NET Core middleware is a piece of code integrated inside the application’s pipeline that we can use to handle requests and responses. When we talk about the ASP.NET Core middleware, we can think of it as a code section that executes with every request.

Usually, we have more than a single middleware component in our application. Each component can:

• Pass the request to the next middleware component in the pipeline, and

• Execute some work before and after the next component in the pipeline

To build a pipeline, we are using request delegates, which handle each HTTP request. To configure request delegates, we use the Run, Map, and Use extension methods. Inside the request pipeline, an application executes each component in the same order they are placed in the code, top to bottom:


Additionally, we can see that each component can execute custom logic before using the next delegate to pass the execution to another component. The last middleware component doesn’t call the next delegate, which means that this component is short-circuiting the pipeline. This is a terminal middleware because it stops further middleware from processing the request. It executes the additional logic and then returns the execution to the previous middleware components.

Before we start with examples, it is quite important to know about the order in which we should register our middleware components. The order is important for the security, performance, and functionality of our applications:


As we can see, we should register the exception handler in the early stage of the pipeline flow so it could catch all the exceptions that can happen in the later stages of the pipeline. When we create a new ASP.NET Core app, many of the middleware components are already registered in the order from the diagram. We have to pay attention when registering additional existing components or the custom ones to fit this recommendation.

For example, if you don’t add CORS to the pipeline in this order, the app will still work just fine in the development environment. But we’ve received several questions from readers who faced CORS problems once they deployed the app, and once we suggested moving the CORS registration to the required place, the problem disappeared.

Now, we can use some examples to see how we can manipulate the application’s pipeline. For this section’s purpose, we are going to create a separate application that will be dedicated only to this section of the book. The later sections will continue from the project that we’ve already created.

1.8.1 Creating a First Middleware Component‌

Let’s start by creating a new ASP.NET Core Web API project, and name it MiddlewareExample.

In the launchSettings.json file, we are going to add some changes regarding the launch profiles:

{
  "profiles": {
    "MiddlewareExample": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "launchUrl": "weatherforecast",
      "applicationUrl": "https://localhost:5001;http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

Now, inside the Program class, right below the UseAuthorization part, we are going to use an anonymous method to create a first middleware component:

app.UseAuthorization();

app.Run(async context =>
{
    await context.Response.WriteAsync("Hello from the middleware component.");
});

app.MapControllers();

We use the Run method, which adds a terminal component to the app pipeline. We can see we are not using the next delegate because the Run method is always terminal and terminates the pipeline. This method accepts a single parameter of the RequestDelegate type. If we inspect this delegate we are going to see that it accepts a single HttpContext parameter:

namespace Microsoft.AspNetCore.Http
{
    public delegate Task RequestDelegate(HttpContext context);
}

So, we are using that context parameter to modify our requests and responses inside the middleware component. In this specific example, we are modifying the response by using the WriteAsync method. For this method, we need the Microsoft.AspNetCore.Http namespace.

Let’s start the app, and inspect the result:


There we go. We can see a result from our middleware.

1.8.2 Working with the Use Method‌

To chain multiple request delegates in our code, we can use the Use method. This method accepts a Func<HttpContext, Func<Task>, Task> delegate as a parameter, so the delegate receives the current context and the next delegate in the pipeline and returns a Task:

public static IApplicationBuilder Use(this IApplicationBuilder app, Func<HttpContext, Func<Task>, Task> middleware);

So, this means when we use it, we can make use of two parameters, context and next:

app.UseAuthorization();

app.Use(async (context, next) =>
{
    Console.WriteLine("Logic before executing the next delegate in the Use method");
    await next.Invoke();
    Console.WriteLine("Logic after executing the next delegate in the Use method");
});

app.Run(async context =>
{
    Console.WriteLine("Writing the response to the client in the Run method");
    await context.Response.WriteAsync("Hello from the middleware component.");
});

app.MapControllers();

As you can see, we add several logging messages to be sure what the order of executions inside middleware components is. First, we write to a console window, then we invoke the next delegate passing the execution to another component in the pipeline. In the Run method, we write a second message to the console window and write a response to the client. After that, the execution is returned to the Use method and we write the third message (the one below the next delegate invocation) to the console window.

The Run method doesn’t accept the next delegate as a parameter, so, without a next delegate to pass the execution to another component, this component short-circuits the request pipeline.

Now, let’s start the app and inspect the result, which proves our execution order:


Maybe you will see two sets of messages, but don’t worry; that’s because the browser sends two requests, one for /weatherforecast and another for favicon.ico. If you use Postman to test this, for example, you will see only one set of messages.

One more thing to mention. We shouldn’t call the next.Invoke after we send the response to the client. This can cause exceptions if we try to set the status code or modify the headers of the response.

For example:

app.Use(async (context, next) =>
{
    await context.Response.WriteAsync("Hello from the middleware component.");
    await next.Invoke();
    Console.WriteLine("Logic after executing the next delegate in the Use method");
});

app.Run(async context =>
{
    Console.WriteLine("Writing the response to the client in the Run method");
    context.Response.StatusCode = 200;
    await context.Response.WriteAsync("Hello from the middleware component.");
});

Here we write a response to the client and then call next.Invoke. Of course, this passes the execution to the next component in the pipeline. There, we try to set the status code of the response and write another one. But let’s inspect the result:


We can see the error message, which is pretty self-explanatory.

1.8.3 Using the Map and MapWhen Methods‌

To branch the middleware pipeline, we can use both Map and MapWhen methods. The Map method is an extension method that accepts a path string as one of the parameters:

public static IApplicationBuilder Map(this IApplicationBuilder app, PathString pathMatch, Action<IApplicationBuilder> configuration)

When we provide the pathMatch string, the Map method will compare it to the start of the request path. If they match, the app will execute the branch.

So, let’s see how we can use this method by modifying the Program class:

app.Use(async (context, next) =>
{
    Console.WriteLine("Logic before executing the next delegate in the Use method");
    await next.Invoke();
    Console.WriteLine("Logic after executing the next delegate in the Use method");
});

app.Map("/usingmapbranch", builder =>
{
    builder.Use(async (context, next) =>
    {
        Console.WriteLine("Map branch logic in the Use method before the next delegate");
        await next.Invoke();
        Console.WriteLine("Map branch logic in the Use method after the next delegate");
    });

    builder.Run(async context =>
    {
        Console.WriteLine("Map branch response to the client in the Run method");
        await context.Response.WriteAsync("Hello from the map branch.");
    });
});

app.Run(async context =>
{
    Console.WriteLine("Writing the response to the client in the Run method");
    await context.Response.WriteAsync("Hello from the middleware component.");
});

By using the Map method, we provide the path match, and then in the delegate, we use our well-known Use and Run methods to execute middleware components.

Now, if we start the app and navigate to /usingmapbranch, we are going to see the response in the browser:


But also, if we inspect console logs, we are going to see our new messages:


Here, we can see the messages from the Use method before the branch, and the messages from the Use and Run methods inside the Map branch. We are not seeing any message from the Run method outside the branch. It is important to know that any middleware component that we add after the Map method in the pipeline won’t be executed. This is true even if we don’t use the Run middleware inside the branch.

1.8.4 Using MapWhen Method‌

If we inspect the MapWhen method, we are going to see that it accepts two parameters:

public static IApplicationBuilder MapWhen(this IApplicationBuilder app, Func<HttpContext, bool> predicate, Action<IApplicationBuilder> configuration)

This method uses the result of the given predicate to branch the request pipeline.

So, let’s see it in action:

app.Map("/usingmapbranch", builder =>
{
    ...
});

app.MapWhen(context => context.Request.Query.ContainsKey("testquerystring"), builder =>
{
    builder.Run(async context =>
    {
        await context.Response.WriteAsync("Hello from the MapWhen branch.");
    });
});

app.Run(async context =>
{
    ...
});

Here, if our request contains the provided query string, we execute the Run method by writing the response to the client. So, as we said, based on the predicate’s result, the MapWhen method branches the request pipeline.

Now, we can start the app and navigate to https://localhost:5001?testquerystring=test:


And there we go. We can see our expected message. Of course, we can chain multiple middleware components inside this method as well.
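For instance, a Use component could be chained in front of the Run call inside the same branch, along the lines of the earlier Map example (a sketch, with illustrative log messages):

```csharp
app.MapWhen(context => context.Request.Query.ContainsKey("testquerystring"), builder =>
{
    builder.Use(async (context, next) =>
    {
        Console.WriteLine("MapWhen branch logic before the next delegate");
        await next.Invoke();
        Console.WriteLine("MapWhen branch logic after the next delegate");
    });

    builder.Run(async context =>
    {
        await context.Response.WriteAsync("Hello from the MapWhen branch.");
    });
});
```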

So, now we have a good understanding of using middleware and its order of invocation in the ASP.NET Core application. This knowledge is going to be very useful to us once we start working on a custom error handling middleware (a few sections later).

In the next chapter, we’ll learn how to configure a Logger service because it’s really important to have it configured as early in the project as possible. We can close this app, and continue with the CompanyEmployees app.

2 Configuring a logging service

Why do logging messages matter so much during application development? While our application is in the development stage, it's easy to debug the code and find out what happened. But debugging in a production environment is not that easy.‌

That's why log messages are a great way to find out what went wrong and why and where the exceptions have been thrown in our code in the production environment. Logging also helps us more easily follow the flow of our application when we don’t have access to the debugger.

.NET Core has its own implementation of the logging mechanism, but in all our projects we prefer to create our custom logger service with the external logging library NLog.

We are going to do that because having an abstraction will allow us to have any logger behind our interface. This means that we can start with NLog, and at some point, we can switch to any other logger and our interface will still work because of our abstraction.

2.1 Creating the Required Projects

Let’s create two new projects. In the first one named Contracts, we are going to keep our interfaces. We will use this project later on too, to define our contracts for the whole application. The second one, LoggerService, we are going to use to write our logger logic in.‌

To create a new project, right-click on the solution window, choose Add, and then New Project. Choose the Class Library (C#) project template:


Finally, name it Contracts, and choose the .NET 6.0 as a version. Do the same thing for the second project and name it LoggerService. Now that we have these projects in place, we need to reference them from our main project.

To do that, navigate to the solution explorer. Then in the LoggerService project, right-click on Dependencies and choose the Add Project Reference option. Under Projects, click Solution and check the Contracts project.

Now, in the main project right click on Dependencies and then click on Add Project Reference. Check the LoggerService checkbox to import it. Since we have referenced the Contracts project through the LoggerService, it will be available in the main project too.

2.2 Creating the ILoggerManager Interface and Installing NLog

Our logger service will contain four methods for logging our messages:‌

• Info messages
• Debug messages
• Warning messages
• Error messages

To achieve this, we are going to create an interface named ILoggerManager inside the Contracts project containing those four method definitions.

So, let’s do that first by right-clicking on the Contracts project, choosing the Add -> New Item menu, and then selecting the Interface option where we have to specify the name ILoggerManager and click the Add button. After the file creation, we can add the code:

public interface ILoggerManager
{
    void LogInfo(string message);
    void LogWarn(string message);
    void LogDebug(string message);
    void LogError(string message);
}

Before we implement this interface inside the LoggerService project, we need to install the NLog library in our LoggerService project. NLog is a logging platform for .NET which will help us create and log our messages.

We are going to show two different ways of adding the NLog library to our project.

  1. In the LoggerService project, right-click on the Dependencies and choose Manage NuGet Packages. After the NuGet Package Manager window appears, just follow these steps:


  2. From the View menu, choose Other Windows and then click on the Package Manager Console. After the console appears, type:
    Install-Package NLog.Extensions.Logging -Version 1.7.4

After a couple of seconds, NLog is up and running in our application.

2.3 Implementing the Interface and NLog.config File

In the LoggerService project, we are going to create a new‌ class: LoggerManager. We can do that by repeating the same steps for the interface creation just choosing the class option instead of an interface. Now let’s have it implement the ILoggerManager interface we previously defined:

public class LoggerManager : ILoggerManager
{
    private static ILogger logger = LogManager.GetCurrentClassLogger();

    public LoggerManager()
    {
    }

    public void LogDebug(string message) => logger.Debug(message);

    public void LogError(string message) => logger.Error(message);

    public void LogInfo(string message) => logger.Info(message);

    public void LogWarn(string message) => logger.Warn(message);
}

As you can see, our methods are just wrappers around NLog’s methods. Both ILogger and LogManager are part of the NLog namespace. Now, we need to configure it and inject it into the Program class in the section related to the service configuration.

NLog needs to have information about where to put log files on the file system, what the name of these files will be, and what is the minimum level of logging that we want.

We are going to define all these constants in a text file in the main project and name it nlog.config. So, let’s right-click on the main project, choose Add -> New Item, and then search for the Text File. Select the Text File, and add the name nlog.config.

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Trace"
      internalLogFile=".\internal_logs\internallog.txt">

  <targets>
    <target name="logfile" xsi:type="File"
            fileName=".\logs\${shortdate}_logfile.txt"
            layout="${longdate} ${level:uppercase=true} ${message}"/>
  </targets>

  <rules>
    <logger name="*" minlevel="Debug" writeTo="logfile" />
  </rules>
</nlog>

Once we start the app, you can find the internal logs at the project root, and the logs folder in the bin\debug folder of the main project. Once the application is published, both folders will be created at the root of the output folder, which is what we want.

NOTE: If you want to have more control over the log output, we suggest renaming the current file to nlog.development.config and creating another configuration file called nlog.production.config. Then you can do something like this in the code: env.ConfigureNLog($"nlog.{env.EnvironmentName}.config"); to get different configuration files for different environments. From our experience, the production path is what matters, so this might be a bit redundant.

2.4 Configuring Logger Service for Logging Messages

Setting up the configuration for a logger service is quite easy. First, we need to update the Program class and include the path to the configuration file for the NLog configuration:‌

using NLog;

var builder = WebApplication.CreateBuilder(args);

LogManager.LoadConfiguration(string.Concat(Directory.GetCurrentDirectory(), "/nlog.config"));

builder.Services.ConfigureCors();
builder.Services.ConfigureIISIntegration();

We are using NLog’s LogManager static class with the LoadConfiguration method to provide a path to the configuration file.

NOTE: If Visual Studio asks you to install the NLog package in the main project, don’t do it. Just remove the LoggerService reference from the main project and add it again. We have already installed the required package in the LoggerService project, and the main project should be able to reference it as well.

The next thing we need to do is to add the logger service inside the .NET Core’s IOC container. There are three ways to do that:

• By calling the services.AddSingleton method, we can create a service the first time we request it and then every subsequent request will call the same instance of the service. This means that all components share the same service every time they need it and the same instance will be used for every method call.

• By calling the services.AddScoped method, we can create a service once per request. That means whenever we send an HTTP request to the application, a new instance of the service will be created.

• By calling the services.AddTransient method, we can create a service each time the application requests it. This means that if multiple components need the service, it will be created again for every single component request.

So, let’s add a new method in the ServiceExtensions class:


public static void ConfigureLoggerService(this IServiceCollection services) => services.AddSingleton<ILoggerManager, LoggerManager>();

And after that, we need to modify the Program class to include our newly created extension method:

builder.Services.AddControllers();
builder.Services.ConfigureLoggerService();
builder.Services.ConfigureCors();
builder.Services.ConfigureIISIntegration();

Every time we want to use a logger service, all we need to do is to inject it into the constructor of the class that needs it. .NET Core will resolve that service and the logging features will be available.

This type of injecting a class is called Dependency Injection and it is built into .NET Core.

Let’s learn a bit more about it.

2.5 DI, IoC, and Logger Service Testing

What is Dependency Injection (DI) exactly and what is IoC (Inversion of Control)?‌

Dependency injection is a technique we use to achieve the decoupling of objects and their dependencies. It means that rather than instantiating an object explicitly in a class every time we need it, we can instantiate it once and then send it to the class.

This is often done through a constructor. The specific approach we utilize is also known as the Constructor Injection.

In a system that is designed around DI, you may find many classes requesting their dependencies via their constructors. In this case, it is helpful to have a class that manages and provides dependencies to classes through the constructor.

These classes are referred to as containers or more specifically, Inversion of Control containers. An IoC container is essentially a factory that is responsible for providing instances of the types that are requested from it.
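To make the idea concrete, here is a toy illustration of constructor injection together with a drastically simplified "container" (the NotificationService, IMessageWriter, and TinyContainer types are invented for this sketch; the real .NET IoC container is far more capable):

```csharp
using System;
using System.Collections.Generic;

public interface IMessageWriter
{
    void Write(string message);
}

public class ConsoleMessageWriter : IMessageWriter
{
    public void Write(string message) => Console.WriteLine(message);
}

// The dependency is requested through the constructor; the class never
// instantiates its collaborator itself (Constructor Injection).
public class NotificationService
{
    private readonly IMessageWriter _writer;

    public NotificationService(IMessageWriter writer) => _writer = writer;

    public void Notify(string name) => _writer.Write($"Hello, {name}!");
}

// A drastically simplified IoC container: a factory that maps a requested
// type to a function that knows how to build an instance of it.
public class TinyContainer
{
    private readonly Dictionary<Type, Func<object>> _registrations = new();

    public void Register<TService>(Func<object> factory) =>
        _registrations[typeof(TService)] = factory;

    public TService Resolve<TService>() =>
        (TService)_registrations[typeof(TService)]();
}

public static class Program
{
    public static void Main()
    {
        var container = new TinyContainer();
        container.Register<IMessageWriter>(() => new ConsoleMessageWriter());
        container.Register<NotificationService>(
            () => new NotificationService(container.Resolve<IMessageWriter>()));

        // The container supplies the dependency; the caller never news it up.
        container.Resolve<NotificationService>().Notify("Reader");
    }
}
```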

To test our logger service, we are going to use the default WeatherForecastController. You can find it in the main project in the Controllers folder. It comes with the ASP.NET Core Web API template.

In the Solution Explorer, we are going to open the Controllers folder and locate the WeatherForecastController class. Let’s modify it:

[Route("[controller]")]
[ApiController]
public class WeatherForecastController : ControllerBase
{
    private ILoggerManager _logger;

    public WeatherForecastController(ILoggerManager logger)
    {
        _logger = logger;
    }

    [HttpGet]
    public IEnumerable<string> Get()
    {
        _logger.LogInfo("Here is info message from our values controller.");
        _logger.LogDebug("Here is debug message from our values controller.");
        _logger.LogWarn("Here is warn message from our values controller.");
        _logger.LogError("Here is an error message from our values controller.");

        return new string[] { "value1", "value2" };
    }
}

Now let’s start the application and browse to https://localhost:5001/weatherforecast.

As a result, you will see an array of two strings. Now go to the folder that you have specified in the nlog.config file, and check out the result. You should see two folders: the internal_logs folder and the logs folder. Inside the logs folder, you should find a file with the following logs:


That’s all we need to do to configure our logger for now. We’ll add some messages to our code along with the new features.

3 Onion architecture implementation

In this chapter, we are going to talk about the Onion architecture, its layers, and the advantages of using it. We will learn how to create different layers in our application to separate the different application parts and improve the application's maintainability and testability.‌

That said, we are going to create a database model and transfer it to the MSSQL database by using the code first approach. So, we are going to learn how to create entities (model classes), how to work with the DbContext class, and how to use migrations to transfer our created database model to the real database. Of course, it is not enough to just create a database model and transfer it to the database. We need to use it as well, and for that, we will create a Repository pattern as a data access layer.

With the Repository pattern, we create an abstraction layer between the data access and the business logic layer of an application. By using it, we are promoting a more loosely coupled approach to access our data in the database.

Also, our code becomes cleaner, easier to maintain, and reusable. Data access logic is stored in a separate class, or sets of classes called a repository, with the responsibility of persisting the application’s business model.
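As a preview of where we are heading, the idea can be sketched with an in-memory repository (the Company entity and ICompanyRepository interface below are simplified stand-ins, not the final shapes we will build later):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Company
{
    public Guid Id { get; set; }
    public string Name { get; set; } = string.Empty;
}

// The business logic depends only on this abstraction, not on how the
// data is actually persisted.
public interface ICompanyRepository
{
    IEnumerable<Company> GetAll();
    void Create(Company company);
}

// An in-memory stand-in; a real implementation would talk to the database
// through EF Core's DbContext instead of a List.
public class InMemoryCompanyRepository : ICompanyRepository
{
    private readonly List<Company> _companies = new();

    public IEnumerable<Company> GetAll() => _companies;

    public void Create(Company company) => _companies.Add(company);
}

public static class Program
{
    public static void Main()
    {
        ICompanyRepository repository = new InMemoryCompanyRepository();
        repository.Create(new Company { Id = Guid.NewGuid(), Name = "Acme" });
        Console.WriteLine(repository.GetAll().Count()); // 1
    }
}
```

Because the callers see only ICompanyRepository, the in-memory class could later be swapped for a database-backed one without touching the business logic.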

Additionally, we are going to create a Service layer to extract all the business logic from our controllers, thus making the presentation layer and the controllers clean and easy to maintain.

So, let’s start with the Onion architecture explanation.

3.1 About Onion Architecture

The Onion architecture is a form of layered architecture and we can visualize these layers as concentric circles. Hence the name Onion architecture. The Onion architecture was first introduced by Jeffrey Palermo, to overcome the issues of the traditional N-layered architecture approach.‌

There are multiple ways that we can split the onion, but we are going to choose the following approach where we are going to split the architecture into 4 layers:

• Domain Layer
• Service Layer
• Infrastructure Layer
• Presentation Layer

Conceptually, we can consider that the Infrastructure and Presentation layers are on the same level of the hierarchy.

Now, let us go ahead and look at each layer with more detail to see why we are introducing it and what we are going to create inside of that layer:

alt text

We can see all the different layers that we are going to build in our project.

3.1.1 Advantages of the Onion Architecture‌

Let us take a look at the advantages of the Onion architecture, and why we would want to implement it in our projects.

All of the layers interact with each other strictly through the interfaces defined in the layers below. The flow of dependencies is towards the core of the Onion. We will explain why this is important in the next section.

Using dependency inversion throughout the project, depending on abstractions (interfaces) and not the implementations, allows us to switch out the implementation at runtime transparently. We are depending on abstractions at compile-time, which gives us strict contracts to work with, and we are being provided with the implementation at runtime.

Testability is very high with the Onion architecture because everything depends on abstractions. The abstractions can be easily mocked with a mocking library such as Moq. We can write business logic without concern about any of the implementation details. If we need anything from an external system or service, we can just create an interface for it and consume it. We do not have to worry about how it will be implemented. The higher layers of the Onion will take care of implementing that interface transparently.
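To make this concrete, here is a minimal sketch of mocking an abstraction with Moq. The IGreetingService interface and all names in it are illustrative only, not part of the book's project:

```csharp
using Moq;

// A hypothetical contract from an inner layer; no real implementation exists yet.
public interface IGreetingService
{
    string Greet(string name);
}

public class GreetingLogicTests
{
    public void Business_logic_works_without_a_real_implementation()
    {
        // Moq builds an in-memory implementation of the interface on the fly.
        var mock = new Mock<IGreetingService>();
        mock.Setup(s => s.Greet("Ana")).Returns("Hello, Ana");

        // The code under test consumes only the abstraction.
        IGreetingService service = mock.Object;
        var greeting = service.Greet("Ana"); // "Hello, Ana"

        // We can also verify that the dependency was called as expected.
        mock.Verify(s => s.Greet("Ana"), Times.Once);
    }
}
```

Because the test depends only on the interface, the real implementation can live in an outer layer (or not exist at all) without blocking the business-logic tests.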

3.1.2 Flow of Dependencies‌

The main idea behind the Onion architecture is the flow of dependencies, or rather how the layers interact with each other. The deeper the layer resides inside the Onion, the fewer dependencies it has.

The Domain layer does not have any direct dependencies on the outside layers. It is isolated, in a way, from the outside world. The outer layers are all allowed to reference the layers that are directly below them in the hierarchy.

We can conclude that all the dependencies in the Onion architecture flow inwards. But we should ask ourselves, why is this important?

The flow of dependencies dictates what a certain layer in the Onion architecture can do. Because it depends on the layers below it in the hierarchy, it can only call the methods that are exposed by the lower layers.

We can use lower layers of the Onion architecture to define contracts or interfaces. The outer layers of the architecture implement these interfaces. This means that in the Domain layer, we are not concerning ourselves with infrastructure details such as the database or external services.

Using this approach, we can encapsulate all of the rich business logic in the Domain and Service layers without ever having to know any implementation details. In the Service layer, we are going to depend only on the interfaces that are defined by the layer below, which is the Domain layer.
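The inward dependency flow can be sketched with a few lines of C#. All type names here are illustrative, not from the book's project; the point is only the direction of the references:

```csharp
using System;
using System.Collections.Generic;

// The Service-layer class is wired to an outer-layer implementation at the
// composition root, but it only ever sees the abstraction.
var counter = new CompanyCounter(new InMemoryCompanyStore());
Console.WriteLine(counter.HowMany()); // 1

// Domain layer: defines the contract; knows nothing about storage details.
public interface ICompanyStore
{
    int Count();
}

// Service layer: depends only on the abstraction defined by the layer below it.
public class CompanyCounter
{
    private readonly ICompanyStore _store;
    public CompanyCounter(ICompanyStore store) => _store = store;
    public int HowMany() => _store.Count();
}

// Outer (Infrastructure) layer: implements the Domain contract; it can be
// swapped for a database-backed store without touching the inner layers.
public class InMemoryCompanyStore : ICompanyStore
{
    private readonly List<string> _companies = new() { "IT_Solutions Ltd" };
    public int Count() => _companies.Count;
}
```

Swapping InMemoryCompanyStore for a SQL-backed implementation changes nothing in CompanyCounter, which is exactly the benefit the dependency flow buys us.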

So, after all the theory, we can continue with our project implementation.

Let’s start with the models and the Entities project.

3.2 Creating Models

Using the example from the second chapter of this book, we are going to extract a new Class Library project named Entities.‌

Inside it, we are going to create a folder named Models, which will contain all the model classes (entities). Entities represent classes that Entity Framework Core uses to map our database model with the tables from the database. The properties from entity classes will be mapped to the database columns.

So, in the Models folder we are going to create two classes and modify them:

public class Company
{
    [Column("CompanyId")]
    public Guid Id { get; set; }

    [Required(ErrorMessage = "Company name is a required field.")]
    [MaxLength(60, ErrorMessage = "Maximum length for the Name is 60 characters.")]
    public string? Name { get; set; }

    [Required(ErrorMessage = "Company address is a required field.")]
    [MaxLength(60, ErrorMessage = "Maximum length for the Address is 60 characters")]
    public string? Address { get; set; }

    public string? Country { get; set; }

    public ICollection<Employee>? Employees { get; set; }
}

public class Employee
{
    [Column("EmployeeId")]
    public Guid Id { get; set; }

    [Required(ErrorMessage = "Employee name is a required field.")]
    [MaxLength(30, ErrorMessage = "Maximum length for the Name is 30 characters.")]
    public string? Name { get; set; }

    [Required(ErrorMessage = "Age is a required field.")]
    public int Age { get; set; }

    [Required(ErrorMessage = "Position is a required field.")]
    [MaxLength(20, ErrorMessage = "Maximum length for the Position is 20 characters.")]
    public string? Position { get; set; }

    [ForeignKey(nameof(Company))]
    public Guid CompanyId { get; set; }

    public Company? Company { get; set; }
}

We have created two classes: the Company and Employee. Those classes contain the properties which Entity Framework Core is going to map to the columns in our tables in the database. But not all the properties will be mapped as columns. The last property of the Company class (Employees) and the last property of the Employee class (Company) are navigational properties; these properties serve the purpose of defining the relationship between our models.

We can see several attributes in our entities. The [Column] attribute specifies that the Id property is going to be mapped with a different name in the database. The [Required] and [MaxLength] attributes are here for validation purposes. The first one declares the property as mandatory and the second one defines its maximum length.

Once we transfer our database model to the real database, we are going to see how all these validation attributes and navigational properties affect the column definitions.

3.3 Context Class and the Database Connection

Before we start with the context class creation, we have to create another‌ .NET Class Library and name it Repository. We are going to use this project for the database context and repository implementation.

Now, let's create the context class, which will be a middleware component for communication with the database. It must inherit from Entity Framework Core's DbContext class, and it consists of DbSet properties, which EF Core is going to use for communication with the database. Because we are working with the DbContext class, we need to install the Microsoft.EntityFrameworkCore package in the Repository project. Also, we are going to reference the Entities project from the Repository project:

alt text

Then, let’s navigate to the root of the Repository project and create the RepositoryContext class:

public class RepositoryContext : DbContext
{
    public RepositoryContext(DbContextOptions options)
        : base(options)
    {
    }

    public DbSet<Company>? Companies { get; set; }
    public DbSet<Employee>? Employees { get; set; }
}

After the class modification, let’s open the appsettings.json file in the main project and add the connection string named sqlConnection:

{
    "Logging": {
        "LogLevel": {
            "Default": "Warning"
        }
    },
    "ConnectionStrings": {
        "sqlConnection": "server=.; database=CompanyEmployee; Integrated Security=true"
    },
    "AllowedHosts": "*"
}

It is quite important to have the JSON object with the ConnectionStrings name in our appsettings.json file, and soon you will see why.
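The reason the ConnectionStrings name matters is that GetConnectionString is simply shorthand for reading a key nested under that section. A small sketch (it requires the Microsoft.Extensions.Configuration packages, and the in-memory source here stands in for appsettings.json):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

var configuration = new ConfigurationBuilder()
    .AddInMemoryCollection(new Dictionary<string, string?>
    {
        // Flattened form of the "ConnectionStrings" JSON object.
        ["ConnectionStrings:sqlConnection"] =
            "server=.; database=CompanyEmployee; Integrated Security=true"
    })
    .Build();

// GetConnectionString("sqlConnection") reads the
// "ConnectionStrings:sqlConnection" key under the hood:
Console.WriteLine(configuration.GetConnectionString("sqlConnection") ==
                  configuration["ConnectionStrings:sqlConnection"]); // True
```

If the section were named anything other than ConnectionStrings, GetConnectionString would return null.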

But first, we have to add the Repository project’s reference into the main project.

Then, let’s create a new ContextFactory folder in the main project and inside it a new RepositoryContextFactory class. Since our RepositoryContext class is in a Repository project and not in the main one, this class will help our application create a derived DbContext instance during the design time which will help us with our migrations:

public class RepositoryContextFactory : IDesignTimeDbContextFactory<RepositoryContext>
{
    public RepositoryContext CreateDbContext(string[] args)
    {
        var configuration = new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddJsonFile("appsettings.json")
            .Build();

        var builder = new DbContextOptionsBuilder<RepositoryContext>()
            .UseSqlServer(configuration.GetConnectionString("sqlConnection"));

        return new RepositoryContext(builder.Options);
    }
}

We are using the IDesignTimeDbContextFactory<TContext> interface, which allows design-time services to discover implementations of it. Of course, the TContext parameter is our RepositoryContext class.

For this, we need to add two using directives:

using Microsoft.EntityFrameworkCore.Design; 
using Repository;

Then, we have to implement this interface with the CreateDbContext method. Inside it, we create the configuration variable of the IConfigurationRoot type and specify the appsettings file we want to use. With its help, we can use the GetConnectionString method to access the connection string from the appsettings.json file. Moreover, to be able to use the UseSqlServer method, we need to install the Microsoft.EntityFrameworkCore.SqlServer package in the main project and add one more using directive:

using Microsoft.EntityFrameworkCore;

If we navigate to the GetConnectionString method definition, we will see that it is an extension method that uses the ConnectionStrings name from the appsettings.json file to fetch the connection string by the provided key:

alt text

Finally, in the CreateDbContext method, we return a new instance of our RepositoryContext class with provided options.

3.4 Migration and Initial Data Seed

Migration is a standard process of creating and updating the database from our application. Since we are finished with the database model creation, we can transfer that model to the real database. But we need to modify our CreateDbContext method first:‌

var builder = new DbContextOptionsBuilder<RepositoryContext>()
    .UseSqlServer(configuration.GetConnectionString("sqlConnection"),
        b => b.MigrationsAssembly("CompanyEmployees"));

We have to make this change because, by default, EF Core places the migration files in the assembly that contains the DbContext class, which is the Repository project in our case. With the MigrationsAssembly call, we instruct EF Core to use the main CompanyEmployees project for the migrations instead.

Before we execute our migration commands, we have to install an additional EF Core library: Microsoft.EntityFrameworkCore.Tools.

Now, let’s open the Package Manager Console window and create our first migration:

PM> Add-Migration DatabaseCreation

With this command, we are creating migration files and we can find them in the Migrations folder in our main project:

alt text

With those files in place, we can apply migration:

PM> Update-Database

Excellent. We can inspect our database now:

alt text

Once we have the database and tables created, we should populate them with some initial data. To do that, we are going to create another folder in the Repository project called Configuration and add the CompanyConfiguration class:

public class CompanyConfiguration : IEntityTypeConfiguration<Company>
{
    public void Configure(EntityTypeBuilder<Company> builder)
    {
        builder.HasData
        (
            new Company
            {
                Id = new Guid("c9d4c053-49b6-410c-bc78-2d54a9991870"),
                Name = "IT_Solutions Ltd",
                Address = "583 Wall Dr. Gwynn Oak, MD 21207",
                Country = "USA"
            },
            new Company
            {
                Id = new Guid("3d490a70-94ce-4d15-9494-5248280c2ce3"),
                Name = "Admin_Solutions Ltd",
                Address = "312 Forest Avenue, BF 923",
                Country = "USA"
            }
        );
    }
}

Let’s do the same thing for the EmployeeConfiguration class:

public class EmployeeConfiguration : IEntityTypeConfiguration<Employee>
{
    public void Configure(EntityTypeBuilder<Employee> builder)
    {
        builder.HasData
        (
            new Employee
            {
                Id = new Guid("80abbca8-664d-4b20-b5de-024705497d4a"),
                Name = "Sam Raiden",
                Age = 26,
                Position = "Software developer",
                CompanyId = new Guid("c9d4c053-49b6-410c-bc78-2d54a9991870")
            },
            new Employee
            {
                Id = new Guid("86dba8c0-d178-41e7-938c-ed49778fb52a"),
                Name = "Jana McLeaf",
                Age = 30,
                Position = "Software developer",
                CompanyId = new Guid("c9d4c053-49b6-410c-bc78-2d54a9991870")
            },
            new Employee
            {
                Id = new Guid("021ca3c1-0deb-4afd-ae94-2159a8479811"),
                Name = "Kane Miller",
                Age = 35,
                Position = "Administrator",
                CompanyId = new Guid("3d490a70-94ce-4d15-9494-5248280c2ce3")
            }
        );
    }
}

To invoke this configuration, we have to change the RepositoryContext class:

public class RepositoryContext : DbContext
{
    public RepositoryContext(DbContextOptions options)
        : base(options)
    {
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.ApplyConfiguration(new CompanyConfiguration());
        modelBuilder.ApplyConfiguration(new EmployeeConfiguration());
    }

    public DbSet<Company> Companies { get; set; }
    public DbSet<Employee> Employees { get; set; }
}

Now, we can create and apply another migration to seed this data into the database:

PM> Add-Migration InitialData
PM> Update-Database

This will transfer all the data from our configuration files to the respective tables.

3.5 Repository Pattern Logic

Now that we have created the database and established a connection to it, it's time to create a generic repository that will provide us with the CRUD methods. As a result, all the methods can be called upon any repository class in our project.

Furthermore, creating the generic repository and repository classes that use that generic repository is not going to be the final step. We will go a step further and create a wrapper class around repository classes and inject it as a service in a dependency injection container.

Consequently, we will be able to instantiate this class once and then call any repository class we need inside any of our controllers.

The advantages of this approach will become clearer once we use it in the project.

That said, let’s start by creating an interface for the repository inside the Contracts project:

public interface IRepositoryBase<T>
{
    IQueryable<T> FindAll(bool trackChanges);
    IQueryable<T> FindByCondition(Expression<Func<T, bool>> expression, bool trackChanges);
    void Create(T entity);
    void Update(T entity);
    void Delete(T entity);
}

Right after the interface creation, we are going to reference Contracts inside the Repository project. Also, in the Repository project, we are going to create an abstract class RepositoryBase — which is going to implement the IRepositoryBase interface:

public abstract class RepositoryBase<T> : IRepositoryBase<T> where T : class
{
    protected RepositoryContext RepositoryContext;

    public RepositoryBase(RepositoryContext repositoryContext)
        => RepositoryContext = repositoryContext;

    public IQueryable<T> FindAll(bool trackChanges) =>
        !trackChanges
            ? RepositoryContext.Set<T>().AsNoTracking()
            : RepositoryContext.Set<T>();

    public IQueryable<T> FindByCondition(Expression<Func<T, bool>> expression, bool trackChanges) =>
        !trackChanges
            ? RepositoryContext.Set<T>().Where(expression).AsNoTracking()
            : RepositoryContext.Set<T>().Where(expression);

    public void Create(T entity) => RepositoryContext.Set<T>().Add(entity);

    public void Update(T entity) => RepositoryContext.Set<T>().Update(entity);

    public void Delete(T entity) => RepositoryContext.Set<T>().Remove(entity);
}

This abstract class as well as the IRepositoryBase interface work with the generic type T. This type T gives even more reusability to the RepositoryBase class. That means we don’t have to specify the exact model (class) right now for the RepositoryBase to work with. We can do that later on.

Moreover, we can see the trackChanges parameter. We are going to use it to improve our read-only query performance. When it’s set to false, we attach the AsNoTracking method to our query to inform EF Core that it doesn’t need to track changes for the required entities. This greatly improves the speed of a query.
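As a sketch of how a derived repository might use the trackChanges parameter (the GetAllCompanies and GetCompany methods are illustrative here; we have not added any methods to the user repository classes yet):

```csharp
// Hypothetical methods on a repository class deriving from RepositoryBase<T>.
public class CompanyRepository : RepositoryBase<Company>, ICompanyRepository
{
    public CompanyRepository(RepositoryContext repositoryContext)
        : base(repositoryContext)
    {
    }

    // Read-only query: passing trackChanges = false appends AsNoTracking,
    // so EF Core skips change-tracking snapshots for the returned entities.
    public IEnumerable<Company> GetAllCompanies(bool trackChanges) =>
        FindAll(trackChanges)
            .OrderBy(c => c.Name)
            .ToList();

    // Query intended for modification: trackChanges = true keeps tracking on,
    // so a later SaveChanges call picks up edits to the returned entity.
    public Company? GetCompany(Guid companyId, bool trackChanges) =>
        FindByCondition(c => c.Id.Equals(companyId), trackChanges)
            .SingleOrDefault();
}
```

The caller decides per query whether tracking is needed, which keeps the read-heavy paths of the API fast without giving up change tracking where updates happen.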

3.6 Repository User Interfaces and Classes

Now that we have the RepositoryBase class, let’s create the user classes that will inherit this abstract class.‌

By inheriting from the RepositoryBase class, they will have access to all the methods from it. Furthermore, every user class will have its interface for additional model-specific methods.

This way, we are separating the logic that is common for all our repository user classes and also specific for every user class itself.

Let’s create the interfaces in the Contracts project for the Company and Employee classes:

namespace Contracts
{
    public interface ICompanyRepository
    {
    }
}

namespace Contracts
{
    public interface IEmployeeRepository
    {
    }
}

After this, we can create repository user classes in the Repository project.

The first thing we are going to do is to create the CompanyRepository class:

public class CompanyRepository : RepositoryBase<Company>, ICompanyRepository
{
    public CompanyRepository(RepositoryContext repositoryContext)
        : base(repositoryContext)
    {
    }
}

And then, the EmployeeRepository class:

public class EmployeeRepository : RepositoryBase<Employee>, IEmployeeRepository
{
    public EmployeeRepository(RepositoryContext repositoryContext)
        : base(repositoryContext)
    {
    }
}

After these steps, we are finished creating the repository and repository user classes. But there are still more things to do.

3.7 Creating a Repository Manager

It is quite common for the API to return a response that consists of data from multiple resources; for example, all the companies and just some employees older than 30. In such a case, we would have to instantiate both of our repository classes and fetch data from their resources.‌

Maybe it’s not a problem when we have only two classes, but what if we need the combined logic of five or even more different classes? It would just be too complicated to pull that off.

With that in mind, we are going to create a repository manager class, which will create instances of repository user classes for us and then register them inside the dependency injection container. After that, we can inject it inside our services with constructor injection (supported by ASP.NET Core). With the repository manager class in place, we may call any repository user class we need.

But we are also missing one important part. We have the Create, Update, and Delete methods in the RepositoryBase class, but they won’t make any change in the database until we call the SaveChanges method. Our repository manager class will handle that as well.

That said, let’s get to it and create a new interface in the Contract project:

public interface IRepositoryManager
{
    ICompanyRepository Company { get; }
    IEmployeeRepository Employee { get; }
    void Save();
}

And add a new class to the Repository project:

public sealed class RepositoryManager : IRepositoryManager
{
    private readonly RepositoryContext _repositoryContext;
    private readonly Lazy<ICompanyRepository> _companyRepository;
    private readonly Lazy<IEmployeeRepository> _employeeRepository;

    public RepositoryManager(RepositoryContext repositoryContext)
    {
        _repositoryContext = repositoryContext;
        _companyRepository = new Lazy<ICompanyRepository>(() =>
            new CompanyRepository(repositoryContext));
        _employeeRepository = new Lazy<IEmployeeRepository>(() =>
            new EmployeeRepository(repositoryContext));
    }

    public ICompanyRepository Company => _companyRepository.Value;

    public IEmployeeRepository Employee => _employeeRepository.Value;

    public void Save() => _repositoryContext.SaveChanges();
}

As you can see, we are creating properties that will expose the concrete repositories, and we also have the Save() method to be used after all the modifications are finished on a certain object. This is a good practice because now we can, for example, add two companies, modify two employees, and delete one company, all in one action, and then just call the Save method once. Either all the changes will be applied or, if something fails, all of them will be reverted:

_repository.Company.Create(company);
_repository.Company.Create(anotherCompany);
_repository.Employee.Update(employee);
_repository.Employee.Update(anotherEmployee);
_repository.Company.Delete(oldCompany);
_repository.Save();

The interesting part with the RepositoryManager implementation is that we are leveraging the power of the Lazy class to ensure the lazy initialization of our repositories. This means that our repository instances are only going to be created when we access them for the first time, and not before that.
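The deferred behavior of Lazy<T> is easy to demonstrate in isolation. In this sketch a string stands in for a repository instance; the factory delegate runs exactly once, on first access:

```csharp
using System;

var created = 0;
var lazyRepository = new Lazy<string>(() =>
{
    created++;                        // runs only on the first .Value access
    return "CompanyRepository instance";
});

Console.WriteLine(created);           // 0 — nothing has been created yet
var first = lazyRepository.Value;     // first access triggers the factory
var second = lazyRepository.Value;    // subsequent accesses reuse the instance
Console.WriteLine(created);           // 1 — the factory ran exactly once
```

This is why a controller that only touches the Company repository never pays the cost of constructing the Employee repository.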

After these changes, we need to register our manager class in the main project. So, let’s first modify the ServiceExtensions class by adding this code:

public static void ConfigureRepositoryManager(this IServiceCollection services) =>
    services.AddScoped<IRepositoryManager, RepositoryManager>();

And in the Program class, above the AddControllers() method, we have to add this code:

builder.Services.ConfigureRepositoryManager();

Excellent.

As soon as we add some methods to the specific repository classes, and add our service layer, we are going to be able to test this logic.

So, we did an excellent job here. The repository layer is prepared and ready to be used to fetch data from the database.

Now, we can continue towards creating a service layer in our application.

3.8 Adding a Service Layer

The Service layer sits right above the Domain layer (the Contracts project is part of the Domain layer), which means it has a reference to the Domain layer. The Service layer will be split into two projects, Service.Contracts and Service.

So, let’s start with the Service.Contracts project creation (.NET Core Class Library) where we will hold the definitions for the service interfaces that are going to encapsulate the main business logic. In the next section, we are going to create a presentation layer and then, we will see the full use of this project.

Once the project is created, we are going to add three interfaces inside it.

ICompanyService:

public interface ICompanyService { }

IEmployeeService:

public interface IEmployeeService { }

And IServiceManager:

public interface IServiceManager { ICompanyService CompanyService { get; } IEmployeeService EmployeeService { get; } }

As you can see, we are following the same pattern as with the repository contracts implementation.

Now, we can create another project, name it Service, and reference the Service.Contracts and Contracts projects inside it:

alt text

After that, we are going to create classes that will inherit from the interfaces that reside in the Service.Contracts project.

So, let’s start with the CompanyService class:

using Contracts;
using Service.Contracts;

namespace Service
{
    internal sealed class CompanyService : ICompanyService
    {
        private readonly IRepositoryManager _repository;
        private readonly ILoggerManager _logger;

        public CompanyService(IRepositoryManager repository, ILoggerManager logger)
        {
            _repository = repository;
            _logger = logger;
        }
    }
}

As you can see, our class inherits from the ICompanyService interface, and we are injecting the IRepositoryManager and ILoggerManager interfaces. We are going to use IRepositoryManager to access the repository methods from each user repository class (CompanyRepository or EmployeeRepository), and ILoggerManager to access the logging methods we’ve created in the second section of this book.

To continue, let’s create a new EmployeeService class:

using Contracts;
using Service.Contracts;

namespace Service
{
    internal sealed class EmployeeService : IEmployeeService
    {
        private readonly IRepositoryManager _repository;
        private readonly ILoggerManager _logger;

        public EmployeeService(IRepositoryManager repository, ILoggerManager logger)
        {
            _repository = repository;
            _logger = logger;
        }
    }
}

Finally, we are going to create the ServiceManager class:

public sealed class ServiceManager : IServiceManager
{
    private readonly Lazy<ICompanyService> _companyService;
    private readonly Lazy<IEmployeeService> _employeeService;

    public ServiceManager(IRepositoryManager repositoryManager, ILoggerManager logger)
    {
        _companyService = new Lazy<ICompanyService>(() =>
            new CompanyService(repositoryManager, logger));
        _employeeService = new Lazy<IEmployeeService>(() =>
            new EmployeeService(repositoryManager, logger));
    }

    public ICompanyService CompanyService => _companyService.Value;

    public IEmployeeService EmployeeService => _employeeService.Value;
}

Here, as we did with the RepositoryManager class, we are utilizing the Lazy class to ensure the lazy initialization of our services.

Now, with all these in place, we have to add the reference from the Service project inside the main project. Since Service is already referencing Service.Contracts, our main project will have the same reference as well.

Now, we have to modify the ServiceExtensions class:

public static void ConfigureServiceManager(this IServiceCollection services) =>
    services.AddScoped<IServiceManager, ServiceManager>();

And we have to add using directives:

using Service; 
using Service.Contracts;

Then, all we have to do is to modify the Program class to call this extension method:

builder.Services.ConfigureRepositoryManager();
builder.Services.ConfigureServiceManager();

3.9 Registering RepositoryContext at Runtime

With the RepositoryContextFactory class, which implements the IDesignTimeDbContextFactory interface, we have registered our RepositoryContext class at design time. This helps us find the RepositoryContext class in another project while executing migrations.‌

But, as you could see, we have the RepositoryManager service registration, which happens at runtime. During that registration, RepositoryContext must be registered at runtime as well, so that we can inject it into other services (like the RepositoryManager service). This might be a bit confusing, so let’s see what that means for us.

Let’s modify the ServiceExtensions class:

public static void ConfigureSqlContext(this IServiceCollection services, IConfiguration configuration) =>
    services.AddDbContext<RepositoryContext>(opts =>
        opts.UseSqlServer(configuration.GetConnectionString("sqlConnection")));

We are not specifying the MigrationsAssembly inside the UseSqlServer method. We don’t need it in this case.

As the final step, we have to call this method in the Program class:

builder.Services.ConfigureSqlContext(builder.Configuration);

With this, we have completed our implementation, and our service layer is ready to be used in our next chapter where we are going to learn about handling GET requests in ASP.NET Core Web API.

One additional thing. From .NET 6 RC2, there is a shortcut method AddSqlServer, which can be used like this:

public static void ConfigureSqlContext(this IServiceCollection services, IConfiguration configuration) =>
    services.AddSqlServer<RepositoryContext>(configuration.GetConnectionString("sqlConnection"));

This method replaces both the AddDbContext and UseSqlServer methods and allows an easier configuration. But it doesn’t provide all of the features the AddDbContext method provides. So, for more advanced configuration, it is recommended to use AddDbContext, and that is what we will use throughout the rest of the project.

4 HANDLING GET REQUESTS

We’re all set to add some business logic to our application. But before we do that, let’s talk a bit about controller classes and routing because they play an important part while working with HTTP requests.‌

4.1 Controllers and Routing in WEB API

Controllers should only be responsible for handling requests, model validation, and returning responses to the frontend or some HTTP client. Keeping business logic away from controllers is a good way to keep them lightweight, and our code more readable and maintainable.‌

If you want to create the controller in the main project, you would right-click on the Controllers folder and then choose Add => Controller. Then, from the menu, you would choose API Controller Class and give it a name:

alt text

But, that’s not the thing we are going to do. We don’t want to create our controllers in the main project.

What we are going to do instead is create a presentation layer in our application.

The purpose of the presentation layer is to provide the entry point to our system so that consumers can interact with the data. We can implement this layer in many ways, for example creating a REST API, gRPC, etc.

However, we are going to do something different from what you are normally used to when creating Web APIs. By convention, controllers are defined in the Controllers folder inside the main project.

Why is this a problem?

Because ASP.NET Core uses Dependency Injection everywhere, we need to have a reference to all of the projects in the solution from the main project. This allows us to configure our services inside the Program class.

While this is exactly what we want to do, it introduces a big design flaw. What’s preventing our controllers from injecting anything they want inside the constructor?

So how can we impose some more strict rules about what controllers can do?

Do you remember how we split the Service layer into the Service.Contracts and Service projects? That was one piece of the puzzle.

Another part of the puzzle is the creation of a new class library project, CompanyEmployees.Presentation.

Inside that new project, we are going to install the Microsoft.AspNetCore.Mvc.Core package so it has access to the ControllerBase class for our future controllers. Additionally, let’s create a single class inside the Presentation project:

public static class AssemblyReference {}

It's an empty static class that we are going to use for the assembly reference inside the main project; you will see how in a minute.

One more thing we have to do is reference the Service.Contracts project inside the Presentation project.

Now, we are going to delete the Controllers folder and the WeatherForecast.cs file from the main project because we are not going to need them anymore.

Next, we have to reference the Presentation project inside the main one. As you can see, our presentation layer depends only on the service contracts, thus imposing more strict rules on our controllers.

Then, we have to modify the Program.cs file:

builder.Services.AddControllers()
    .AddApplicationPart(typeof(CompanyEmployees.Presentation.AssemblyReference).Assembly);

Without this code, our API wouldn’t work, and wouldn’t know where to route incoming requests. But now, our app will find all of the controllers inside of the Presentation project and configure them with the framework. They are going to be treated the same as if they were defined conventionally.

But, we don’t have our controllers yet. So, let’s navigate to the Presentation project, create a new folder named Controllers, and then a new class named CompaniesController. Since this is a class library project, we don’t have an option to create a controller as we had in the main project. Therefore, we have to create a regular class and then modify it:

using Microsoft.AspNetCore.Mvc;

namespace CompanyEmployees.Presentation.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class CompaniesController : ControllerBase
    {
    }
}

We’ve created this controller in the same way the main project would.

Every web API controller class inherits from the ControllerBase abstract class, which provides all necessary behavior for the derived class.

Also, above the controller class we can see this part of the code:

[Route("api/[controller]")]

This attribute represents routing and we are going to talk more about routing inside Web APIs.

Web API routing routes incoming HTTP requests to the particular action method inside the Web API controller. As soon as we send our HTTP request, the MVC framework parses that request and tries to match it to an action in the controller.

There are two ways to implement routing in the project:

• Convention-based routing and
• Attribute routing

Convention-based routing is called such because it establishes a convention for the URL paths. The first part creates the mapping for the controller name, the second part creates the mapping for the action method, and the third part is used for the optional parameter. We can configure this type of routing in the Program class:

alt text

Our Web API project doesn’t configure routes this way, but if you create an MVC project this will be the default route configuration. Of course, if you are using this type of route configuration, you have to use the app.UseRouting method to add the routing middleware in the application’s pipeline.

If you inspect the Program class in our main project, you won’t find the UseRouting method because the routes are configured with the app.MapControllers method, which adds endpoints for controller actions without specifying any routes.

Attribute routing uses the attributes to map the routes directly to the action methods inside the controller. Usually, we place the base route above the controller class, as you can see in our Web API controller class. Similarly, for the specific action methods, we create their routes right above them.

While working with the Web API project, the ASP.NET Core team suggests that we shouldn’t use Convention-based Routing, but Attribute routing instead.

Different actions can be executed on the resource with the same URI, but with different HTTP Methods. In the same manner for different actions, we can use the same HTTP Method, but different URIs. Let’s explain this quickly.

For a GET, POST, or DELETE request, we can use the same URI, /api/companies, with a different HTTP method. Conversely, if we send a request for all companies or for just one company, we use the same GET method but different URIs (/api/companies for all companies and /api/companies/{companyId} for a single company).
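To make this concrete, here is a hypothetical sketch of how those combinations might look as attribute-routed actions (the action names and bodies are illustrative only):

```csharp
[Route("api/companies")]
[ApiController]
public class CompaniesSketchController : ControllerBase
{
    [HttpGet]                     // GET  /api/companies            - same URI...
    public IActionResult GetAll() => Ok();

    [HttpPost]                    // POST /api/companies            - ...different HTTP method
    public IActionResult Create() => Ok();

    [HttpGet("{companyId:guid}")] // GET  /api/companies/{companyId} - same method, different URI
    public IActionResult GetOne(Guid companyId) => Ok();
}
```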

We are going to understand this even more once we start implementing different actions in our controller.

4.2 Naming Our Resources

The resource name in the URI should always be a noun and not an action. That means if we want to create a route to get all companies, we should create this route: /api/companies, and not this one: /api/getCompanies.

The noun used in the URI represents the resource and helps the consumer understand what type of resource we are working with. So, we shouldn’t choose the noun products or orders when we work with the companies resource; the noun should always be companies. Therefore, following this convention, if our resource is employees (and we are going to work with this type of resource), the noun should be employees.

Another important part we need to pay attention to is the hierarchy between our resources. In our example, we have a Company as a principal entity and an Employee as a dependent entity. When we create a route for a dependent entity, we should follow a slightly different convention: /api/principalResource/{principalId}/dependentResource.

Because our employees can’t exist without a company, the route for the employees resource should be /api/companies/{companyId}/employees.

With all of this in mind, we can start with the Get requests.

4.3 Getting All Companies From the Database

So let’s start.‌

The first thing we are going to do is to change the base route from [Route("api/[controller]")] to [Route("api/companies")]. Even though the first route would work just fine, the second is more explicit: it states directly that this route points to the CompaniesController class.

Now it is time to create the first action method to return all the companies from the database. Let’s create a definition for the GetAllCompanies method in the ICompanyRepository interface:

public interface ICompanyRepository
{
    IEnumerable<Company> GetAllCompanies(bool trackChanges);
}

For this to work, we need to add a reference from the Entities project to the Contracts project.

Now, we can continue with the interface implementation in the CompanyRepository class:

internal sealed class CompanyRepository : RepositoryBase<Company>, ICompanyRepository
{
    public CompanyRepository(RepositoryContext repositoryContext)
        : base(repositoryContext)
    {
    }

    public IEnumerable<Company> GetAllCompanies(bool trackChanges) =>
        FindAll(trackChanges)
            .OrderBy(c => c.Name)
            .ToList();
}

As you can see, we are calling the FindAll method from the RepositoryBase class, ordering the result with the OrderBy method, and then executing the query with the ToList method.

After the repository implementation, we have to implement a service layer.

Let’s start with the ICompanyService interface modification:

public interface ICompanyService
{
    IEnumerable<Company> GetAllCompanies(bool trackChanges);
}

Since the Company model resides in the Entities project, we have to add the Entities reference to the Service.Contracts project. At least for now.

Let’s be clear right away before we proceed. Getting all the entities from the database is a bad idea. We’re going to start with the simplest method and change it later on.

Then, let’s continue with the CompanyService modification:

internal sealed class CompanyService : ICompanyService
{
    private readonly IRepositoryManager _repository;
    private readonly ILoggerManager _logger;

    public CompanyService(IRepositoryManager repository, ILoggerManager logger)
    {
        _repository = repository;
        _logger = logger;
    }

    public IEnumerable<Company> GetAllCompanies(bool trackChanges)
    {
        try
        {
            var companies = _repository.Company.GetAllCompanies(trackChanges);
            return companies;
        }
        catch (Exception ex)
        {
            _logger.LogError($"Something went wrong in the {nameof(GetAllCompanies)} service method {ex}");
            throw;
        }
    }
}

We are using our repository manager to call the GetAllCompanies method from the CompanyRepository class and return all the companies from the database.

Finally, we have to return companies by using the GetAllCompanies method inside the Web API controller.

The purpose of the action methods inside the Web API controllers is not only to return results. It is the main purpose, but not the only one. We need to pay attention to the status codes of our Web API responses as well. Additionally, we are going to decorate our actions with the HTTP attributes which will mark the type of the HTTP request to that action.

So, let’s modify the CompaniesController:

[Route("api/companies")]
[ApiController]
public class CompaniesController : ControllerBase
{
    private readonly IServiceManager _service;

    public CompaniesController(IServiceManager service) => _service = service;

    [HttpGet]
    public IActionResult GetCompanies()
    {
        try
        {
            var companies = _service.CompanyService.GetAllCompanies(trackChanges: false);
            return Ok(companies);
        }
        catch
        {
            return StatusCode(500, "Internal server error");
        }
    }
}

Let’s explain this code a bit.

First of all, we inject the IServiceManager interface inside the constructor. Then by decorating the GetCompanies action with

the [HttpGet] attribute, we are mapping this action to the GET request. Then, we use an injected service to call the service method that gets the data from the repository class.

The IActionResult interface supports a variety of return methods, which return not only the result but also the status code. In this situation, the Ok method returns all the companies along with the status code 200, which stands for OK. If an exception occurs, we return an internal server error with the status code 500.

Because there is no route attribute right above the action, the route for the GetCompanies action will be api/companies which is the route placed on top of our controller.

4.4 Testing the Result with Postman

To check the result, we are going to use a great tool named Postman, which helps a lot with sending requests and displaying responses. If you download our exercise files, you will find the file Bonus 2- CompanyEmployeesRequests.postman_collection.json, which contains a request collection divided for each chapter of this book. You can import them into Postman to save yourself the time of manually typing them:

[Figure: importing the Postman request collection]

NOTE: Please note that some GUID values will be different for your project, so you have to change them according to those values.

So let’s start the application by pressing the F5 button and check that it is now listening on the https://localhost:5001 address:

[Figure: the application listening on https://localhost:5001]

If this is not the case, you probably ran it in the IIS mode; so turn the application off and start it again, but in the CompanyEmployees mode:

[Figure: selecting the CompanyEmployees launch profile]

Now, we can use Postman to test the result: https://localhost:5001/api/companies

[Figure: Postman response with all companies]

Excellent, everything is working as planned. But we are missing something. We are using the Company entity to map our requests to the database and then returning it as a result to the client, and this is not a good practice. So, in the next part, we are going to learn how to improve our code with DTO classes.

4.5 DTO Classes vs. Entity Model Classes

A data transfer object (DTO) is an object that we use to transport data between the client and server applications.‌

So, as we said in a previous section of this book, it is not a good practice to return entities in the Web API response; we should instead use data transfer objects. But why is that?

Well, EF Core uses model classes to map to the tables in the database, and that is the main purpose of a model class. But as we saw, our models have navigation properties, and sometimes we don’t want to map them in an API response. So, we can use a DTO to remove properties or concatenate several properties into a single one.

Moreover, there are situations where we want to map all the properties from a model class to the result, but still use a DTO. The reason is that if we change the database, we also have to change the properties in the model, but that doesn’t mean our clients want the result changed. By using a DTO, the result stays the same even after the model changes.

As we can see, keeping these objects separate (the DTO and model classes) leads to a more robust and maintainable code in our application.

Now that we know why we should separate DTOs from model classes in our code, let’s create a new project named Shared and then a new folder DataTransferObjects with the CompanyDto record inside:

namespace Shared.DataTransferObjects
{
    public record CompanyDto(Guid Id, string Name, string FullAddress);
}

Instead of a regular class, we are using a record for DTO. This specific record type is known as a Positional record.

A record type provides us an easier way to create an immutable reference type in .NET. This means that a record instance’s property values cannot change after initialization. Also, the equality of two records is verified by comparing the values of their properties rather than their references.

Records can be a valid alternative to classes when we have to send or receive data. The very purpose of a DTO is to transfer data from one part of the code to another, and immutability in many cases is useful. We use them to return data from a Web API or to represent events in our application.

This is the exact reason why we are using records for our DTOs.
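A quick, self-contained sketch of the value-based equality and immutability that positional records give us (the PointDto type here is illustrative, not part of the project):

```csharp
var a = new PointDto(1, 2);
var b = new PointDto(1, 2);

Console.WriteLine(a == b);   // True: equality compares property values, not references

// a.X = 5;                  // would not compile: positional record properties are init-only
var c = a with { X = 5 };    // non-destructive mutation: creates a modified copy
Console.WriteLine(c);        // PointDto { X = 5, Y = 2 }

public record PointDto(int X, int Y);
```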

In our DTO, we have removed the Employees property and we are going to use the FullAddress property to concatenate the Address and Country properties from the Company class. Furthermore, we are not using validation attributes in this record, because we are going to use this record only to return a response to the client. Therefore, validation attributes are not required.

So, the first thing we have to do is to add the reference from the Shared project to the Service.Contracts project, and remove the Entities reference. At this moment the Service.Contracts project is only referencing the Shared project.

Then, we have to modify the ICompanyService interface:

public interface ICompanyService
{
    IEnumerable<CompanyDto> GetAllCompanies(bool trackChanges);
}

And the CompanyService class:

public IEnumerable<CompanyDto> GetAllCompanies(bool trackChanges)
{
    try
    {
        var companies = _repository.Company.GetAllCompanies(trackChanges);

        var companiesDto = companies.Select(c =>
            new CompanyDto(c.Id, c.Name ?? "", string.Join(' ', c.Address, c.Country)))
            .ToList();

        return companiesDto;
    }
    catch (Exception ex)
    {
        _logger.LogError($"Something went wrong in the {nameof(GetAllCompanies)} service method {ex}");
        throw;
    }
}

Let’s start our application and test it with the same request from Postman: https://localhost:5001/api/companies

[Figure: Postman response with the CompanyDto results]

This time we get our CompanyDto result, which is the preferred approach. But this can be improved as well. If we take a look at our mapping code in the GetAllCompanies service method, we can see that we manually map all the properties. Sure, it is okay for a few fields, but what if we have a lot more? There is a better and cleaner way to map our classes, and that is by using AutoMapper.

4.6 Using AutoMapper in ASP.NET Core

AutoMapper is a library that helps us with mapping objects in our applications. By using this library, we are going to remove the code for manual mapping — thus making the action readable and maintainable.‌

So, to install AutoMapper, let’s open a Package Manager Console window, choose the Service project as a default project from the drop-down list, and run the following command:

PM> Install-Package AutoMapper.Extensions.Microsoft.DependencyInjection

After installation, we are going to register this library in the Program class:

builder.Services.AddAutoMapper(typeof(Program));

As soon as our library is registered, we are going to create a profile class, also in the main project, where we specify the source and destination objects for mapping:

public class MappingProfile : Profile
{
    public MappingProfile()
    {
        CreateMap<Company, CompanyDto>()
            .ForMember(c => c.FullAddress,
                opt => opt.MapFrom(x => string.Join(' ', x.Address, x.Country)));
    }
}

The MappingProfile class must inherit from the AutoMapper’s Profile class. In the constructor, we are using the CreateMap method where we specify the source object and the destination object to map to. Because we have the FullAddress property in our DTO record, which contains both the Address and the Country from the model class, we have to specify additional mapping rules with the ForMember method.

Now, we have to modify the ServiceManager class to enable DI in our service classes:

public sealed class ServiceManager : IServiceManager
{
    private readonly Lazy<ICompanyService> _companyService;
    private readonly Lazy<IEmployeeService> _employeeService;

    public ServiceManager(IRepositoryManager repositoryManager, ILoggerManager logger, IMapper mapper)
    {
        _companyService = new Lazy<ICompanyService>(() =>
            new CompanyService(repositoryManager, logger, mapper));
        _employeeService = new Lazy<IEmployeeService>(() =>
            new EmployeeService(repositoryManager, logger, mapper));
    }

    public ICompanyService CompanyService => _companyService.Value;
    public IEmployeeService EmployeeService => _employeeService.Value;
}

Of course, now we have two errors regarding our service constructors. So we need to fix that in both CompanyService and EmployeeService classes:

internal sealed class CompanyService : ICompanyService
{
    private readonly IRepositoryManager _repository;
    private readonly ILoggerManager _logger;
    private readonly IMapper _mapper;

    public CompanyService(IRepositoryManager repository, ILoggerManager logger, IMapper mapper)
    {
        _repository = repository;
        _logger = logger;
        _mapper = mapper;
    }
    ...
}

We should do the same in the EmployeeService class:

internal sealed class EmployeeService : IEmployeeService
{
    private readonly IRepositoryManager _repository;
    private readonly ILoggerManager _logger;
    private readonly IMapper _mapper;

    public EmployeeService(IRepositoryManager repository, ILoggerManager logger, IMapper mapper)
    {
        _repository = repository;
        _logger = logger;
        _mapper = mapper;
    }
}

Finally, we can modify the GetAllCompanies method in the CompanyService class:

public IEnumerable<CompanyDto> GetAllCompanies(bool trackChanges)
{
    try
    {
        var companies = _repository.Company.GetAllCompanies(trackChanges);
        var companiesDto = _mapper.Map<IEnumerable<CompanyDto>>(companies);
        return companiesDto;
    }
    catch (Exception ex)
    {
        _logger.LogError($"Something went wrong in the {nameof(GetAllCompanies)} service method {ex}");
        throw;
    }
}

We are using the Map method, specifying the destination type and then passing the source object.

Excellent.

Now if we start our app and send the same request from Postman, we are going to get an error message:

[Figure: AutoMapper mapping error message]

This happens because AutoMapper is not able to find the specific FullAddress property as we specified in the MappingProfile class. We are intentionally showing this error for you to know what to do if it happens to you in your projects.

So to solve this, all we have to do is to modify the MappingProfile class:

public MappingProfile()
{
    CreateMap<Company, CompanyDto>()
        .ForCtorParam("FullAddress",
            opt => opt.MapFrom(x => string.Join(' ', x.Address, x.Country)));
}

This time, we are not using the ForMember method but the ForCtorParam method to specify the name of the parameter in the constructor that AutoMapper needs to map to.

Now, let’s use Postman again to send the request to test our app: https://localhost:5001/api/companies

[Figure: Postman response with the mapped CompanyDto results]

We can see that everything is working as it is supposed to, but now with much better code.

5 GLOBAL ERROR HANDLING

Exception handling helps us deal with the unexpected behavior of our system. To handle exceptions, we use the try-catch block in our code as well as the finally keyword to clean up our resources afterward.‌

Even though there is nothing wrong with the try-catch blocks in our Actions and methods in the Web API project, we can extract all the exception handling logic into a single centralized place. By doing that, we make our actions cleaner, more readable, and the error handling process more maintainable.

In this chapter, we are going to refactor our code to use the built-in middleware for global error handling to demonstrate the benefits of this approach. Since we already talked about the middleware in ASP.NET Core (in section 1.8), this section should be easier to understand.

5.1 Handling Errors Globally with the Built-In Middleware

The UseExceptionHandler middleware is a built-in middleware that we can use to handle exceptions. So, let’s dive into the code to see this middleware in action.‌

We are going to create a new ErrorModel folder in the Entities project, and add the new class ErrorDetails in that folder:

using System.Text.Json;

namespace Entities.ErrorModel
{
    public class ErrorDetails
    {
        public int StatusCode { get; set; }
        public string? Message { get; set; }

        public override string ToString() => JsonSerializer.Serialize(this);
    }
}

We are going to use this class for the details of our error message.

To continue, in the Extensions folder in the main project, we are going to add a new static class: ExceptionMiddlewareExtensions.cs.

Now, we need to modify it:

public static class ExceptionMiddlewareExtensions
{
    public static void ConfigureExceptionHandler(this WebApplication app, ILoggerManager logger)
    {
        app.UseExceptionHandler(appError =>
        {
            appError.Run(async context =>
            {
                context.Response.StatusCode = (int)HttpStatusCode.InternalServerError;
                context.Response.ContentType = "application/json";

                var contextFeature = context.Features.Get<IExceptionHandlerFeature>();
                if (contextFeature != null)
                {
                    logger.LogError($"Something went wrong: {contextFeature.Error}");

                    await context.Response.WriteAsync(new ErrorDetails()
                    {
                        StatusCode = context.Response.StatusCode,
                        Message = "Internal Server Error.",
                    }.ToString());
                }
            });
        });
    }
}

In the code above, we create an extension method on the WebApplication type and call the UseExceptionHandler method. That method adds a middleware to the pipeline that will catch exceptions, log them, and re-execute the request in an alternate pipeline.

Inside the UseExceptionHandler method, we use the appError variable of the IApplicationBuilder type. With that variable, we call the Run method, which adds a terminal middleware delegate to the application’s pipeline. This is something we already know from section 1.8.

Then, we populate the status code and the content type of our response, log the error message and finally return the response with the custom-created object. Later on, we are going to modify this middleware even more to support our business logic in a service layer.

Of course, there are several namespaces we should add to make this work:

using Contracts;
using Entities.ErrorModel;
using Microsoft.AspNetCore.Diagnostics;
using System.Net;

5.2 Program Class Modification

To be able to use this extension method, let’s modify the Program class:‌

var app = builder.Build();

var logger = app.Services.GetRequiredService<ILoggerManager>();
app.ConfigureExceptionHandler(logger);

if (app.Environment.IsProduction())
    app.UseHsts();

app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.All
});
app.UseCors("CorsPolicy");

app.UseAuthorization();

app.MapControllers();

app.Run();

Here, we first extract the ILoggerManager service inside the logger variable. Then, we just call the ConfigureExceptionHandler method and pass that logger service. It is important to know that we have to extract the ILoggerManager service after the var app = builder.Build() code line because the Build method builds the WebApplication and registers all the services added to the IoC container.

Additionally, we remove the call to the UseDeveloperExceptionPage method in the development environment since we don’t need it now and it also interferes with our error handler middleware.

Finally, let’s remove the try-catch block from the GetAllCompanies service method:

public IEnumerable<CompanyDto> GetAllCompanies(bool trackChanges)
{
    var companies = _repository.Company.GetAllCompanies(trackChanges);
    var companiesDto = _mapper.Map<IEnumerable<CompanyDto>>(companies);
    return companiesDto;
}

And from our GetCompanies action:

[HttpGet]
public IActionResult GetCompanies()
{
    var companies = _service.CompanyService.GetAllCompanies(trackChanges: false);
    return Ok(companies);
}

And there we go. Our methods are much cleaner now. More importantly, we can reuse this functionality to write more readable methods and actions in the future.

5.3 Testing the Result

To inspect this functionality, let’s add the following line to the GetCompanies action, just to simulate an error:

[HttpGet]
public IActionResult GetCompanies()
{
    throw new Exception("Exception");

    var companies = _service.CompanyService.GetAllCompanies(trackChanges: false);
    return Ok(companies);
}

NOTE: Once you send the request, Visual Studio will stop the execution inside the GetCompanies action on the line where we throw an exception. This is normal behavior and all you have to do is to click the continue button to finish the request flow. Additionally, you can start your app with CTRL+F5, which will prevent Visual Studio from stopping the execution. Also, if you want to start your app with F5 but still to avoid VS execution stoppages, you can open the Tools->Options->Debugging->General option and uncheck the Enable Just My Code checkbox.

And send a request from Postman: https://localhost:5001/api/companies

[Figure: Postman 500 response with the error object]

We can check our log messages to make sure that logging is working as well.

6 GETTING ADDITIONAL RESOURCES

As of now, we can continue with GET requests by adding additional actions to our controller. Moreover, we are going to create one more controller for the Employee resource and implement an additional action in it.‌

6.1 Getting a Single Resource From the Database

Let’s start by modifying the ICompanyRepository interface:

public interface ICompanyRepository
{
    IEnumerable<Company> GetAllCompanies(bool trackChanges);
    Company GetCompany(Guid companyId, bool trackChanges);
}

Then, we are going to implement this interface in the CompanyRepository.cs file:

public Company GetCompany(Guid companyId, bool trackChanges) =>
    FindByCondition(c => c.Id.Equals(companyId), trackChanges)
        .SingleOrDefault();

Then, we have to modify the ICompanyService interface:

public interface ICompanyService
{
    IEnumerable<CompanyDto> GetAllCompanies(bool trackChanges);
    CompanyDto GetCompany(Guid companyId, bool trackChanges);
}

And of course, we have to implement this interface in the CompanyService class:

public CompanyDto GetCompany(Guid id, bool trackChanges)
{
    var company = _repository.Company.GetCompany(id, trackChanges);
    //Check if the company is null
    var companyDto = _mapper.Map<CompanyDto>(company);
    return companyDto;
}

So, we are calling the repository method that fetches a single company from the database, maps the result to companyDto, and returns it. You can also see the comment about the null checks, which we are going to solve just in a minute.

Finally, let’s change the CompanyController class:

[HttpGet("{id:guid}")]
public IActionResult GetCompany(Guid id)
{
    var company = _service.CompanyService.GetCompany(id, trackChanges: false);
    return Ok(company);
}

The route for this action is /api/companies/id because the /api/companies part comes from the root route (on top of the controller) and the id part comes from the action attribute [HttpGet("{id:guid}")]. You can also see that we are using a route constraint (the :guid part), where we explicitly state that our id parameter must be of the GUID type. We can use many different constraints like int, double, long, float, datetime, bool, length, minlength, maxlength, and many others.
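As a hypothetical illustration of a few of those constraints (the actions below are not part of our project, just examples of the syntax):

```csharp
// Sketch: route constraints restrict which URL values an action will match.
[HttpGet("{id:int}")]                   // matches only integer ids
public IActionResult ByInt(int id) => Ok(id);

[HttpGet("{name:alpha:minlength(3)}")]  // alphabetic, at least 3 characters
public IActionResult ByName(string name) => Ok(name);

[HttpGet("{date:datetime}")]            // matches values parseable as a DateTime
public IActionResult ByDate(DateTime date) => Ok(date);
```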
Let’s use Postman to send a valid request towards our API: https://localhost:5001/api/companies/3d490a70-94ce-4d15-9494-5248280c2ce3

[Figure: Postman response with a single company]

Great. This works as expected. But, what if someone uses an invalid id parameter?

6.1.1 Handling Invalid Requests in a Service Layer

As you can see, in our service method, we have a comment stating that the result returned from the repository could be null, and this is something we have to handle. We want to return the NotFound response to the client but without involving our controller’s actions. We are going to keep them nice and clean as they already are.

So, what we are going to do is to create custom exceptions that we can call from the service methods and interrupt the flow. Then our error handling middleware can catch the exception, process the response, and return it to the client. This is a great way of handling invalid requests inside a service layer without having additional checks in our controllers.

That said, let’s start by creating a new Exceptions folder inside the Entities project. Since, in this case, we are going to create a not found response, let’s create a new NotFoundException class inside that folder:

public abstract class NotFoundException : Exception
{
    protected NotFoundException(string message)
        : base(message)
    {
    }
}

This is an abstract class, which will be a base class for all the individual not found exception classes. It inherits from the Exception class to represent the errors that happen during application execution. Since in our current case, we are handling the situation where we can’t find the company in the database, we are going to create a new CompanyNotFoundException class in the same Exceptions folder:

public sealed class CompanyNotFoundException : NotFoundException
{
    public CompanyNotFoundException(Guid companyId)
        : base($"The company with id: {companyId} doesn't exist in the database.")
    {
    }
}

Right after that, we can remove the comment in the GetCompany method and throw this exception:

public CompanyDto GetCompany(Guid id, bool trackChanges)
{
    var company = _repository.Company.GetCompany(id, trackChanges);
    if (company is null)
        throw new CompanyNotFoundException(id);

    var companyDto = _mapper.Map<CompanyDto>(company);
    return companyDto;
}

Finally, we have to modify our error middleware because we don’t want to return the 500 error message to our clients for every custom error we throw from the service layer.

So, let’s modify the ExceptionMiddlewareExtensions class in the main project:

public static class ExceptionMiddlewareExtensions
{
    public static void ConfigureExceptionHandler(this WebApplication app, ILoggerManager logger)
    {
        app.UseExceptionHandler(appError =>
        {
            appError.Run(async context =>
            {
                context.Response.ContentType = "application/json";

                var contextFeature = context.Features.Get<IExceptionHandlerFeature>();
                if (contextFeature != null)
                {
                    context.Response.StatusCode = contextFeature.Error switch
                    {
                        NotFoundException => StatusCodes.Status404NotFound,
                        _ => StatusCodes.Status500InternalServerError
                    };

                    logger.LogError($"Something went wrong: {contextFeature.Error}");

                    await context.Response.WriteAsync(new ErrorDetails()
                    {
                        StatusCode = context.Response.StatusCode,
                        Message = contextFeature.Error.Message,
                    }.ToString());
                }
            });
        });
    }
}

We remove the hardcoded StatusCode setup and add the part where we populate it based on the type of exception we throw in our service layer. We are also dynamically populating the Message property of the ErrorDetails object that we return as the response.

Additionally, you can see the advantage of using the base abstract exception class here (NotFoundException in this case). We are not checking for the specific class implementation but the base type. This allows us to have multiple not found classes that inherit from the NotFoundException class and this middleware will know that we want to return the NotFound response to the client.
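For instance, a hypothetical sketch of another subclass that the same NotFoundException switch arm would handle without any middleware changes (the class name and message are illustrative only):

```csharp
// Hypothetical: another "not found" exception; because it inherits from
// NotFoundException, the existing switch arm already maps it to a 404.
public sealed class EmployeeNotFoundException : NotFoundException
{
    public EmployeeNotFoundException(Guid employeeId)
        : base($"Employee with id: {employeeId} doesn't exist in the database.")
    {
    }
}
```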

Excellent. Now, we can start the app and send the invalid request: https://localhost:5001/api/companies/3d490a70-94ce-4d15-9494-5248280c2ce2

[Figure: Postman 404 response with the error object]

We can see the status code we require and also the response object with proper StatusCode and Message properties. Also, if you inspect the log message, you will see that we are logging a correct message.

With this approach, we have perfect control of all the exceptional cases in our app. We have that control due to global error handler implementation. For now, we only handle the invalid id sent from the client, but we will handle more exceptional cases in the rest of the project.

In our tests for a published app, the regular request sent from Postman took 7ms and the exceptional one took 14ms. So you can see how fast the response is.

Of course, we are using exceptions only for these exceptional cases (Company not found, Employee not found...) and not throwing them all over the application. So, if you follow the same strategy, you will not face any performance issues.

Lastly, if you have an application where you have to throw custom exceptions more often and maybe impact your performance, we are going to provide an alternative to exceptions in the first bonus chapter of this book (Chapter 32).

6.2 Parent/Child Relationships in Web API

Up until now, we have been working only with the company, which is a parent (principal) entity in our API. But for each company, we have a related employee (dependent entity). Every employee must be related to a certain company and we are going to create our URIs in that manner.‌

That said, let’s create a new controller in the Presentation project and name it EmployeesController:

[Route("api/companies/{companyId}/employees")]
[ApiController]
public class EmployeesController : ControllerBase
{
    private readonly IServiceManager _service;

    public EmployeesController(IServiceManager service) => _service = service;
}

We are familiar with this code, but our main route is a bit different. As we said, a single employee can’t exist without a company entity and this is exactly what we are exposing through this URI. To get an employee or employees from the database, we have to specify the companyId parameter, and that is something all actions will have in common. For that reason, we have specified this route as our root route.

Before we create an action to fetch all the employees per company, we have to modify the IEmployeeRepository interface:

public interface IEmployeeRepository
{
    IEnumerable<Employee> GetEmployees(Guid companyId, bool trackChanges);
}

After interface modification, we are going to modify the EmployeeRepository class:

public IEnumerable<Employee> GetEmployees(Guid companyId, bool trackChanges) =>
    FindByCondition(e => e.CompanyId.Equals(companyId), trackChanges)
        .OrderBy(e => e.Name)
        .ToList();

Then, before we start adding code to the service layer, we are going to create a new DTO. Let’s name it EmployeeDto and add it to the Shared/DataTransferObjects folder:

public record EmployeeDto(Guid Id, string Name, int Age, string Position);

Since we want to return this DTO to the client, we have to create a mapping rule inside the MappingProfile class:

public MappingProfile()
{
    CreateMap<Company, CompanyDto>()
        .ForCtorParam("FullAddress",
            opt => opt.MapFrom(x => string.Join(' ', x.Address, x.Country)));
    CreateMap<Employee, EmployeeDto>();
}

Now, we can modify the IEmployeeService interface:

public interface IEmployeeService
{
    IEnumerable<EmployeeDto> GetEmployees(Guid companyId, bool trackChanges);
}

And of course, we have to implement this interface in the EmployeeService class:

public IEnumerable<EmployeeDto> GetEmployees(Guid companyId, bool trackChanges)
{
    var company = _repository.Company.GetCompany(companyId, trackChanges);
    if (company is null)
        throw new CompanyNotFoundException(companyId);

    var employeesFromDb = _repository.Employee.GetEmployees(companyId, trackChanges);
    var employeesDto = _mapper.Map<IEnumerable<EmployeeDto>>(employeesFromDb);

    return employeesDto;
}

Here, we first fetch the company entity from the database. If it doesn’t exist, we throw a CompanyNotFoundException, which our exception-handling middleware turns into a NotFound response for the client. If it does exist, we fetch all the employees for that company, map them to a collection of EmployeeDto, and return it to the caller.

Finally, let’s modify the Employees controller:

[HttpGet]
public IActionResult GetEmployeesForCompany(Guid companyId)
{
    var employees = _service.EmployeeService.GetEmployees(companyId, trackChanges: false);
    return Ok(employees);
}

This code is pretty straightforward — nothing we haven’t seen so far — but we need to explain just one thing. As you can see, we have the companyId parameter in our action and this parameter will be mapped from the main route. For that reason, we didn’t place it in the [HttpGet] attribute as we did with the GetCompany action.

That done, we can send a request with a valid companyId: https://localhost:5001/api/companies/c9d4c053-49b6-410c-bc78-2d54a9991870/employees

alt text

And with an invalid companyId: https://localhost:5001/api/companies/c9d4c053-49b6-410c-bc78-2d54a9991873/employees

alt text

Excellent. Let’s continue by fetching a single employee.

6.3 Getting a Single Employee for Company

So, as we did in previous sections, let’s start with the‌ IEmployeeRepository interface modification:

public interface IEmployeeRepository
{
    IEnumerable<Employee> GetEmployees(Guid companyId, bool trackChanges);
    Employee GetEmployee(Guid companyId, Guid id, bool trackChanges);
}

Now, let’s implement this method in the EmployeeRepository class:

public Employee GetEmployee(Guid companyId, Guid id, bool trackChanges) =>
    FindByCondition(e => e.CompanyId.Equals(companyId) && e.Id.Equals(id), trackChanges)
    .SingleOrDefault();

Next, let’s add another exception class in the Entities/Exceptions folder:

public class EmployeeNotFoundException : NotFoundException
{
    public EmployeeNotFoundException(Guid employeeId)
        : base($"Employee with id: {employeeId} doesn't exist in the database.")
    {
    }
}

We will soon see why we need this class.

To continue, we have to modify the IEmployeeService interface:

public interface IEmployeeService
{
    IEnumerable<EmployeeDto> GetEmployees(Guid companyId, bool trackChanges);
    EmployeeDto GetEmployee(Guid companyId, Guid id, bool trackChanges);
}

And implement this new method in the EmployeeService class:

public EmployeeDto GetEmployee(Guid companyId, Guid id, bool trackChanges)
{
    var company = _repository.Company.GetCompany(companyId, trackChanges);
    if (company is null)
        throw new CompanyNotFoundException(companyId);

    var employeeDb = _repository.Employee.GetEmployee(companyId, id, trackChanges);
    if (employeeDb is null)
        throw new EmployeeNotFoundException(id);

    var employee = _mapper.Map<EmployeeDto>(employeeDb);
    return employee;
}

This code is also pretty clear, and here we can see the reason for creating the new exception class.

Finally, let’s modify the EmployeesController class:

[HttpGet("{id:guid}")]
public IActionResult GetEmployeeForCompany(Guid companyId, Guid id)
{
    var employee = _service.EmployeeService.GetEmployee(companyId, id, trackChanges: false);
    return Ok(employee);
}

Excellent. You can see how clear our action is.

We can test this action by using the already created requests from the Bonus 2-CompanyEmployeesRequests.postman_collection.json file placed in the folder with the exercise files: https://localhost:5001/api/companies/c9d4c053-49b6-410c-bc78-2d54a9991870/employees/86dba8c0-d178-41e7-938c-ed49778fb52a

alt text

When we send the request with an invalid company or employee id: https://localhost:5001/api/companies/c9d4c053-49b6-410c-bc78-2d54a9991870/employees/86dba8c0-d178-41e7-938c-ed49778fb52c

alt text

alt text

Our responses are pretty self-explanatory, which makes for a good user experience.

Until now, we have received only JSON formatted responses from our API. But what if we want to support some other format, like XML for example?

Well, in the next chapter we are going to learn more about Content Negotiation and enabling different formats for our responses.

7 CONTENT NEGOTIATION

Content negotiation is one of the quality-of-life improvements we can add to our REST API to make it more user-friendly and flexible. And when we design an API, isn’t that what we want to achieve in the first place?‌

Content negotiation is an HTTP feature that has been around for a while, but for one reason or another, it is often a bit underused.

In short, content negotiation lets you choose or rather “negotiate” the content you want to get in a response to the REST API request.

7.1 What Do We Get Out of the Box?

By default, ASP.NET Core Web API returns a JSON formatted result.‌

We can confirm that by looking at the response from the GetCompanies action: https://localhost:5001/api/companies

alt text

We can clearly see that the default result when calling GET on /api/companies returns the JSON result. We have also used the Accept header (as you can see in the picture above) to try forcing the server to return other media types like plain text and XML.

But that doesn’t work. Why?

Because we need to configure server formatters to format a response the way we want it.

Let’s see how to do that.

7.2 Changing the Default Configuration of Our Project

By default, a server formats responses as JSON without us explicitly configuring anything. But we can override that behavior by changing configuration options through the AddControllers method.

We can add the following options to enable the server to format the XML response when the client tries negotiating for it:

builder.Services.ConfigureCors();
builder.Services.ConfigureIISIntegration();
builder.Services.ConfigureLoggerService();
builder.Services.ConfigureRepositoryManager();
builder.Services.ConfigureServiceManager();
builder.Services.ConfigureSqlContext(builder.Configuration);
builder.Services.AddAutoMapper(typeof(Program));

builder.Services.AddControllers(config =>
{
    config.RespectBrowserAcceptHeader = true;
}).AddXmlDataContractSerializerFormatters()
  .AddApplicationPart(typeof(CompanyEmployees.Presentation.AssemblyReference).Assembly);

First things first, we must tell a server to respect the Accept header. After that, we just add the AddXmlDataContractSerializerFormatters method to support XML formatters.

Now that we have our server configured, let’s test the content negotiation once more.

7.3 Testing Content Negotiation

Let’s see what happens now if we fire the same request through Postman: https://localhost:5001/api/companies

alt text

We get an error because XmlSerializer cannot easily serialize our positional record type. There are two solutions to this. The first one is marking our CompanyDto record with the [Serializable] attribute:

[Serializable] 
public record CompanyDto(Guid Id, string Name, string FullAddress);

Now, we can send the same request again:

alt text

This time, we are getting our XML response but, as you can see, the properties have some strange names. That’s because, behind the scenes, the compiler generates the record as a class with compiler-generated backing fields, and the XML serializer serializes those fields under their generated names (the BackingField suffix you see in the response).

If we don’t want these property names in our response, but the regular ones, we can implement a second solution. Let’s modify our record with the init only property setters:

public record CompanyDto
{
    public Guid Id { get; init; }
    public string? Name { get; init; }
    public string? FullAddress { get; init; }
}

This object is still immutable and init-only properties protect the state of the object from mutation once initialization is finished.
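To see what init-only setters buy us, here is a small standalone sketch (the sample values are illustrative): properties can be set during initialization and copied non-destructively with a with-expression, but any later assignment is a compile-time error.

```csharp
using System;

var company = new CompanyDto
{
    Id = Guid.NewGuid(),
    Name = "Admin_Solutions Ltd",
    FullAddress = "312 Forest Avenue, BF 923 USA"
};

// company.Name = "Changed"; // would not compile: Name has an init-only setter

// Records still support non-destructive mutation via 'with':
// a copy is created, the original stays untouched
var renamed = company with { Name = "IT_Solutions Ltd" };
Console.WriteLine(renamed.Name);  // IT_Solutions Ltd
Console.WriteLine(company.Name);  // Admin_Solutions Ltd

public record CompanyDto
{
    public Guid Id { get; init; }
    public string? Name { get; init; }
    public string? FullAddress { get; init; }
}
```

So switching from positional parameters to init-only properties costs us nothing in terms of immutability.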

Additionally, we have to make one more change in the MappingProfile class:

public MappingProfile()
{
    CreateMap<Company, CompanyDto>()
        .ForMember(c => c.FullAddress,
            opt => opt.MapFrom(x => string.Join(' ', x.Address, x.Country)));
    CreateMap<Employee, EmployeeDto>();
}

We are returning this mapping rule to a previous state since now, we do have properties in our object.

Now, we can send the same request again:

alt text

There is our XML response.

Now by changing the Accept header from text/xml to text/json, we can get differently formatted responses — and that is quite awesome, wouldn’t you agree?

Okay, that was nice and easy.

But what if despite all this flexibility a client requests a media type that a server doesn’t know how to format?

7.4 Restricting Media Types

Currently, the server will simply default to the JSON type.

But we can restrict this behavior by adding one line to the configuration:

builder.Services.AddControllers(config =>
{
    config.RespectBrowserAcceptHeader = true;
    config.ReturnHttpNotAcceptable = true;
}).AddXmlDataContractSerializerFormatters()
  .AddApplicationPart(typeof(CompanyEmployees.Presentation.AssemblyReference).Assembly);

We added the ReturnHttpNotAcceptable = true option, which tells the server that if the client tries to negotiate for the media type the server doesn’t support, it should return the 406 Not Acceptable status code.

This will make our application more restrictive and force the API consumer to request only the types the server supports. The 406 status code is created for this purpose.

Now, let’s try fetching the text/css media type using Postman to see what happens:https://localhost:5001/api/companies

alt text

And as expected, there is no response body and all we get is a nice 406 Not Acceptable status code.

So far so good.

7.5 More About Formatters

If we want our API to support content negotiation for a type that is not “in‌ the box,” we need to have a mechanism to do this.

So, how can we do that?

ASP.NET Core supports the creation of custom formatters. Their purpose is to give us the flexibility to create our formatter for any media types we need to support.

We can make the custom formatter by using the following method:

• Create an output formatter class that inherits the TextOutputFormatter class.

• Create an input formatter class that inherits the TextInputFormatter class.

• Add input and output classes to the InputFormatters and OutputFormatters collections the same way we did for the XML formatter.

Now let’s have some fun and implement a custom CSV formatter for our example.

7.6 Implementing a Custom Formatter

Since we are only interested in formatting responses, we need to implement only an output formatter. We would need an input formatter only if a request body contained a corresponding type.‌

The idea is to format a response to return the list of companies in a CSV format.

Let’s add a CsvOutputFormatter class to our main project:

public class CsvOutputFormatter : TextOutputFormatter
{
    public CsvOutputFormatter()
    {
        SupportedMediaTypes.Add(MediaTypeHeaderValue.Parse("text/csv"));
        SupportedEncodings.Add(Encoding.UTF8);
        SupportedEncodings.Add(Encoding.Unicode);
    }

    protected override bool CanWriteType(Type? type)
    {
        if (typeof(CompanyDto).IsAssignableFrom(type) ||
            typeof(IEnumerable<CompanyDto>).IsAssignableFrom(type))
        {
            return base.CanWriteType(type);
        }

        return false;
    }

    public override async Task WriteResponseBodyAsync(OutputFormatterWriteContext context, Encoding selectedEncoding)
    {
        var response = context.HttpContext.Response;
        var buffer = new StringBuilder();

        if (context.Object is IEnumerable<CompanyDto> companies)
        {
            foreach (var company in companies)
            {
                FormatCsv(buffer, company);
            }
        }
        else
        {
            FormatCsv(buffer, (CompanyDto)context.Object);
        }

        await response.WriteAsync(buffer.ToString());
    }

    private static void FormatCsv(StringBuilder buffer, CompanyDto company) =>
        buffer.AppendLine($"{company.Id},\"{company.Name}\",\"{company.FullAddress}\"");
}

There are a few things to note here:

• In the constructor, we define which media type this formatter should parse as well as encodings.

• The CanWriteType method is overridden, and it indicates whether or not the CompanyDto type can be written by this serializer.

• The WriteResponseBodyAsync method constructs the response.

• And finally, we have the FormatCsv method that formats a response the way we want it.

The class is pretty straightforward to implement, and the main thing that you should focus on is the FormatCsv method logic.
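One thing to be careful about with hand-rolled CSV is escaping: a field that contains a comma, quote, or newline must be quoted, and embedded quotes must be doubled. Here is a hedged sketch of a more defensive helper (EscapeCsv is our own illustrative name, not part of the book's code):

```csharp
using System;
using System.Linq;
using System.Text;

// Quote a field only when needed; double any embedded quotes
static string EscapeCsv(string? field)
{
    field ??= string.Empty;
    if (field.Contains(',') || field.Contains('"') || field.Contains('\n'))
        return $"\"{field.Replace("\"", "\"\"")}\"";
    return field;
}

var buffer = new StringBuilder();
buffer.AppendLine(string.Join(",",
    new[] { "Admin_Solutions Ltd", "312 Forest Avenue, BF 923", "He said \"hi\"" }
        .Select(EscapeCsv)));

Console.WriteLine(buffer.ToString());
// Admin_Solutions Ltd,"312 Forest Avenue, BF 923","He said ""hi"""
```

For production code, a dedicated library such as CsvHelper handles these edge cases (and cultures, headers, etc.) for you.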

Now we just need to add the newly made formatter to the list of OutputFormatters in the ServicesExtensions class:

public static IMvcBuilder AddCustomCSVFormatter(this IMvcBuilder builder) => builder.AddMvcOptions(config => config.OutputFormatters.Add(new CsvOutputFormatter()));

And to call it in the AddControllers:

builder.Services.AddControllers(config =>
{
    config.RespectBrowserAcceptHeader = true;
    config.ReturnHttpNotAcceptable = true;
}).AddXmlDataContractSerializerFormatters()
  .AddCustomCSVFormatter()
  .AddApplicationPart(typeof(CompanyEmployees.Presentation.AssemblyReference).Assembly);

Let’s run this and see if it works. This time we will put text/csv as the value for the Accept header: https://localhost:5001/api/companies

alt text

Well, what do you know, it works!

In this chapter, we finished working with GET requests in our project and we are ready to move on to the POST, PUT, and DELETE requests. We have a lot more ground to cover, so let’s get down to business.

8 METHOD SAFETY AND METHOD IDEMPOTENCY

Before we start with the Create, Update, and Delete actions, we should explain two important principles in the HTTP standard. Those standards are Method Safety and Method Idempotency.‌

We can consider a method a safe one if it doesn’t change the resource representation. So, in other words, the resource shouldn’t be changed after our method is executed.

If we can call a method multiple times with the same result, we can consider that method idempotent. So in other words, the side effects of calling it once are the same as calling it multiple times.

Let’s see how this applies to HTTP methods:

HTTP Method Is it Safe? Is it Idempotent?
GET Yes Yes
OPTIONS Yes Yes
HEAD Yes Yes
POST No No
DELETE No Yes
PUT No Yes
PATCH No No

As you can see, the GET, OPTIONS, and HEAD methods are all safe and idempotent, because calling them does not change the resource representation. Furthermore, we can call these methods multiple times, and they will return the same result every time.

The POST method is neither safe nor idempotent. It causes changes in the resource representation because it creates them. Also, if we call the POST method multiple times, it will create a new resource every time.

The DELETE method is not safe because it removes the resource, but it is idempotent because if we delete the same resource multiple times, we will get the same result as if we have deleted it only once.

PUT is not safe either. When we update our resource, it changes. But it is idempotent because no matter how many times we update the same resource with the same request it will have the same representation as if we have updated it only once.

Finally, the PATCH method is neither safe nor idempotent.

Now that we’ve learned about these principles, we can continue with our application by implementing the rest of the HTTP methods (we have already implemented GET). We can always use this table to decide which method to use for which use case.
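The table can be illustrated with a tiny in-memory simulation (this is just an analogy over a dictionary, not API code): repeating a DELETE leaves the store in the same final state as performing it once, while repeating a POST keeps creating new resources.

```csharp
using System;
using System.Collections.Generic;

var store = new Dictionary<Guid, string>();

// POST is not idempotent: each call creates a brand-new resource
Guid Post(string name) { var id = Guid.NewGuid(); store[id] = name; return id; }

var first = Post("Company A");
Post("Company A");
Console.WriteLine(store.Count); // 2: identical payloads, two resources

// DELETE is idempotent: deleting twice ends in the same state as deleting once
void Delete(Guid id) => store.Remove(id);
Delete(first);
Delete(first); // second call changes nothing
Console.WriteLine(store.ContainsKey(first)); // False
```

Note that idempotent does not mean the response is identical (a second DELETE may return 404 instead of 204); it means the server-side state ends up the same.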

9 CREATING RESOURCES

In this section, we are going to show you how to use the POST HTTP method to create resources in the database.‌

So, let’s start.

9.1 Handling POST Requests

Firstly, let’s modify the decoration attribute for the GetCompany action in the Companies controller:‌

[HttpGet("{id:guid}", Name = "CompanyById")]

With this modification, we are setting the name for the action. This name will come in handy in the action method for creating a new company.

We have a DTO class for the output (the GET methods), but right now we need the one for the input as well. So, let’s create a new record in the Shared/DataTransferObjects folder:

public record CompanyForCreationDto(string Name, string Address, string Country);

We can see that this DTO record is almost the same as the Company record but without the Id property. We don’t need that property when we create an entity.

We should pay attention to one more thing. In some projects, the input and output DTO classes are the same, but we still recommend separating them for easier maintenance and refactoring of our code. Furthermore, when we start talking about validation, we don’t want to validate the output objects — but we definitely want to validate the input ones.

With all of that said and done, let’s continue by modifying the ICompanyRepository interface:

public interface ICompanyRepository
{
    IEnumerable<Company> GetAllCompanies(bool trackChanges);
    Company GetCompany(Guid companyId, bool trackChanges);
    void CreateCompany(Company company);
}

After the interface modification, we are going to implement that interface:

public void CreateCompany(Company company) => Create(company);

We don’t explicitly generate a new Id for our company; EF Core will do that for us. All we do is set the state of the company to Added.

Next, we want to modify the ICompanyService interface:

public interface ICompanyService
{
    IEnumerable<CompanyDto> GetAllCompanies(bool trackChanges);
    CompanyDto GetCompany(Guid companyId, bool trackChanges);
    CompanyDto CreateCompany(CompanyForCreationDto company);
}

And of course, we have to implement this method in the CompanyService class:

public CompanyDto CreateCompany(CompanyForCreationDto company)
{
    var companyEntity = _mapper.Map<Company>(company);

    _repository.Company.CreateCompany(companyEntity);
    _repository.Save();

    var companyToReturn = _mapper.Map<CompanyDto>(companyEntity);
    return companyToReturn;
}

Here, we map the company for creation to the company entity, call the repository method for creation, and call the Save() method to save the entity to the database. After that, we map the company entity to the company DTO object to return it to the controller.

But we don’t have a mapping rule for this yet, so we have to create another one for the Company and CompanyForCreationDto objects. Let’s do this in the MappingProfile class:

public MappingProfile()
{
    CreateMap<Company, CompanyDto>()
        .ForMember(c => c.FullAddress,
            opt => opt.MapFrom(x => string.Join(' ', x.Address, x.Country)));
    CreateMap<Employee, EmployeeDto>();
    CreateMap<CompanyForCreationDto, Company>();
}

Our POST action will accept a parameter of the type CompanyForCreationDto, and as you can see our service method accepts the parameter of the same type as well, but we need the Company object to send it to the repository layer for creation. Therefore, we have to create this mapping rule.

Last, let’s modify the controller:

[HttpPost]
public IActionResult CreateCompany([FromBody] CompanyForCreationDto company)
{
    if (company is null)
        return BadRequest("CompanyForCreationDto object is null");

    var createdCompany = _service.CompanyService.CreateCompany(company);

    return CreatedAtRoute("CompanyById", new { id = createdCompany.Id }, createdCompany);
}

Let’s use Postman to send the request and examine the result: https://localhost:5001/api/companies

alt text

9.2 Code Explanation

Let’s talk a little bit about this code. The interface and the repository parts are pretty clear, so we won’t talk about that. We have already explained the code in the service method. But the code in the controller contains several things worth mentioning.‌

If you take a look at the request URI, you’ll see that we use the same one as for the GetCompanies action: api/companies — but this time we are using the POST request.

The CreateCompany method has its own [HttpPost] decoration attribute, which restricts it to POST requests. Furthermore, notice the company parameter, which comes from the client. We are not collecting it from the URI but from the request body; because the company object is a complex type, we have to use the [FromBody] attribute.

If we wanted to, we could explicitly mark the action to take this parameter from the URI by decorating it with the [FromQuery] attribute instead, though we wouldn’t recommend that at all because of security concerns and the complexity of the request.

Because the company parameter comes from the client, it could happen that it can’t be deserialized. As a result, we have to validate it against the reference type’s default value, which is null.

The last thing to mention is this part of the code:

CreatedAtRoute("CompanyById", new { id = createdCompany.Id }, createdCompany);

CreatedAtRoute will return the 201 status code, which stands for Created. It will also populate the body of the response with the new company object and set the Location header in the response to the address at which that company can be retrieved. That is why we need to provide the name of the action from which the created entity can be fetched.

If we take a look at the headers part of our response, we are going to see a link to retrieve the created company:

alt text

Finally, from the previous example, we can confirm that the POST method is neither safe nor idempotent. We saw that when we send the POST request, it is going to create a new resource in the database — thus changing the resource representation. Furthermore, if we try to send this request a couple of times, we will get a new object for every request (it will have a different Id for sure).

Excellent.

There is still one more thing we need to explain.

9.2.1 Validation from the ApiController Attribute‌

In this section, we are going to talk about the [ApiController] attribute that we can find right below the [Route] attribute in our controller:

[Route("api/companies")]
[ApiController]
public class CompaniesController : ControllerBase
{

But, before we start with the explanation, let’s place a breakpoint in the CreateCompany action, right on the if (company is null) check.
Then, let’s use Postman to send an invalid POST request: https://localhost:5001/api/companies

alt text

We are going to talk about Validation in chapter 13, but for now, we have to explain a couple of things.

First of all, we have our response - a Bad Request in Postman, and we have error messages that state what’s wrong with our request. But, we never hit that breakpoint that we’ve placed inside the CreateCompany action.

Why is that?

Well, the [ApiController] attribute is applied to a controller class to enable the following opinionated, API-specific behaviors:

• Attribute routing requirement

• Automatic HTTP 400 responses

• Binding source parameter inference

• Multipart/form-data request inference

• Problem details for error status codes

As you can see, it handles the HTTP 400 responses, and in our case, since the request’s body is null, the [ApiController] attribute handles that and returns the 400 (BadRequest) response before the request even hits the CreateCompany action.

This is useful behavior, but it prevents us from sending our custom responses with different messages and status codes to the client. This will be very important once we get to the Validation.

So to enable our custom responses from the actions, we are going to add this code into the Program class right above the AddControllers method:

builder.Services.Configure<ApiBehaviorOptions>(options =>
{
    options.SuppressModelStateInvalidFilter = true;
});

With this, we are suppressing the default model-state validation that the [ApiController] attribute applies to all API controllers. This means we could solve the same problem differently, by simply commenting out or removing the [ApiController] attribute, without any extra suppression code. It’s all up to you. But we like keeping the attribute in our controllers because, as you have seen, it provides functionality beyond the automatic 400 Bad Request responses.

Now, once we start the app and send the same request, we will hit that breakpoint and see our response in Postman.

Nicely done.

Now, we can remove that breakpoint and continue with learning about the creation of child resources.

9.3 Creating a Child Resource

While creating our company, we created the DTO object required for the CreateCompany action. So, for employee creation, we are going to do the same thing:‌

public record EmployeeForCreationDto(string Name, int Age, string Position);

We don’t have the Id property because we are going to create that Id on the server side. Additionally, we don’t have the CompanyId property because we accept that parameter through the route: [Route("api/companies/{companyId}/employees")]

The next step is to modify the IEmployeeRepository interface:

public interface IEmployeeRepository
{
    IEnumerable<Employee> GetEmployees(Guid companyId, bool trackChanges);
    Employee GetEmployee(Guid companyId, Guid id, bool trackChanges);
    void CreateEmployeeForCompany(Guid companyId, Employee employee);
}

Of course, we have to implement this interface:

public void CreateEmployeeForCompany(Guid companyId, Employee employee)
{
    employee.CompanyId = companyId;
    Create(employee);
}

Because we are going to accept the employee DTO object in our action and send it to a service method, but we also have to send an employee object to this repository method, we have to create an additional mapping rule in the MappingProfile class:

CreateMap<EmployeeForCreationDto, Employee>();

The next thing we have to do is IEmployeeService modification:

public interface IEmployeeService
{
    IEnumerable<EmployeeDto> GetEmployees(Guid companyId, bool trackChanges);
    EmployeeDto GetEmployee(Guid companyId, Guid id, bool trackChanges);
    EmployeeDto CreateEmployeeForCompany(Guid companyId, EmployeeForCreationDto employeeForCreation, bool trackChanges);
}

And implement this new method in EmployeeService:

public EmployeeDto CreateEmployeeForCompany(Guid companyId, EmployeeForCreationDto employeeForCreation, bool trackChanges)
{
    var company = _repository.Company.GetCompany(companyId, trackChanges);
    if (company is null)
        throw new CompanyNotFoundException(companyId);

    var employeeEntity = _mapper.Map<Employee>(employeeForCreation);

    _repository.Employee.CreateEmployeeForCompany(companyId, employeeEntity);
    _repository.Save();

    var employeeToReturn = _mapper.Map<EmployeeDto>(employeeEntity);
    return employeeToReturn;
}

We have to check whether that company exists in the database because there is no point in creating an employee for a company that does not exist. After that, we map the DTO to an entity, call the repository methods to create a new employee, map back the entity to the DTO, and return it to the caller.

Now, we can add a new action in the EmployeesController:

[HttpPost]
public IActionResult CreateEmployeeForCompany(Guid companyId, [FromBody] EmployeeForCreationDto employee)
{
    if (employee is null)
        return BadRequest("EmployeeForCreationDto object is null");

    var employeeToReturn = _service.EmployeeService.CreateEmployeeForCompany(companyId, employee, trackChanges: false);

    return CreatedAtRoute("GetEmployeeForCompany", new { companyId, id = employeeToReturn.Id }, employeeToReturn);
}

As we can see, the main difference between this action and the CreateCompany action (if we exclude the fact that we are working with different DTOs) is the return statement, which now has two parameters for the anonymous object.

For this to work, we have to modify the HTTP attribute above the GetEmployeeForCompany action:

[HttpGet("{id:guid}", Name = "GetEmployeeForCompany")]

Let’s give this a try: https://localhost:5001/api/companies/14759d51-e9c1-4afc-f9bf-08d98898c9c3/employees

alt text

Excellent. A new employee was created.

If we take a look at the Headers tab, we'll see a link to fetch our newly created employee. If you copy that link and send another request with it, you will get this employee for sure:

alt text

9.4 Creating Children Resources Together with a Parent

There are situations where we want to create a parent resource with its children. Rather than using multiple requests for every single child, we want to do this in the same request with the parent resource.‌

We are going to show you how to do this.

The first thing we are going to do is extend the CompanyForCreationDto class:

public record CompanyForCreationDto(string Name, string Address, string Country, IEnumerable<EmployeeForCreationDto> Employees);

We are not going to change the action logic inside the controller or the repository/service logic; everything is fine there. That’s all. Let’s test it: https://localhost:5001/api/companies

alt text

You can see that this company was created successfully.

Now we can copy the location link from the Headers tab, paste it in another Postman tab, and just add the /employees part:

alt text

We have confirmed that the employees were created as well.

9.5 Creating a Collection of Resources

Until now, we have been creating a single resource whether it was Company or Employee. But it is quite normal to create a collection of resources, and in this section that is something we are going to work with.‌

If we take a look at the CreateCompany action, for example, we can see that the return part points to the CompanyById route (the GetCompany action). That said, we don’t have the GET action for the collection creating action to point to. So, before we start with the POST collection action, we are going to create the GetCompanyCollection action in the Companies controller.

But first, let's modify the ICompanyRepository interface:

IEnumerable<Company> GetByIds(IEnumerable<Guid> ids, bool trackChanges);

Then we have to change the CompanyRepository class:

public IEnumerable<Company> GetByIds(IEnumerable<Guid> ids, bool trackChanges) => FindByCondition(x => ids.Contains(x.Id), trackChanges) .ToList();

After that, we are going to modify ICompanyService:

public interface ICompanyService
{
    IEnumerable<CompanyDto> GetAllCompanies(bool trackChanges);
    CompanyDto GetCompany(Guid companyId, bool trackChanges);
    CompanyDto CreateCompany(CompanyForCreationDto company);
    IEnumerable<CompanyDto> GetByIds(IEnumerable<Guid> ids, bool trackChanges);
}

And implement this in CompanyService:

public IEnumerable<CompanyDto> GetByIds(IEnumerable<Guid> ids, bool trackChanges)
{
    if (ids is null)
        throw new IdParametersBadRequestException();

    var companyEntities = _repository.Company.GetByIds(ids, trackChanges);
    if (ids.Count() != companyEntities.Count())
        throw new CollectionByIdsBadRequestException();

    var companiesToReturn = _mapper.Map<IEnumerable<CompanyDto>>(companyEntities);
    return companiesToReturn;
}

Here, we check whether the ids parameter is null and, if it is, stop the execution flow and return a bad request response to the client. If it isn’t, we fetch the companies for the ids in the collection. If the counts of the ids and the returned companies don’t match, we return another bad request response to the client. Finally, we execute the mapping and return the result to the caller.
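The two guard clauses can be exercised in isolation over in-memory data; a hedged sketch of the same null and count-mismatch checks (the dictionary and ArgumentException stand in for the repository and the custom exception classes):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var companies = new Dictionary<Guid, string>
{
    [Guid.Parse("c9d4c053-49b6-410c-bc78-2d54a9991870")] = "IT_Solutions Ltd"
};

IEnumerable<string> GetByIds(IEnumerable<Guid>? ids)
{
    if (ids is null)
        throw new ArgumentException("Parameter ids is null"); // IdParametersBadRequestException in the book

    var found = ids.Where(companies.ContainsKey).Select(id => companies[id]).ToList();
    if (ids.Count() != found.Count)
        throw new ArgumentException("Collection count mismatch comparing to ids."); // CollectionByIdsBadRequestException

    return found;
}

Console.WriteLine(GetByIds(companies.Keys).Count()); // 1

// An unknown id trips the count-mismatch guard
try { GetByIds(new[] { Guid.NewGuid() }); }
catch (ArgumentException e) { Console.WriteLine(e.Message); }
```

One caveat worth noting: `ids.Count()` enumerates the sequence a second time, which is harmless for an in-memory collection but something to keep in mind for lazily evaluated sources.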

Of course, we don’t have these two exception classes yet, so let’s create them.

Since we are returning a bad request result, we are going to create a new abstract class in the Entities/Exceptions folder:

public abstract class BadRequestException : Exception
{
    protected BadRequestException(string message)
        : base(message)
    {
    }
}

Then, in the same folder, let’s create two new specific exception classes:

public sealed class IdParametersBadRequestException : BadRequestException
{
    public IdParametersBadRequestException()
        : base("Parameter ids is null")
    {
    }
}

public sealed class CollectionByIdsBadRequestException : BadRequestException
{
    public CollectionByIdsBadRequestException()
        : base("Collection count mismatch comparing to ids.")
    {
    }
}

At this point, we’ve removed two errors from the GetByIds method. But, to show the correct response to the client, we have to modify the ConfigureExceptionHandler class – the part where we populate the StatusCode property:

context.Response.StatusCode = contextFeature.Error switch
{
    NotFoundException => StatusCodes.Status404NotFound,
    BadRequestException => StatusCodes.Status400BadRequest,
    _ => StatusCodes.Status500InternalServerError
};

After that, we can add a new action in the controller:

[HttpGet("collection/({ids})", Name = "CompanyCollection")]
public IActionResult GetCompanyCollection(IEnumerable<Guid> ids)
{
    var companies = _service.CompanyService.GetByIds(ids, trackChanges: false);

    return Ok(companies);
}

And that's it. This action is pretty straightforward, so let's continue towards POST implementation.

Let’s modify the ICompanyService interface first:

public interface ICompanyService
{
    IEnumerable<CompanyDto> GetAllCompanies(bool trackChanges);
    CompanyDto GetCompany(Guid companyId, bool trackChanges);
    CompanyDto CreateCompany(CompanyForCreationDto company);
    IEnumerable<CompanyDto> GetByIds(IEnumerable<Guid> ids, bool trackChanges);
    (IEnumerable<CompanyDto> companies, string ids) CreateCompanyCollection(IEnumerable<CompanyForCreationDto> companyCollection);
}

So, this new method will accept a collection of the CompanyForCreationDto type as a parameter, and return a Tuple with two fields (companies and ids) as a result.

That said, let’s implement it in the CompanyService class:

public (IEnumerable<CompanyDto> companies, string ids) CreateCompanyCollection(IEnumerable<CompanyForCreationDto> companyCollection)
{
    if (companyCollection is null)
        throw new CompanyCollectionBadRequest();

    var companyEntities = _mapper.Map<IEnumerable<Company>>(companyCollection);
    foreach (var company in companyEntities)
    {
        _repository.Company.CreateCompany(company);
    }

    _repository.Save();

    var companyCollectionToReturn = _mapper.Map<IEnumerable<CompanyDto>>(companyEntities);
    var ids = string.Join(",", companyCollectionToReturn.Select(c => c.Id));

    return (companies: companyCollectionToReturn, ids: ids);
}

So, we check if our collection is null and if it is, we return a bad request. If it isn’t, then we map that collection and save all the collection elements to the database. Finally, we map the company collection back, take all the ids as a comma-separated string, and return the Tuple with these two fields as a result to the caller.
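The ids string is nothing more than a string.Join over the new company ids. A quick standalone check (the GUID values are arbitrary examples):

```csharp
using System;

// The ids value returned in the Tuple is a plain string.Join over the newly
// created company ids; this is the exact string later placed in the route.
var id1 = Guid.Parse("582ea192-6fb7-44ff-a2a1-08d988ca3ca9");
var id2 = Guid.Parse("a216fbbe-ebbd-4e09-a2a2-08d988ca3ca9");

var ids = string.Join(",", new[] { id1, id2 });
Console.WriteLine(ids);
// 582ea192-6fb7-44ff-a2a1-08d988ca3ca9,a216fbbe-ebbd-4e09-a2a2-08d988ca3ca9
```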

Again, we can see that we don’t have the exception class, so let’s just create it:

public sealed class CompanyCollectionBadRequest : BadRequestException
{
    public CompanyCollectionBadRequest()
        : base("Company collection sent from a client is null.")
    {
    }
}

Finally, we can add a new action in the CompaniesController:

[HttpPost("collection")]
public IActionResult CreateCompanyCollection([FromBody] IEnumerable<CompanyForCreationDto> companyCollection)
{
    var result = _service.CompanyService.CreateCompanyCollection(companyCollection);

    return CreatedAtRoute("CompanyCollection", new { result.ids }, result.companies);
}

We receive the companyCollection parameter from the client, send it to the service method, and return a result with a comma-separated string and our newly created companies.

Now you may ask, why are we sending a comma-separated string when we expect a collection of ids in the GetCompanyCollection action?

Well, we can't just pass a list of ids to the CreatedAtRoute method because it doesn't support generating the Location header from a list. You may try it, but we're pretty sure you would get a location like this:

alt text

We can test our create action now with a bad request: https://localhost:5001/api/companies/collection

alt text

We can see that the request is handled properly and we have a correct response.

Now, let's send a valid request: https://localhost:5001/api/companies/collection

alt text

Excellent. Let’s check the header tab:

alt text

We can see a valid location link. So, we can copy it and try to fetch our newly created companies:

alt text

But we are getting the 415 Unsupported Media Type message. This is because our API can’t bind the string type parameter to the IEnumerable argument in the GetCompanyCollection action.

Well, we can solve this with a custom model binding.

9.6 Model Binding in API

Let’s create the new folder ModelBinders in the Presentation project and inside the new class ArrayModelBinder:‌

public class ArrayModelBinder : IModelBinder
{
    public Task BindModelAsync(ModelBindingContext bindingContext)
    {
        if (!bindingContext.ModelMetadata.IsEnumerableType)
        {
            bindingContext.Result = ModelBindingResult.Failed();
            return Task.CompletedTask;
        }

        var providedValue = bindingContext.ValueProvider
            .GetValue(bindingContext.ModelName)
            .ToString();
        if (string.IsNullOrEmpty(providedValue))
        {
            bindingContext.Result = ModelBindingResult.Success(null);
            return Task.CompletedTask;
        }

        var genericType = bindingContext.ModelType.GetTypeInfo().GenericTypeArguments[0];
        var converter = TypeDescriptor.GetConverter(genericType);

        var objectArray = providedValue.Split(new[] { "," }, StringSplitOptions.RemoveEmptyEntries)
            .Select(x => converter.ConvertFromString(x.Trim()))
            .ToArray();

        var guidArray = Array.CreateInstance(genericType, objectArray.Length);
        objectArray.CopyTo(guidArray, 0);
        bindingContext.Model = guidArray;

        bindingContext.Result = ModelBindingResult.Success(bindingContext.Model);

        return Task.CompletedTask;
    }
}

At first glance, this code might be hard to comprehend, but once we explain it, it will be easier to understand.

We are creating a model binder for the IEnumerable type. Therefore, we have to check if our parameter is of that type.

Next, we extract the value (a comma-separated string of GUIDs) with the ValueProvider.GetValue() expression. Because it is a string, we just check whether it is null or empty. If it is, we return null as a result because we have a null check in our action in the controller. If it is not, we move on.

In the genericType variable, with the help of reflection, we store the type the IEnumerable consists of. In our case, it is GUID. With the converter variable, we create a converter to a GUID type. As you can see, we didn't just force the GUID type in this model binder; instead, we inspected what the nested type of the IEnumerable parameter is and then created a converter for that exact type, thus making this binder generic.

After that, we create an array of type object (objectArray) that consists of all the GUID values we sent to the API, then create an array of GUID types (guidArray), copy all the values from the objectArray to the guidArray, and assign it to the bindingContext.
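The conversion at the heart of the binder can run on its own, outside ASP.NET Core. This sketch extracts just that part (the method name is ours, not the binder's):

```csharp
using System;
using System.ComponentModel;
using System.Linq;

// The core of ArrayModelBinder: split the comma-separated string, convert
// each part with the element type's TypeConverter, and copy the boxed values
// into a strongly typed array via Array.CreateInstance/CopyTo.
static Array ConvertToTypedArray(string providedValue, Type elementType)
{
    var converter = TypeDescriptor.GetConverter(elementType);

    var objectArray = providedValue
        .Split(new[] { "," }, StringSplitOptions.RemoveEmptyEntries)
        .Select(x => converter.ConvertFromString(x.Trim()))
        .ToArray();

    var typedArray = Array.CreateInstance(elementType, objectArray.Length);
    objectArray.CopyTo(typedArray, 0); // Array.Copy unboxes object -> Guid here

    return typedArray;
}

var guids = (Guid[])ConvertToTypedArray(
    "582ea192-6fb7-44ff-a2a1-08d988ca3ca9, a216fbbe-ebbd-4e09-a2a2-08d988ca3ca9",
    typeof(Guid));

Console.WriteLine(guids.Length); // 2
```

Because the element type is inspected at runtime, the same helper would convert a string of ints for an IEnumerable<int> parameter just as well.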

These are the required using directives:

using Microsoft.AspNetCore.Mvc.ModelBinding;
using System.ComponentModel;
using System.Reflection;

And that is it. Now, we just have to make a slight modification in the GetCompanyCollection action:

public IActionResult GetCompanyCollection([ModelBinder(BinderType = typeof(ArrayModelBinder))]IEnumerable<Guid> ids)

This is the required namespace:

using CompanyEmployees.Presentation.ModelBinders;

Visual Studio will provide two different namespaces to resolve the error, so be sure to pick the right one.

Excellent.

Our ArrayModelBinder will be triggered before an action executes. It will convert the sent string parameter to the IEnumerable type, and then the action will be executed:

https://localhost:5001/api/companies/collection/(582ea192-6fb7-44ff-a2a1-08d988ca3ca9,a216fbbe-ebbd-4e09-a2a2-08d988ca3ca9)

alt text

Well done.

We are ready to continue towards DELETE actions.

10 WORKING WITH DELETE REQUESTS

Let’s start this section by deleting a child resource first. So, let’s modify the IEmployeeRepository interface:‌

public interface IEmployeeRepository
{
    IEnumerable<Employee> GetEmployees(Guid companyId, bool trackChanges);
    Employee GetEmployee(Guid companyId, Guid id, bool trackChanges);
    void CreateEmployeeForCompany(Guid companyId, Employee employee);
    void DeleteEmployee(Employee employee);
}

The next step for us is to modify the EmployeeRepository class:

public void DeleteEmployee(Employee employee) => Delete(employee);

After that, we have to modify the IEmployeeService interface:

public interface IEmployeeService
{
    IEnumerable<EmployeeDto> GetEmployees(Guid companyId, bool trackChanges);
    EmployeeDto GetEmployee(Guid companyId, Guid id, bool trackChanges);
    EmployeeDto CreateEmployeeForCompany(Guid companyId, EmployeeForCreationDto employeeForCreation, bool trackChanges);
    void DeleteEmployeeForCompany(Guid companyId, Guid id, bool trackChanges);
}

And of course, the EmployeeService class:

public void DeleteEmployeeForCompany(Guid companyId, Guid id, bool trackChanges)
{
    var company = _repository.Company.GetCompany(companyId, trackChanges);
    if (company is null)
        throw new CompanyNotFoundException(companyId);

    var employeeForCompany = _repository.Employee.GetEmployee(companyId, id, trackChanges);
    if (employeeForCompany is null)
        throw new EmployeeNotFoundException(id);

    _repository.Employee.DeleteEmployee(employeeForCompany);
    _repository.Save();
}

This is a pretty straightforward implementation: we fetch the company, and if it doesn't exist, we return a Not Found response. If it exists, we fetch the employee for that company and execute the same check; if the employee doesn't exist, we return another Not Found response. Lastly, we delete the employee from the database.

Finally, we can add a delete action to the controller class:

[HttpDelete("{id:guid}")]
public IActionResult DeleteEmployeeForCompany(Guid companyId, Guid id)
{
    _service.EmployeeService.DeleteEmployeeForCompany(companyId, id, trackChanges: false);

    return NoContent();
}

There is nothing new with this action. We collect the companyId from the route and the employee's id from the passed argument, call the service method, and return the NoContent() method, which produces the status code 204 No Content.

Let's test this: https://localhost:5001/api/companies/14759d51-e9c1-4afc-f9bf-08d98898c9c3/employees/e06cfcc6-e353-4bd8-0870-08d988af0956

alt text

Excellent. It works great.

You can try to get that employee from the database, but you will get 404 for sure: https://localhost:5001/api/companies/14759d51-e9c1-4afc-f9bf-08d98898c9c3/employees/e06cfcc6-e353-4bd8-0870-08d988af0956

alt text

We can see that the DELETE request isn’t safe because it deletes the resource, thus changing the resource representation. But if we try to send this delete request one or even more times, we would get the same 404 result because the resource doesn’t exist anymore. That’s what makes the DELETE request idempotent.
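This not-safe-but-idempotent behavior can be sketched with an in-memory store (names, ids, and status codes are illustrative stand-ins for the real action):

```csharp
using System;
using System.Collections.Generic;

// DELETE semantics sketch: the first delete changes server state (so DELETE
// is not safe), but repeating the same request converges on the same 404
// outcome and no further state change (so it is idempotent).
var id = Guid.NewGuid();
var store = new Dictionary<Guid, string> { [id] = "Sample Employee" };

int Delete(Guid employeeId) => store.Remove(employeeId) ? 204 : 404;

Console.WriteLine(Delete(id)); // 204
Console.WriteLine(Delete(id)); // 404
Console.WriteLine(Delete(id)); // 404, repeating doesn't change the outcome
```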

10.1 Deleting a Parent Resource with its Children

With Entity Framework Core, this action is pretty simple. With the basic configuration, cascade deleting is enabled, which means deleting a parent resource will automatically delete all of its children. We can confirm that from the migration file:‌

alt text

So, all we have to do is to create a logic for deleting the parent resource.

Well, let’s do that following the same steps as in a previous example:

public interface ICompanyRepository
{
    IEnumerable<Company> GetAllCompanies(bool trackChanges);
    Company GetCompany(Guid companyId, bool trackChanges);
    void CreateCompany(Company company);
    IEnumerable<Company> GetByIds(IEnumerable<Guid> ids, bool trackChanges);
    void DeleteCompany(Company company);
}

Then let’s modify the repository class:

public void DeleteCompany(Company company) => Delete(company);

Then we have to modify the service interface:

public interface ICompanyService
{
    IEnumerable<CompanyDto> GetAllCompanies(bool trackChanges);
    CompanyDto GetCompany(Guid companyId, bool trackChanges);
    CompanyDto CreateCompany(CompanyForCreationDto company);
    IEnumerable<CompanyDto> GetByIds(IEnumerable<Guid> ids, bool trackChanges);
    (IEnumerable<CompanyDto> companies, string ids) CreateCompanyCollection(IEnumerable<CompanyForCreationDto> companyCollection);
    void DeleteCompany(Guid companyId, bool trackChanges);
}

And the service class:

public void DeleteCompany(Guid companyId, bool trackChanges)
{
    var company = _repository.Company.GetCompany(companyId, trackChanges);
    if (company is null)
        throw new CompanyNotFoundException(companyId);

    _repository.Company.DeleteCompany(company);
    _repository.Save();
}

Finally, let’s modify our controller:

[HttpDelete("{id:guid}")]
public IActionResult DeleteCompany(Guid id)
{
    _service.CompanyService.DeleteCompany(id, trackChanges: false);

    return NoContent();
}

And let’s test our action:https://localhost:5001/api/companies/0AD5B971-FF51-414D-AF01-34187E407557

alt text

It works.

You can check in your database that this company, along with its children, no longer exists.

There we go. We have finished working with DELETE requests and we are ready to continue to the PUT requests.

11 WORKING WITH PUT REQUESTS

In this section, we are going to show you how to update a resource using the PUT request. We are going to update a child resource first, and then we are going to show you how to insert child resources while updating a parent resource.‌

11.1 Updating Employee

In the previous sections, we first changed our interface, then the repository/service classes, and finally the controller. But for the update, this doesn’t have to be the case.‌

Let’s go step by step.

The first thing we are going to do is to create another DTO record for update purposes:

public record EmployeeForUpdateDto(string Name, int Age, string Position);

We do not require the Id property because it will be accepted through the URI, like with the DELETE requests. Additionally, this DTO contains the same properties as the DTO for creation, but there is a conceptual difference between those two DTO classes. One is for updating and the other is for creating. Furthermore, once we get to the validation part, we will understand the additional difference between those two.

Because we have an additional DTO record, we require an additional mapping rule:

CreateMap<EmployeeForUpdateDto, Employee>();

After adding the mapping rule, we can modify the IEmployeeService interface:

public interface IEmployeeService
{
    IEnumerable<EmployeeDto> GetEmployees(Guid companyId, bool trackChanges);
    EmployeeDto GetEmployee(Guid companyId, Guid id, bool trackChanges);
    EmployeeDto CreateEmployeeForCompany(Guid companyId, EmployeeForCreationDto employeeForCreation, bool trackChanges);
    void DeleteEmployeeForCompany(Guid companyId, Guid id, bool trackChanges);
    void UpdateEmployeeForCompany(Guid companyId, Guid id, EmployeeForUpdateDto employeeForUpdate, bool compTrackChanges, bool empTrackChanges);
}

We are declaring a method that contains both id parameters (one for the company and one for the employee), the employeeForUpdate object sent from the client, and two track-changes parameters, again one for the company and one for the employee. We are doing that because we won't track changes while fetching the company entity, but we will track changes while fetching the employee.

That said, let’s modify the EmployeeService class:

public void UpdateEmployeeForCompany(Guid companyId, Guid id, EmployeeForUpdateDto employeeForUpdate, bool compTrackChanges, bool empTrackChanges)
{
    var company = _repository.Company.GetCompany(companyId, compTrackChanges);
    if (company is null)
        throw new CompanyNotFoundException(companyId);

    var employeeEntity = _repository.Employee.GetEmployee(companyId, id, empTrackChanges);
    if (employeeEntity is null)
        throw new EmployeeNotFoundException(id);

    _mapper.Map(employeeForUpdate, employeeEntity);
    _repository.Save();
}

So first, we fetch the company from the database. If it doesn’t exist, we interrupt the flow and send the response to the client. After that, we do the same thing for the employee. But there is one difference here. Pay attention to the way we fetch the company and the way we fetch the employeeEntity. Do you see the difference?

As we’ve already said: the trackChanges parameter will be set to true for the employeeEntity. That’s because we want EF Core to track changes on this entity. This means that as soon as we change any property in this entity, EF Core will set the state of that entity to Modified.

As you can see, we are mapping from the employeeForUpdate object (we will change just the age property in a request) to the employeeEntity — thus changing the state of the employeeEntity object to Modified.

Because our entity has a modified state, it is enough to call the Save method without any additional update actions. As soon as we call the Save method, our entity is going to be updated in the database.

Now, when we have all of these, let’s modify the EmployeesController:

[HttpPut("{id:guid}")]
public IActionResult UpdateEmployeeForCompany(Guid companyId, Guid id, [FromBody] EmployeeForUpdateDto employee)
{
    if (employee is null)
        return BadRequest("EmployeeForUpdateDto object is null");

    _service.EmployeeService.UpdateEmployeeForCompany(companyId, id, employee, compTrackChanges: false, empTrackChanges: true);

    return NoContent();
}

We are annotating this action with the HttpPut attribute and the id route parameter. That means the route for this action is going to be api/companies/{companyId}/employees/{id}.

Then, we check if the employee object is null, and if it is, we return a BadRequest response.

After that, we just call the update method from the service layer and pass false for the company track changes and true for the employee track changes.

Finally, we return the 204 NoContent status.
We can test our action: https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees/80ABBCA8-664D-4B20-B5DE-024705497D4A

alt text

And it works; we get the 204 No Content status.

We can check our executed query through EF Core to confirm that only the Age column is updated:

alt text

Excellent.

You can send the same request with an invalid company id or employee id. In both cases, you should get a 404 response, which is a valid response for this kind of situation.

NOTE: We've changed only the Age property, but we have sent all the other properties with unchanged values as well. Therefore, only Age is updated in the database. But if we sent an object with just the Age property, the other properties would be set to their default values and the whole object would be updated, not just the Age column. That's because PUT is a request for a full update. This is very important to know.
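A minimal illustration of that full-replace behavior, with tuples standing in for the Employee entity and the EmployeeForUpdateDto, and a plain assignment standing in for AutoMapper's Map call (all values are illustrative):

```csharp
using System;

// PUT is a full update: every property of the incoming DTO is written onto
// the stored entity, including properties the client omitted, which were
// deserialized to their defaults.
var entity = (Name: "Sam Raiden", Age: 26, Position: (string?)"Software developer");

// The client sent only a new Age; the omitted Position defaulted to null.
var dto = (Name: "Sam Raiden", Age: 30, Position: (string?)null);

// Full replace: all properties are copied, including the defaulted one.
entity = (dto.Name, dto.Age, dto.Position);

Console.WriteLine(entity.Age);              // 30
Console.WriteLine(entity.Position is null); // True
```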

11.1.1 About the Update Method from the RepositoryBase Class‌

Right now, you might be asking: “Why do we have the Update method in the RepositoryBase class if we are not using it?”

The update action we just executed is a connected update (an update where we use the same context object to fetch the entity and to update it). But sometimes we work with disconnected updates. This kind of update uses different context objects for the fetch and update actions. Sometimes we also receive an object from a client with the Id property already set, so we don't have to fetch it from the database at all. In that situation, all we have to do is inform EF Core to track changes on that entity and set its state to Modified. We can do both with the Update method from our RepositoryBase class. So, you see, having that method is crucial as well.

One note, though. If we use the Update method from our repository, even if we change just the Age property, all properties will be updated in the database.

11.2 Inserting Resources while Updating One

While updating a parent resource, we can create child resources as well without too much effort. EF Core helps us a lot with that process. Let’s see how.‌

The first thing we are going to do is to create a DTO record for update:

public record CompanyForUpdateDto(string Name, string Address, string Country, IEnumerable<EmployeeForCreationDto> Employees);

After this, let’s create a new mapping rule:

CreateMap<CompanyForUpdateDto, Company>();

Then, let’s move on to the interface modification:

public interface ICompanyService
{
    IEnumerable<CompanyDto> GetAllCompanies(bool trackChanges);
    CompanyDto GetCompany(Guid companyId, bool trackChanges);
    CompanyDto CreateCompany(CompanyForCreationDto company);
    IEnumerable<CompanyDto> GetByIds(IEnumerable<Guid> ids, bool trackChanges);
    (IEnumerable<CompanyDto> companies, string ids) CreateCompanyCollection(IEnumerable<CompanyForCreationDto> companyCollection);
    void DeleteCompany(Guid companyId, bool trackChanges);
    void UpdateCompany(Guid companyId, CompanyForUpdateDto companyForUpdate, bool trackChanges);
}

And of course, the service class modification:

public void UpdateCompany(Guid companyId, CompanyForUpdateDto companyForUpdate, bool trackChanges)
{
    var companyEntity = _repository.Company.GetCompany(companyId, trackChanges);
    if (companyEntity is null)
        throw new CompanyNotFoundException(companyId);

    _mapper.Map(companyForUpdate, companyEntity);
    _repository.Save();
}

So again, we fetch our company entity from the database, and if it is null, we just return the NotFound response. But if it’s not null, we map the companyForUpdate DTO to companyEntity and call the Save method.

Right now, we can modify our controller:

[HttpPut("{id:guid}")]
public IActionResult UpdateCompany(Guid id, [FromBody] CompanyForUpdateDto company)
{
    if (company is null)
        return BadRequest("CompanyForUpdateDto object is null");

    _service.CompanyService.UpdateCompany(id, company, trackChanges: true);

    return NoContent();
}

That’s it. You can see that this action is almost the same as the employee update action.

Let’s test this now:
https://localhost:5001/api/companies/3d490a70-94ce-4d15-9494-5248280c2ce3

alt text

We modify the name of the company and attach an employee as well. As a result, we can see 204, which means that the entity has been updated. But what about that new employee?

Let’s inspect our query:

alt text

You can see that we have created the employee entity in the database. EF Core does that job for us because we track the company entity. As soon as mapping occurs, EF Core sets the state of the company entity to Modified and the state of all the new employees to Added. After we call the Save method, the Name property is modified and the employee entity is created in the database.

We are finished with the PUT requests, so let’s continue with PATCH.

12 WORKING WITH PATCH REQUESTS

In the previous chapter, we worked with the PUT request to fully update our resource. But if we want to update our resource only partially, we should use PATCH.‌

The partial update isn’t the only difference between PATCH and PUT. The request body is different as well. For the Company PATCH request, for example, we should use [FromBody]JsonPatchDocument and not [FromBody]Company as we did with the PUT requests.

Additionally, for the PUT request’s media type, we have used application/json — but for the PATCH request’s media type, we should use application/json-patch+json. Even though the first one would be accepted in ASP.NET Core for the PATCH request, the recommendation by REST standards is to use the second one.

Let’s see what the PATCH request body looks like:

[
    {
        "op": "replace",
        "path": "/name",
        "value": "new name"
    },
    {
        "op": "remove",
        "path": "/name"
    }
]

The square brackets represent an array of operations. Every operation is placed between curly brackets. So, in this specific example, we have two operations: Replace and Remove represented by the op property. The path property represents the object’s property that we want to modify and the value property represents a new value.

In this specific example, the first operation replaces the value of the name property with a new name. The second operation removes the name property, thus setting its value back to its default.

There are six different operations for a PATCH request:

OPERATION   REQUEST BODY                                                EXPLANATION
Add         { "op": "add", "path": "/name", "value": "new value" }      Assigns a new value to a required property.
Remove      { "op": "remove", "path": "/name" }                         Sets a default value to a required property.
Replace     { "op": "replace", "path": "/name", "value": "new value" }  Replaces the value of a required property with a new value.
Copy        { "op": "copy", "from": "/name", "path": "/title" }         Copies the value from the property in the "from" part to the property in the "path" part.
Move        { "op": "move", "from": "/name", "path": "/title" }         Moves the value from the property in the "from" part to the property in the "path" part.
Test        { "op": "test", "path": "/name", "value": "new value" }     Tests if a property has a specified value.

After all this theory, we are ready to dive into the coding part.

12.1 Applying PATCH to the Employee Entity

Before we start with the code modification, we have to install two required libraries:‌

• The Microsoft.AspNetCore.JsonPatch library, in the Presentation project, to support the usage of JsonPatchDocument in our controller and

• The Microsoft.AspNetCore.Mvc.NewtonsoftJson library, in the main project, to support request body conversion to a PatchDocument once we send our request.

As you can see, we are still using the NewtonsoftJson library to support the PatchDocument conversion. The official statement from Microsoft is that they are not going to replace it with System.Text.Json: “The main reason is that this will require a huge investment from us, with not a very high value-add for the majority of our customers.”.

By using AddNewtonsoftJson, we would replace the System.Text.Json formatters for all JSON content. We don't want to do that, so we are going to add a simple workaround in the Program class:

NewtonsoftJsonPatchInputFormatter GetJsonPatchInputFormatter() =>
    new ServiceCollection().AddLogging().AddMvc().AddNewtonsoftJson()
        .Services.BuildServiceProvider()
        .GetRequiredService<IOptions<MvcOptions>>().Value.InputFormatters
        .OfType<NewtonsoftJsonPatchInputFormatter>().First();

By adding a method like this in the Program class, we are creating a local function. This function configures support for JSON Patch using Newtonsoft.Json while leaving the other formatters unchanged.

For this to work, we have to include two more namespaces in the class:

using Microsoft.AspNetCore.Mvc.Formatters;
using Microsoft.Extensions.Options;

After that, we have to modify the AddControllers method:

builder.Services.AddControllers(config =>
{
    config.RespectBrowserAcceptHeader = true;
    config.ReturnHttpNotAcceptable = true;
    config.InputFormatters.Insert(0, GetJsonPatchInputFormatter());
}).AddXmlDataContractSerializerFormatters();

We are placing our JsonPatchInputFormatter at index 0 in the InputFormatters list.

We will require a mapping from the Employee type to the EmployeeForUpdateDto type. Therefore, we have to create a mapping rule for that.

If we take a look at the MappingProfile class, we will see that we have a mapping from the EmployeeForUpdateDto to the Employee type:

CreateMap<EmployeeForUpdateDto, Employee>();

But we need it another way. To do so, we are not going to create an additional rule; we can just use the ReverseMap method to help us in the process:

CreateMap<EmployeeForUpdateDto, Employee>().ReverseMap();

The ReverseMap method is also going to configure this rule to execute reverse mapping if we ask for it.

After that, we are going to add two new method contracts to the IEmployeeService interface:

(EmployeeForUpdateDto employeeToPatch, Employee employeeEntity) GetEmployeeForPatch(Guid companyId, Guid id, bool compTrackChanges, bool empTrackChanges);
void SaveChangesForPatch(EmployeeForUpdateDto employeeToPatch, Employee employeeEntity);

Of course, for this to work, we have to add the reference to the Entities project.

Then, we have to implement these two methods in the EmployeeService class:

public (EmployeeForUpdateDto employeeToPatch, Employee employeeEntity) GetEmployeeForPatch(Guid companyId, Guid id, bool compTrackChanges, bool empTrackChanges)
{
    var company = _repository.Company.GetCompany(companyId, compTrackChanges);
    if (company is null)
        throw new CompanyNotFoundException(companyId);

    var employeeEntity = _repository.Employee.GetEmployee(companyId, id, empTrackChanges);
    if (employeeEntity is null)
        throw new EmployeeNotFoundException(id);

    var employeeToPatch = _mapper.Map<EmployeeForUpdateDto>(employeeEntity);

    return (employeeToPatch, employeeEntity);
}

public void SaveChangesForPatch(EmployeeForUpdateDto employeeToPatch, Employee employeeEntity)
{
    _mapper.Map(employeeToPatch, employeeEntity);
    _repository.Save();
}

In the first method, we are trying to fetch both the company and employee from the database and if we can’t find either of them, we stop the execution flow and return the NotFound response to the client. Then, we map the employee entity to the EmployeeForUpdateDto type and return both objects (employeeToPatch and employeeEntity) inside the Tuple to the controller.

The second method just maps from employeeToPatch to employeeEntity and calls the repository's Save method.

Now, we can modify our controller:

[HttpPatch("{id:guid}")]
public IActionResult PartiallyUpdateEmployeeForCompany(Guid companyId, Guid id, [FromBody] JsonPatchDocument<EmployeeForUpdateDto> patchDoc)
{
    if (patchDoc is null)
        return BadRequest("patchDoc object sent from client is null.");

    var result = _service.EmployeeService.GetEmployeeForPatch(companyId, id, compTrackChanges: false, empTrackChanges: true);

    patchDoc.ApplyTo(result.employeeToPatch);

    _service.EmployeeService.SaveChangesForPatch(result.employeeToPatch, result.employeeEntity);

    return NoContent();
}

You can see that our action signature is different from the PUT actions. We are accepting the JsonPatchDocument from the request body. After that, we have familiar code where we check the patchDoc for a null value and, if it is null, return a BadRequest. Then we call the service method that maps from the Employee type to the EmployeeForUpdateDto type; we need to do that because the patchDoc variable can apply only to the EmployeeForUpdateDto type. After the patch is applied, we call another service method to map back to the Employee type (from employeeToPatch to employeeEntity) and save the changes in the database. In the end, we return NoContent.

Don’t forget to include an additional namespace:

using Microsoft.AspNetCore.JsonPatch;

Now, we can send a couple of requests to test this code:

Let’s first send the replace operation:

https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees/80ABBCA8-664D-4B20-B5DE-024705497D4A

alt text

It works; we get the 204 No Content message. Let’s check the same employee:

https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees/80ABBCA8-664D-4B20-B5DE-024705497D4A

alt text

And we see the Age property has been changed.

Let’s send a remove operation in a request:
https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees/80ABBCA8-664D-4B20-B5DE-024705497D4A

alt text

This works as well. Now, if we check our employee, its age is going to be set to 0 (the default value for the int type):

alt text

Finally, let’s return a value of 28 for the Age property:

https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees/80ABBCA8-664D-4B20-B5DE-024705497D4A

alt text

Let’s check the employee now:

alt text

Excellent.

Everything works as expected.

13 VALIDATION

While writing API actions, we have a set of rules that we need to check. If we take a look at the Company class, we can see different data annotation attributes above our properties:‌

alt text

Those attributes serve the purpose to validate our model object while creating or updating resources in the database. But we are not making use of them yet.

In this chapter, we are going to show you how to validate our model objects and how to return an appropriate response to the client if the model is not valid. We need to validate the input, not the output, of our controller actions. This means that we are going to apply this validation to POST, PUT, and PATCH requests, but not to GET requests.

13.1 ModelState, Rerun Validation, and Built-in Attributes

To validate against validation rules applied by Data Annotation attributes, we are going to use the concept of ModelState. It is a dictionary containing the state of the model and model binding validation.‌

It is important to know that model validation occurs after model binding and reports errors when the data sent from the client doesn’t meet our validation criteria. Both model binding and model validation occur before our request reaches an action inside a controller. We are going to use the ModelState.IsValid expression to check those validation rules.

By default, we don’t have to use the ModelState.IsValid expression in Web API projects since, as we explained in section 9.2.1, controllers are decorated with the [ApiController] attribute. But, as we’ve seen, it defaults all the model state errors to 400 – BadRequest and doesn’t allow us to return our custom error messages with a different status code. So, we suppressed it in the Program class.

The response status code, when validation fails, should be 422 Unprocessable Entity. That means that the server understood the content type of the request and the syntax of the request entity is correct, but it was unable to process validation rules applied on the entity inside the request body. If we didn’t suppress the model validation from the [ApiController] attribute, we wouldn’t be able to return this status code (422) since, as we said, it would default to 400.
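As a reminder, that suppression is done through ApiBehaviorOptions. A minimal sketch of what the configuration in the Program class might look like (the surrounding builder code is assumed, and may differ from the book's actual setup):

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

// Disable the [ApiController] automatic 400 response so our actions
// can inspect ModelState themselves and return 422 instead.
builder.Services.Configure<ApiBehaviorOptions>(options =>
{
    options.SuppressModelStateInvalidFilter = true;
});

var app = builder.Build();
app.MapControllers();
app.Run();
```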

13.1.1 Rerun Validation‌

In some cases, we want to repeat our validation. This can happen if, after the initial validation, we compute a value in our code, and assign it to the property of an already validated object.

If this is the case, and we want to run the validation again, we can use the ModelStateDictionary.ClearValidationState method to clear the validation specific to the model that we’ve already validated, and then use the TryValidateModel method:

[HttpPost]
public IActionResult POST([FromBody] Book book)
{
    if (!ModelState.IsValid)
        return UnprocessableEntity(ModelState);

    var newPrice = book.Price - 10;
    book.Price = newPrice;

    ModelState.ClearValidationState(nameof(Book));
    if (!TryValidateModel(book, nameof(Book)))
        return UnprocessableEntity(ModelState);

    _service.CreateBook(book);

    return CreatedAtRoute("BookById", new { id = book.Id }, book);
}

This is just a simple example but it explains how we can revalidate our model object.

13.1.2 Built-in Attributes‌

Validation attributes let us specify validation rules for model properties. At the beginning of this chapter, we saw some validation attributes applied to the Company class. Those attributes (Required and MaxLength) are among the built-in attributes, and of course, there are more than two of them. These are the most commonly used ones:

ATTRIBUTE USAGE
[ValidateNever] Indicates that property or parameter should be excluded from validation.
[Compare] We use it for the properties comparison.
[EmailAddress] Validates the email format of the property.
[Phone] Validates the phone format of the property.
[Range] Validates that the property falls within a specified range.
[RegularExpression] Validates that the property value matches a specified regular expression.
[Required] We use it to prevent a null value for the property.
[StringLength] Validates that a string property value doesn't exceed a specified length limit.

If you want to see a complete list of built-in attributes, you can visit this page: https://learn.microsoft.com/en-us/dotnet/api/system.componentmodel.dataannotations?view=net-6.0
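To see a few of these built-in attributes in action outside a controller, we can run the same Data Annotation checks the framework runs during model binding with Validator.TryValidateObject. The Reservation model and its values below are hypothetical, purely for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class Reservation
{
    [Required(ErrorMessage = "Guest name is required.")]
    [StringLength(50)]
    public string? GuestName { get; set; }

    [EmailAddress]
    public string? Email { get; set; }

    [Range(1, 10)]
    public int Rooms { get; set; }
}

public static class Program
{
    public static void Main()
    {
        // GuestName is missing, the email format is wrong, and Rooms is out of range.
        var model = new Reservation { Email = "not-an-email", Rooms = 0 };

        var results = new List<ValidationResult>();
        var isValid = Validator.TryValidateObject(
            model, new ValidationContext(model), results, validateAllProperties: true);

        Console.WriteLine(isValid);       // False
        Console.WriteLine(results.Count); // 3 failed rules: Required, EmailAddress, Range
    }
}
```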

13.2 Custom Attributes and IValidatableObject

There are scenarios where built-in attributes are not enough and we have to provide some custom logic. For that, we can create a custom attribute by using the ValidationAttribute class, or we can use the IValidatableObject interface.‌

So, let’s see an example of how we can create a custom attribute:

public class ScienceBookAttribute : ValidationAttribute
{
    public BookGenre Genre { get; set; }
    public string Error => $"The genre of the book must be {BookGenre.Science}";

    public ScienceBookAttribute(BookGenre genre)
    {
        Genre = genre;
    }

    protected override ValidationResult? IsValid(object? value, ValidationContext validationContext)
    {
        var book = (Book)validationContext.ObjectInstance;

        if (!book.Genre.Equals(Genre.ToString()))
            return new ValidationResult(Error);

        return ValidationResult.Success;
    }
}

When we apply this attribute, we pass the genre parameter through the constructor. Then, we override the IsValid method. There, we extract the object we want to validate and check whether its Genre property matches the value sent through the constructor. If it doesn’t, we return the Error property as a validation result; otherwise, we return success.

To call this custom attribute, we can do something like this:

public class Book
{
    public int Id { get; set; }

    [Required]
    public string? Name { get; set; }

    [Range(10, int.MaxValue)]
    public int Price { get; set; }

    [ScienceBook(BookGenre.Science)]
    public string? Genre { get; set; }
}

Now we can use the IValidatableObject interface:

public class Book : IValidatableObject
{
    public int Id { get; set; }

    [Required]
    public string? Name { get; set; }

    [Range(10, int.MaxValue)]
    public int Price { get; set; }

    public string? Genre { get; set; }

    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        var errorMessage = $"The genre of the book must be {BookGenre.Science}";

        if (!Genre.Equals(BookGenre.Science.ToString()))
            yield return new ValidationResult(errorMessage, new[] { nameof(Genre) });
    }
}

This validation happens in the model class, where we have to implement the Validate method. The code inside that method is pretty straightforward. Also, pay attention that we don’t have to apply any validation attribute on top of the Genre property.

As we’ve seen from the previous examples, we can create a custom attribute in a separate class and even make it generic so it could be reused for other model objects. This is not the case with the IValidatableObject interface. It is used inside the model class and of course, the validation logic can’t be reused.

So, this could be something you can think about when deciding which one to use.
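To illustrate the reusability point, here is a sketch of a custom attribute written generically enough to be applied to any model. The OneOfAttribute name and its usage are hypothetical; it simply validates that a string property matches one of a fixed set of allowed values:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;

public sealed class OneOfAttribute : ValidationAttribute
{
    private readonly string[] _allowed;

    public OneOfAttribute(params string[] allowed) => _allowed = allowed;

    protected override ValidationResult? IsValid(object? value, ValidationContext validationContext)
    {
        // Null is left to [Required]; we only reject non-null values outside the set.
        if (value is string s && !_allowed.Contains(s))
            return new ValidationResult(
                $"{validationContext.MemberName} must be one of: {string.Join(", ", _allowed)}");

        return ValidationResult.Success;
    }
}

public class Book
{
    [OneOf("Science", "History")]
    public string? Genre { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var book = new Book { Genre = "Poetry" };
        var results = new List<ValidationResult>();
        Console.WriteLine(Validator.TryValidateObject(
            book, new ValidationContext(book), results, validateAllProperties: true)); // False
    }
}
```

Because the attribute never references a specific model type, the same class can decorate properties on any number of DTOs, which is exactly what IValidatableObject cannot offer.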

After all of this theory and code samples, we are ready to implement model validation in our code.

13.3 Validation while Creating Resource

Let’s send another request for the CreateEmployee action, but this time with the invalid request body:‌
https://localhost:5001/api/companies/582ea192-6fb7-44ff-a2a1-08d988ca3ca9/employees

alt text

And we get a 500 Internal Server Error, the generic response when something unhandled happens in our code. But this is not good: a 500 implies the server made an error, which is not the case here. We, as the consumer, sent an invalid model to the API, so the error message should reflect that.

To fix this, let’s modify our EmployeeForCreationDto record because that’s what we deserialize the request body to:

public record EmployeeForCreationDto(
    [Required(ErrorMessage = "Employee name is a required field.")]
    [MaxLength(30, ErrorMessage = "Maximum length for the Name is 30 characters.")]
    string Name,
    [Required(ErrorMessage = "Age is a required field.")]
    int Age,
    [Required(ErrorMessage = "Position is a required field.")]
    [MaxLength(20, ErrorMessage = "Maximum length for the Position is 20 characters.")]
    string Position);

This is how we can apply validation attributes in our positional records. But, in our opinion, positional records start losing readability once the attributes are applied, and for that reason, we prefer init setters when we have to apply validation attributes. So, we are going to do exactly that and modify this positional record:

public record EmployeeForCreationDto
{
    [Required(ErrorMessage = "Employee name is a required field.")]
    [MaxLength(30, ErrorMessage = "Maximum length for the Name is 30 characters.")]
    public string? Name { get; init; }

    [Required(ErrorMessage = "Age is a required field.")]
    public int Age { get; init; }

    [Required(ErrorMessage = "Position is a required field.")]
    [MaxLength(20, ErrorMessage = "Maximum length for the Position is 20 characters.")]
    public string? Position { get; init; }
}

Now, we have to modify our action:

[HttpPost]
public IActionResult CreateEmployeeForCompany(Guid companyId, [FromBody] EmployeeForCreationDto employee)
{
    if (employee is null)
        return BadRequest("EmployeeForCreationDto object is null");

    if (!ModelState.IsValid)
        return UnprocessableEntity(ModelState);

    var employeeToReturn = _service.EmployeeService.CreateEmployeeForCompany(companyId, employee, trackChanges: false);

    return CreatedAtRoute("GetEmployeeForCompany", new { companyId, id = employeeToReturn.Id }, employeeToReturn);
}

As mentioned before in the part about the ModelState dictionary, all we have to do is check the IsValid property and return the UnprocessableEntity response, providing our ModelState.

And that is all.

Let’s send our request one more time:

https://localhost:5001/api/companies/582ea192-6fb7-44ff-a2a1-08d988ca3ca9/employees

alt text

Let’s send an additional request to test the max length rule:

https://localhost:5001/api/companies/582ea192-6fb7-44ff-a2a1-08d988ca3ca9/employees

alt text

Excellent. It works as expected.

The same actions can be applied for the CreateCompany action and CompanyForCreationDto class — and if you check the source code for this chapter, you will find it implemented.

13.3.1 Validating Int Type‌

Let’s create one more request with the request body without the age property:

https://localhost:5001/api/companies/582ea192-6fb7-44ff-a2a1-08d988ca3ca9/employees

alt text

We can see that the age property hasn’t been sent, but in the response body, we don’t see the error message for the age property next to other error messages. That is because the age is of type int and if we don’t send that property, it would be set to a default value, which is 0.

So, on the server-side, validation for the Age property will pass, because it is not null.

To prevent this type of behavior, we have to modify the data annotation attribute on top of the Age property in the EmployeeForCreationDto class:

[Range(18, int.MaxValue, ErrorMessage = "Age is required and it can't be lower than 18")]
public int Age { get; init; }

Now, let’s try to send the same request one more time:
https://localhost:5001/api/companies/582ea192-6fb7-44ff-a2a1-08d988ca3ca9/employees

alt text

Now, we have the Age error message in our response.

If we want, we can add custom error messages in our action:

ModelState.AddModelError(string key, string errorMessage)

With this expression, the additional error message will be included with all the other messages.
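For example, a hypothetical business rule added inside an action might look like this (the rule itself is purely illustrative; any entry added to ModelState makes IsValid false):

```csharp
// Hypothetical extra rule, checked manually inside the action body.
if (employee.Age < 18)
    ModelState.AddModelError(nameof(employee.Age), "Age must be at least 18.");

if (!ModelState.IsValid)
    return UnprocessableEntity(ModelState);
```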

13.4 Validation for PUT Requests

The validation for PUT requests shouldn’t be different from POST requests (except in some cases), but there are still things we have to do to at least optimize our code.‌

But let’s go step by step.

First, let’s add Data Annotation Attributes to the EmployeeForUpdateDto record:

public record EmployeeForUpdateDto
{
    [Required(ErrorMessage = "Employee name is a required field.")]
    [MaxLength(30, ErrorMessage = "Maximum length for the Name is 30 characters.")]
    public string? Name { get; init; }

    [Range(18, int.MaxValue, ErrorMessage = "Age is required and it can't be lower than 18")]
    public int Age { get; init; }

    [Required(ErrorMessage = "Position is a required field.")]
    [MaxLength(20, ErrorMessage = "Maximum length for the Position is 20 characters.")]
    public string? Position { get; init; }
}

Once we have done this, we realize we have a small problem. If we compare this class with the DTO class for creation, we are going to see that they are the same. Of course, we don’t want to repeat ourselves, thus we are going to add some modifications.

Let’s create a new record in the DataTransferObjects folder:

public abstract record EmployeeForManipulationDto
{
    [Required(ErrorMessage = "Employee name is a required field.")]
    [MaxLength(30, ErrorMessage = "Maximum length for the Name is 30 characters.")]
    public string? Name { get; init; }

    [Range(18, int.MaxValue, ErrorMessage = "Age is required and it can't be lower than 18")]
    public int Age { get; init; }

    [Required(ErrorMessage = "Position is a required field.")]
    [MaxLength(20, ErrorMessage = "Maximum length for the Position is 20 characters.")]
    public string? Position { get; init; }
}

We create this record as an abstract record because we want our creation and update DTO records to inherit from it:

public record EmployeeForCreationDto : EmployeeForManipulationDto;

public record EmployeeForUpdateDto : EmployeeForManipulationDto;

Now, we can modify the UpdateEmployeeForCompany action by adding the model validation right after the null check:

if (employee is null)
    return BadRequest("EmployeeForUpdateDto object is null");

if (!ModelState.IsValid)
    return UnprocessableEntity(ModelState);

The same process can be applied to the Company DTO records and actions. You can find it implemented in the source code for this chapter.

Let’s test this:
https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees/80ABBCA8-664D-4B20-B5DE-024705497D4A

alt text

Great.

Everything works well.

13.5 Validation for PATCH Requests

The validation for PATCH requests is a bit different from the previous ones. We are using the ModelState concept again, but this time we first have to pass it to the ApplyTo method:

patchDoc.ApplyTo(employeeToPatch, ModelState);

But once we do this, we are going to get an error. That’s because the current ApplyTo method comes from the JsonPatch namespace, and we need the method with the same name but from the NewtonsoftJson namespace.

Since we have the Microsoft.AspNetCore.Mvc.NewtonsoftJson package installed in the main project, we are going to remove it from there and install it in the Presentation project.
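Assuming the package is now referenced where the controllers configuration can see it, the registration in the Program class might look roughly like this (a sketch; the book's actual configuration may include additional options):

```csharp
var builder = WebApplication.CreateBuilder(args);

// AddNewtonsoftJson swaps in the Newtonsoft.Json-based input/output
// formatters, which supply the ModelState-aware ApplyTo overload.
builder.Services.AddControllers()
    .AddNewtonsoftJson();

var app = builder.Build();
app.MapControllers();
app.Run();
```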

If we navigate to the ApplyTo method declaration we can find two extension methods:

public static class JsonPatchExtensions
{
    public static void ApplyTo<T>(this JsonPatchDocument<T> patchDoc, T objectToApplyTo, ModelStateDictionary modelState) where T : class...
    public static void ApplyTo<T>(this JsonPatchDocument<T> patchDoc, T objectToApplyTo, ModelStateDictionary modelState, string prefix) where T : class...
}

We are using the first one.

After the package installation, the error in the action will disappear.

Now, right below the ApplyTo method call, we can add our familiar validation logic:

patchDoc.ApplyTo(result.employeeToPatch, ModelState);

if (!ModelState.IsValid)
    return UnprocessableEntity(ModelState);

_service.EmployeeService.SaveChangesForPatch(...);

Let’s test this now:

https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees/80ABBCA8-664D-4B20-B5DE-024705497D4A

alt text

You can see that it works as it is supposed to.

But, we have a small problem now. What if we try to send a remove operation, but for the valid path:

alt text

We can see it passes, but this is not good. If you can remember, we said that the remove operation will set the value for the included property to its default value, which is 0. But in the EmployeeForUpdateDto class, we have a Range attribute that doesn’t allow that value to be below 18. So, where is the problem?

Let’s illustrate this for you:

alt text

As you can see, we are validating patchDoc which is completely valid at this moment, but we save employeeEntity to the database. So, we need some additional validation to prevent an invalid employeeEntity from being saved to the database:

patchDoc.ApplyTo(result.employeeToPatch, ModelState);

TryValidateModel(result.employeeToPatch);

if (!ModelState.IsValid)
    return UnprocessableEntity(ModelState);

We can use the TryValidateModel method to validate the already patched employeeToPatch instance. This will trigger validation and every error will make ModelState invalid. After that, we execute a familiar validation check.

Now, we can test this again:

https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees/80ABBCA8-664D-4B20-B5DE-024705497D4A

alt text

And we get 422, which is the expected status code.

14 ASYNCHRONOUS CODE

In this chapter, we are going to convert synchronous code to asynchronous inside ASP.NET Core. First, we are going to learn a bit about asynchronous programming and why we should write async code. Then we are going to take our code from the previous chapters and rewrite it in an async manner.

We are going to modify the code, step by step, to show you how easy it is to convert synchronous code to asynchronous code. Hopefully, this will help you understand how asynchronous code works and how to write it from scratch in your applications.

14.1 What is Asynchronous Programming?

Async programming is a technique that allows a unit of work to run separately from the main application thread.

By using async programming, we can avoid performance bottlenecks and enhance the responsiveness of our application.

How so?

Because we no longer block a thread while waiting for a response (however long that takes). Now, when we send a request to the server, the thread pool delegates a thread to that request. Eventually, that thread finishes its job and returns to the thread pool, freeing itself for the next request. At some point, the data is fetched from the database and the result needs to be sent to the requester. At that time, the thread pool provides another thread to handle that work. Once the work is done, the thread goes back to the thread pool.

It is very important to understand that if we send a request to an endpoint and it takes the application three or more seconds to process that request, we probably won’t be able to execute this request any faster in async mode. It is going to take the same amount of time as the sync request.

Let’s imagine that our thread pool has two threads, and we have used one thread for a first request. Now, a second request arrives and we have to use the second thread from the thread pool. At this point, our thread pool is out of threads. If a third request arrives now, it has to wait for either of the first two requests to complete and return its thread to the thread pool. Only then can the thread pool assign that returned thread to the new request:

alt text

As a result of requests waiting for an available thread, our clients will certainly experience a slowdown. Additionally, if a client has to wait too long, it will receive an error response, usually 503 Service Unavailable. But this is not the only problem. Since the client expects a list of entities from the database, we know this is an I/O operation. So, if we have a lot of records in the database and it takes three seconds for the database to return a result to the API, our thread does nothing but wait for the task to complete. Basically, we block that thread and make it unavailable for three seconds for any additional requests that arrive at our API.

With asynchronous requests, the situation is completely different.

When a request arrives at our API, we still need a thread from a thread pool. So, that leaves us with only one thread left. But because this action is now asynchronous, as soon as our request reaches the I/O point where the database has to process the result for three seconds, the thread is returned to a thread pool. Now we again have two available threads and we can use them for any additional request. After the three seconds when the database returns the result to the API, the thread pool assigns the thread again to handle that response:

alt text

Now that we've cleared that up, we can learn how to implement asynchronous code in .NET Core and .NET 5+.

14.2 Async, Await Keywords and Return Types

The async and await keywords play a crucial part in asynchronous programming. We use the async keyword in the method declaration, and its purpose is to enable the await keyword within that method. So yes, we can’t use the await keyword without first adding the async keyword to the method declaration. Also, using only the async keyword doesn’t make your method asynchronous; quite the opposite, that method is still synchronous.

The await keyword performs an asynchronous wait on its argument. It does that in several steps. The first thing it does is check whether the operation is already complete. If it is, the method continues executing synchronously. Otherwise, the await keyword pauses the async method execution and returns an incomplete task. Once the operation completes, the async method can continue with its execution.
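This completion check is easy to observe with plain tasks. A small, self-contained sketch (hypothetical values, not from the book's project):

```csharp
using System;
using System.Threading.Tasks;

public static class Program
{
    public static async Task Main()
    {
        // Already completed: await sees the finished task and the
        // method simply continues synchronously.
        var cached = Task.FromResult(42);
        Console.WriteLine(await cached); // 42

        // Not yet completed: await pauses the method, hands an incomplete
        // task back to the caller, and resumes here once the delay finishes.
        await Task.Delay(100);
        Console.WriteLine("resumed after the delay");
    }
}
```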

Let’s see this with a simple example:

public async Task<IEnumerable<Company>> GetCompanies()
{
    _logger.LogInfo("Inside the GetCompanies method.");

    var companies = await _repoContext.Companies.ToListAsync();

    return companies;
}

So, even though our method is marked with the async keyword, it starts its execution synchronously. Once we log the required information synchronously, we continue to the next code line. We extract all the companies from the database, and to do that, we use the await keyword. If our database requires some time to process the result and return it, the await keyword pauses the GetCompanies method execution and returns an incomplete task. During that time, the thread is returned to the thread pool, making itself available for another request. After the database operation completes, the async method resumes executing and returns the list of companies.

From this example, we see the async method execution flow. But the question is how the await keyword knows if the operation is completed or not. Well, this is where Task comes into play.

14.2.1 Return Types of the Asynchronous Methods‌

In asynchronous programming, we have three return types:

• Task<TResult>, for an async method that returns a value.

• Task, for an async method that does not return a value.

• void, which we can use for an event handler.

What does this mean?

Well, we can look at this through synchronous programming glasses. If our sync method returns an int, then in async mode it should return Task<int>; or if the sync method returns IEnumerable<string>, then the async method should return Task<IEnumerable<string>>.

But if our sync method returns no value (has a void for the return type), then our async method should return Task. This means that we can use the await keyword inside that method, but without the return keyword.

You may wonder now, why not return Task all the time? Well, we should use void only for the asynchronous event handlers which require a void return type. Other than that, we should always return a Task.

From C# 7.0 onward, we can specify any other return type if that type includes a GetAwaiter method.

It is very important to understand that the Task represents an execution of the asynchronous method and not the result. The Task has several properties that indicate whether the operation was completed successfully or not (Status, IsCompleted, IsCanceled, IsFaulted). With these properties, we can track the flow of our async operations. So, this is the answer to our question. With Task, we can track whether the operation is completed or not. This is also called TAP (Task-based Asynchronous Pattern).
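A small sketch of inspecting those properties on plain tasks (hypothetical values, purely to show the API):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class Program
{
    public static void Main()
    {
        // A task that finished successfully.
        var done = Task.CompletedTask;
        Console.WriteLine(done.IsCompleted); // True
        Console.WriteLine(done.Status);      // RanToCompletion

        // A task that completed with an exception.
        var failed = Task.FromException(new InvalidOperationException());
        Console.WriteLine(failed.IsFaulted); // True

        // A task that was canceled (the token is created pre-canceled).
        var canceled = Task.FromCanceled(new CancellationToken(canceled: true));
        Console.WriteLine(canceled.IsCanceled); // True
    }
}
```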

Now, when we have all the information, let’s do some refactoring in our completely synchronous code.

14.2.2 The IRepositoryBase Interface and the RepositoryBase Class Explanation‌

We won’t be changing the mentioned interface and class. That’s because we want to leave a possibility for the repository user classes to have either sync or async method execution. Sometimes, the async code could become slower than the sync one because EF Core’s async commands take slightly longer to execute (due to extra code for handling the threading), so leaving this option is always a good choice.

It is general advice to use async code wherever possible, but if we notice that our async code runs slower, we should switch back to the sync version.

14.3 Modifying the ICompanyRepository Interface and the CompanyRepository Class

In the Contracts project, we can find the ICompanyRepository interface with all the synchronous method signatures which we should change.‌

So, let’s do that:

public interface ICompanyRepository
{
    Task<IEnumerable<Company>> GetAllCompaniesAsync(bool trackChanges);
    Task<Company> GetCompanyAsync(Guid companyId, bool trackChanges);
    void CreateCompany(Company company);
    Task<IEnumerable<Company>> GetByIdsAsync(IEnumerable<Guid> ids, bool trackChanges);
    void DeleteCompany(Company company);
}

The Create and Delete method signatures are left synchronous. That’s because these methods don’t execute anything against the database; all they do is change the state of the entity to Added or Deleted in the change tracker.

So, in accordance with the interface changes, let’s modify our CompanyRepository class, which we can find in the Repository project:

public async Task<IEnumerable<Company>> GetAllCompaniesAsync(bool trackChanges) =>
    await FindAll(trackChanges)
        .OrderBy(c => c.Name)
        .ToListAsync();

public async Task<Company> GetCompanyAsync(Guid companyId, bool trackChanges) =>
    await FindByCondition(c => c.Id.Equals(companyId), trackChanges)
        .SingleOrDefaultAsync();

public void CreateCompany(Company company) => Create(company);

public async Task<IEnumerable<Company>> GetByIdsAsync(IEnumerable<Guid> ids, bool trackChanges) =>
    await FindByCondition(x => ids.Contains(x.Id), trackChanges)
        .ToListAsync();

public void DeleteCompany(Company company) => Delete(company);

We only have to change these methods in our repository class.

14.4 IRepositoryManager and RepositoryManager Changes

If we inspect the mentioned interface and the class, we will see the Save method, which calls the EF Core’s SaveChanges method. We have to change that as well:‌

public interface IRepositoryManager
{
    ICompanyRepository Company { get; }
    IEmployeeRepository Employee { get; }
    Task SaveAsync();
}

And the RepositoryManager class modification:

public async Task SaveAsync() => await _repositoryContext.SaveChangesAsync();

Because the SaveAsync(), ToListAsync()... methods are awaitable, we may use the await keyword; thus, our methods need to have the async keyword and Task as a return type.

Using the await keyword is not mandatory, though. Of course, if we don’t use it, our SaveAsync() method will execute synchronously — and that is not our goal here.

14.5 Updating the Service layer

Again, we have to start with the interface modification:‌

public interface ICompanyService
{
    Task<IEnumerable<CompanyDto>> GetAllCompaniesAsync(bool trackChanges);
    Task<CompanyDto> GetCompanyAsync(Guid companyId, bool trackChanges);
    Task<CompanyDto> CreateCompanyAsync(CompanyForCreationDto company);
    Task<IEnumerable<CompanyDto>> GetByIdsAsync(IEnumerable<Guid> ids, bool trackChanges);
    Task<(IEnumerable<CompanyDto> companies, string ids)> CreateCompanyCollectionAsync(IEnumerable<CompanyForCreationDto> companyCollection);
    Task DeleteCompanyAsync(Guid companyId, bool trackChanges);
    Task UpdateCompanyAsync(Guid companyId, CompanyForUpdateDto companyForUpdate, bool trackChanges);
}

And then, let’s modify the class methods one by one.

GetAllCompanies:

public async Task<IEnumerable<CompanyDto>> GetAllCompaniesAsync(bool trackChanges)
{
    var companies = await _repository.Company.GetAllCompaniesAsync(trackChanges);

    var companiesDto = _mapper.Map<IEnumerable<CompanyDto>>(companies);

    return companiesDto;
}

GetCompany:

public async Task<CompanyDto> GetCompanyAsync(Guid id, bool trackChanges)
{
    var company = await _repository.Company.GetCompanyAsync(id, trackChanges);
    if (company is null)
        throw new CompanyNotFoundException(id);

    var companyDto = _mapper.Map<CompanyDto>(company);

    return companyDto;
}

CreateCompany:

public async Task<CompanyDto> CreateCompanyAsync(CompanyForCreationDto company)
{
    var companyEntity = _mapper.Map<Company>(company);

    _repository.Company.CreateCompany(companyEntity);
    await _repository.SaveAsync();

    var companyToReturn = _mapper.Map<CompanyDto>(companyEntity);

    return companyToReturn;
}

GetByIds:

public async Task<IEnumerable<CompanyDto>> GetByIdsAsync(IEnumerable<Guid> ids, bool trackChanges)
{
    if (ids is null)
        throw new IdParametersBadRequestException();

    var companyEntities = await _repository.Company.GetByIdsAsync(ids, trackChanges);
    if (ids.Count() != companyEntities.Count())
        throw new CollectionByIdsBadRequestException();

    var companiesToReturn = _mapper.Map<IEnumerable<CompanyDto>>(companyEntities);

    return companiesToReturn;
}

CreateCompanyCollection:

public async Task<(IEnumerable<CompanyDto> companies, string ids)> CreateCompanyCollectionAsync(IEnumerable<CompanyForCreationDto> companyCollection)
{
    if (companyCollection is null)
        throw new CompanyCollectionBadRequest();

    var companyEntities = _mapper.Map<IEnumerable<Company>>(companyCollection);
    foreach (var company in companyEntities)
    {
        _repository.Company.CreateCompany(company);
    }

    await _repository.SaveAsync();

    var companyCollectionToReturn = _mapper.Map<IEnumerable<CompanyDto>>(companyEntities);
    var ids = string.Join(",", companyCollectionToReturn.Select(c => c.Id));

    return (companies: companyCollectionToReturn, ids: ids);
}

DeleteCompany:

public async Task DeleteCompanyAsync(Guid companyId, bool trackChanges)
{
    var company = await _repository.Company.GetCompanyAsync(companyId, trackChanges);
    if (company is null)
        throw new CompanyNotFoundException(companyId);

    _repository.Company.DeleteCompany(company);
    await _repository.SaveAsync();
}

UpdateCompany:

public async Task UpdateCompanyAsync(Guid companyId, CompanyForUpdateDto companyForUpdate, bool trackChanges)
{
    var companyEntity = await _repository.Company.GetCompanyAsync(companyId, trackChanges);
    if (companyEntity is null)
        throw new CompanyNotFoundException(companyId);

    _mapper.Map(companyForUpdate, companyEntity);
    await _repository.SaveAsync();
}

That’s all the changes we have to make in the CompanyService class.

Now we can move on to the controller modification.

14.6 Controller Modification

Finally, we need to modify all of our actions in the CompaniesController to work asynchronously.

So, let’s first start with the GetCompanies method:

[HttpGet]
public async Task<IActionResult> GetCompanies()
{
    var companies = await _service.CompanyService.GetAllCompaniesAsync(trackChanges: false);

    return Ok(companies);
}

We haven’t changed much in this action. We’ve just changed the return type and added the async keyword to the method signature. In the method body, we can now await the GetAllCompaniesAsync() method. And that is pretty much what we should do in all the actions in our controller.

NOTE: We’ve changed all the method names in the repository and service layers by adding the Async suffix, but we didn’t do that for the controller’s actions. The main reason is that when users call a method from your service or repository layers, they can see right away from the method name whether it is synchronous or asynchronous. Also, your layers are not limited to only sync or only async methods; you can have two methods that do the same thing, one sync and one async, and in that case you want a name distinction between them. For the controller’s actions, this is not the case: we don’t target actions by their names but by their routes, so the action name doesn’t add the value it does for method names.

So to continue, let’s modify all the other actions.

GetCompany:

[HttpGet("{id:guid}", Name = "CompanyById")] public async Task<IActionResult> GetCompany(Guid id) { var company = await _service.CompanyService.GetCompanyAsync(id, trackChanges: false); return Ok(company); }

GetCompanyCollection:

[HttpGet("collection/({ids})", Name = "CompanyCollection")] public async Task<IActionResult> GetCompanyCollection ([ModelBinder(BinderType = typeof(ArrayModelBinder))]IEnumerable<Guid> ids) { var companies = await _service.CompanyService.GetByIdsAsync(ids, trackChanges: false); return Ok(companies); }

CreateCompany:

[HttpPost]
public async Task<IActionResult> CreateCompany([FromBody] CompanyForCreationDto company) { if (company is null) return BadRequest("CompanyForCreationDto object is null"); if (!ModelState.IsValid) return UnprocessableEntity(ModelState); var createdCompany = await _service.CompanyService.CreateCompanyAsync(company); return CreatedAtRoute("CompanyById", new { id = createdCompany.Id }, createdCompany); }

CreateCompanyCollection:

[HttpPost("collection")] public async Task<IActionResult> CreateCompanyCollection ([FromBody] IEnumerable<CompanyForCreationDto> companyCollection) { var result = await _service.CompanyService.CreateCompanyCollectionAsync(companyCollection); return CreatedAtRoute("CompanyCollection", new { result.ids }, result.companies); }

DeleteCompany:

[HttpDelete("{id:guid}")] public async Task<IActionResult> DeleteCompany(Guid id) { await _service.CompanyService.DeleteCompanyAsync(id, trackChanges: false); return NoContent(); }

UpdateCompany:

[HttpPut("{id:guid}")] public async Task<IActionResult> UpdateCompany(Guid id, [FromBody] CompanyForUpdateDto company) { if (company is null) return BadRequest("CompanyForUpdateDto object is null"); await _service.CompanyService.UpdateCompanyAsync(id, company, trackChanges: true); return NoContent(); }

Excellent. Now we are talking async.

Of course, we have the Employee entity as well and all of these steps have to be implemented for the EmployeeRepository class, IEmployeeRepository interface, and EmployeesController.

You can always refer to the source code for this chapter if you have any trouble implementing the async code for the Employee entity.

After the async implementation in the Employee classes, you can try to send different requests (from any chapter) to test your async actions. All of them should work as before, without errors, but this time in an asynchronous manner.

14.7 Continuation in Asynchronous Programming

The await keyword does three things:‌

• It helps us extract the result from the async operation – we already learned about that

• Validates the success of the operation

• Provides the Continuation for executing the rest of the code in the async method

So, in our GetCompanyAsync service method, all the code after awaiting an async operation is executed inside the continuation if the async operation was successful.

When we talk about continuation, it can be confusing, because in many resources you can read about the SynchronizationContext and capturing the current context to enable that continuation. When we await a task, the request context is captured at the point where await pauses the method execution. Once the method is ready to resume, the application takes a thread from the thread pool, assigns it to the captured context (the SynchronizationContext), and resumes the execution there. But that is how classic ASP.NET applications behave.

We don’t have a SynchronizationContext in ASP.NET Core applications. ASP.NET Core avoids capturing and queuing the context; all it does is take a thread from the thread pool and assign it to the request. So, there is a lot less background work for the application to do.

One more thing. We are not limited to a single continuation. This means that in a single method, we can use multiple await keywords.
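To make this concrete, here is a minimal, self-contained sketch (not from the project) of a method with two await keywords, where each await introduces its own continuation:

```csharp
using System;
using System.Threading.Tasks;

// A hypothetical method with two awaits; the code after each await runs
// in a continuation once the awaited task completes successfully.
static async Task<int> SumAsync()
{
    var first = await Task.Run(() => 2);   // continuation #1 resumes here
    var second = await Task.Run(() => 3);  // continuation #2 resumes here
    return first + second;                 // executed inside the last continuation
}

var result = await SumAsync();
Console.WriteLine(result); // 5
```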

14.8 Common Pitfalls

In our GetAllCompaniesAsync repository method if we didn’t know any better, we could’ve been tempted to use the Result property instead of the await keyword:‌

public async Task<IEnumerable<Company>> GetAllCompaniesAsync(bool trackChanges) => FindAll(trackChanges) .OrderBy(c => c.Name) .ToListAsync() .Result;

We can see that the Result property returns the result we require:

// Summary: // Gets the result value of this System.Threading.Tasks.Task`1. // // Returns: // The result value of this System.Threading.Tasks.Task`1, which // is of the same type as the task's type parameter. public TResult Result { get... }

But don’t use the Result property.

With this code, we block the thread and potentially cause a deadlock in the application, which is exactly what we are trying to avoid by using the async and await keywords. The same applies to the Wait method that we can call on a Task.
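As an illustration only, the sketch below contrasts the two approaches. Note that a plain console app has no SynchronizationContext, so calling .Result here merely blocks the calling thread rather than deadlocking; in environments that do have one (classic ASP.NET, UI frameworks), the same pattern can deadlock:

```csharp
using System;
using System.Threading.Tasks;

static Task<int> GetValueAsync() => Task.Run(() => 42);

// Preferred: await frees the calling thread while the task runs.
var awaited = await GetValueAsync();

// Dangerous: .Result (like Wait) blocks the calling thread until the
// task finishes, and can deadlock where a SynchronizationContext exists.
var blocked = GetValueAsync().Result;

Console.WriteLine(awaited == blocked); // True
```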

So, that’s it regarding the asynchronous implementation in our project. We’ve learned a lot of useful things from this section and we can move on to the next one – Action filters.

15 ACTION FILTERS

Filters in .NET offer a great way to hook into the MVC action invocation pipeline. We can use filters to extract code that can be reused, making our actions cleaner and more maintainable. Some filters, such as the authorization filter, are already provided by .NET, and there are custom ones that we can create ourselves.‌

There are different filter types:

• Authorization filters – They run first to determine whether a user is authorized for the current request.

• Resource filters – They run right after the authorization filters and are very useful for caching and performance.

• Action filters – They run right before and after action method execution.

• Exception filters – They are used to handle exceptions before the response body is populated.

• Result filters – They run before and after the execution of the action method’s result.

In this chapter, we are going to talk about Action filters and how to use them to create cleaner and more reusable code in our Web API.

15.1 Action Filters Implementation

To create an Action filter, we need to create a class that implements either the IActionFilter interface or the IAsyncActionFilter interface, or that inherits from the ActionFilterAttribute class, which is an implementation of IActionFilter, IAsyncActionFilter, and a few other interfaces as well:‌

public abstract class ActionFilterAttribute : Attribute, IActionFilter, IFilterMetadata, IAsyncActionFilter, IResultFilter, IAsyncResultFilter, IOrderedFilter

To implement the synchronous Action filter that runs before and after action method execution, we need to implement the OnActionExecuting and OnActionExecuted methods:

namespace ActionFilters.Filters { public class ActionFilterExample : IActionFilter { public void OnActionExecuting(ActionExecutingContext context) { // our code before action executes } public void OnActionExecuted(ActionExecutedContext context) { // our code after action executes } } }

We can do the same thing with an asynchronous filter by implementing the IAsyncActionFilter interface, but then we have only one method to implement: OnActionExecutionAsync:

namespace ActionFilters.Filters { public class AsyncActionFilterExample : IAsyncActionFilter { public async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next) { // execute any code before the action executes var result = await next(); // execute any code after the action executes } } }

15.2 The Scope of Action Filters

Like the other types of filters, the action filter can be added to different scope levels: Global, Action, and Controller.‌

If we want to use our filter globally, we need to register it inside the AddControllers() method in the Program class:

builder.Services.AddControllers(config => { config.Filters.Add(new GlobalFilterExample()); });

But if we want to use our filter as a service type on the Action or Controller level, we need to register it, but as a service in the IoC container:

builder.Services.AddScoped<ActionFilterExample>(); builder.Services.AddScoped<ControllerFilterExample>();

Finally, to use a filter registered on the Action or Controller level, we need to place it on top of the Controller or Action as a ServiceType:

namespace AspNetCore.Controllers { [ServiceFilter(typeof(ControllerFilterExample))] [Route("api/[controller]")] [ApiController] public class TestController : ControllerBase { [HttpGet] [ServiceFilter(typeof(ActionFilterExample))] public IEnumerable<string> Get() { return new string[] { "example", "data" }; } } }

15.3 Order of Invocation

The order in which our filters are executed is as follows:‌

[Figure: the order in which the filter methods are invoked]

Of course, we can change the order of invocation by adding the Order property to the invocation statement:

namespace AspNetCore.Controllers { [ServiceFilter(typeof(ControllerFilterExample), Order = 2)] [Route("api/[controller]")] [ApiController] public class TestController : ControllerBase { [HttpGet] [ServiceFilter(typeof(ActionFilterExample), Order = 1)] public IEnumerable<string> Get() { return new string[] { "example", "data" }; } } }

Or something like this on top of the same action:

[HttpGet]
[ServiceFilter(typeof(ActionFilterExample), Order = 2)] [ServiceFilter(typeof(ActionFilterExample2), Order = 1)] public IEnumerable<string> Get() { return new string[] { "example", "data" }; }
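The nesting that the Order property produces can be sketched with a toy model (this is not the real MVC pipeline, just an illustration): the filter with the lower Order runs first on the way into the action and last on the way out:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var log = new List<string>();
// (filter name, Order value) pairs, as if placed on one action.
var filters = new[] { ("FilterA", 2), ("FilterB", 1) };

// Before the action: ascending Order.
foreach (var (name, _) in filters.OrderBy(f => f.Item2))
    log.Add($"{name}.OnActionExecuting");

log.Add("Action");

// After the action: descending Order, so the filters unwind like a stack.
foreach (var (name, _) in filters.OrderByDescending(f => f.Item2))
    log.Add($"{name}.OnActionExecuted");

Console.WriteLine(string.Join(" -> ", log));
// FilterB.OnActionExecuting -> FilterA.OnActionExecuting -> Action -> FilterA.OnActionExecuted -> FilterB.OnActionExecuted
```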

15.4 Improving the Code with Action Filters

Our actions are clean and readable without try-catch blocks due to global exception handling and a service layer implementation, but we can improve them even further.‌

So, let’s start with the validation code from the POST and PUT actions.

15.5 Validation with Action Filters

If we take a look at our POST and PUT actions, we can notice the repeated code in which we validate our Company model:‌

if (company is null) return BadRequest("CompanyForUpdateDto object is null"); if (!ModelState.IsValid) return UnprocessableEntity(ModelState);

We can extract that code into a custom Action Filter class, thus making this code reusable and the action cleaner.

So, let’s do that.

Let’s create a new folder in our solution explorer and name it ActionFilters. Then, inside that folder, we are going to create a new ValidationFilterAttribute class:

public class ValidationFilterAttribute : IActionFilter { public ValidationFilterAttribute() {} public void OnActionExecuting(ActionExecutingContext context) { } public void OnActionExecuted(ActionExecutedContext context){} }

Now we are going to modify the OnActionExecuting method:

public void OnActionExecuting(ActionExecutingContext context) { var action = context.RouteData.Values["action"]; var controller = context.RouteData.Values["controller"]; var param = context.ActionArguments .SingleOrDefault(x => x.Value.ToString().Contains("Dto")).Value; if (param is null) { context.Result = new BadRequestObjectResult($"Object is null. Controller: {controller}, action: {action}"); return; } if (!context.ModelState.IsValid) context.Result = new UnprocessableEntityObjectResult(context.ModelState); }

We are using the context parameter to retrieve different values that we need inside this method. With the RouteData.Values dictionary, we can get the values produced by routes on the current routing path. Since we need the name of the action and the controller, we extract them from the Values dictionary.

Additionally, we use the ActionArguments dictionary to extract the DTO parameter that we send to the POST and PUT actions. If that parameter is null, we set the Result property of the context object to a new instance of the BadRequestObjectResult class. If the model is invalid, we create a new instance of the UnprocessableEntityObjectResult class and pass in the ModelState.
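The Contains("Dto") check works because a class that doesn’t override ToString() returns its type name, so a bound CompanyForCreationDto argument stringifies to something containing "Dto", while route values such as a Guid don’t. Here is a tiny sketch of that mechanism (with the caveat that the trick breaks if a DTO ever overrides ToString()):

```csharp
using System;

// Default ToString() on an object that doesn't override it returns the type
// name. A DTO instance such as CompanyForCreationDto would therefore print
// something like "Entities.CompanyForCreationDto", which is what the
// Contains("Dto") check matches on.
object plain = new object();
Console.WriteLine(plain.ToString()); // System.Object

// Route arguments such as a Guid don't match, so SingleOrDefault skips them.
object guidArg = Guid.NewGuid();
Console.WriteLine(guidArg.ToString().Contains("Dto")); // False
```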

Next, let’s register this action filter in the Program class above the AddControllers method:

builder.Services.AddScoped<ValidationFilterAttribute>();

Finally, let’s remove the mentioned validation code from our actions and call this action filter as a service.

POST:

[HttpPost] [ServiceFilter(typeof(ValidationFilterAttribute))] public async Task<IActionResult> CreateCompany([FromBody] CompanyForCreationDto company) {
var createdCompany = await _service.CompanyService.CreateCompanyAsync(company); return CreatedAtRoute("CompanyById", new { id = createdCompany.Id }, createdCompany); }

PUT:

[HttpPut("{id:guid}")] [ServiceFilter(typeof(ValidationFilterAttribute))] public async Task<IActionResult> UpdateCompany(Guid id, [FromBody] CompanyForUpdateDto company) { await _service.CompanyService.UpdateCompanyAsync(id, company, trackChanges: true); return NoContent(); }

Excellent.

This code is much cleaner and more readable now without the validation part. Furthermore, the validation part is now reusable for the POST and PUT actions for both the Company and Employee DTO objects.

If we send a POST request with an invalid model to https://localhost:5001/api/companies, for example, we will get the required response:

[Figure: validation error response for an invalid Company model]

We can apply this action filter to the POST and PUT actions in the EmployeesController the same way we did in the CompaniesController and test it as well:
https://localhost:5001/api/companies/53a1237a-3ed3-4462-b9f0-5a7bb1056a33/employees

[Figure: validation error response for the Employees endpoint]

15.6 Refactoring the Service Layer

Because we are already working on making our code reusable in our actions, we can review our classes from the service layer.‌

Let’s inspect the CompanyService class first.

Inside the class, we can find three methods (GetCompanyAsync, DeleteCompanyAsync, and UpdateCompanyAsync) where we repeat the same code:

var company = await _repository.Company.GetCompanyAsync(id, trackChanges); if (company is null) throw new CompanyNotFoundException(id);

This is something we can extract in a private method in the same class:

private async Task<Company> GetCompanyAndCheckIfItExists(Guid id, bool trackChanges) { var company = await _repository.Company.GetCompanyAsync(id, trackChanges); if (company is null) throw new CompanyNotFoundException(id); return company; }

And then we can modify these methods.

GetCompanyAsync:

public async Task<CompanyDto> GetCompanyAsync(Guid id, bool trackChanges) { var company = await GetCompanyAndCheckIfItExists(id, trackChanges); var companyDto = _mapper.Map<CompanyDto>(company); return companyDto; }

DeleteCompanyAsync:

public async Task DeleteCompanyAsync(Guid companyId, bool trackChanges) { var company = await GetCompanyAndCheckIfItExists(companyId, trackChanges); _repository.Company.DeleteCompany(company); await _repository.SaveAsync(); }

UpdateCompanyAsync:

public async Task UpdateCompanyAsync(Guid companyId, CompanyForUpdateDto companyForUpdate, bool trackChanges) { var company = await GetCompanyAndCheckIfItExists(companyId, trackChanges); _mapper.Map(companyForUpdate, company); await _repository.SaveAsync(); }

Now, this looks much better without code repetition.

Furthermore, we can find code repetition in almost all the methods inside the EmployeeService class:

var company = await _repository.Company.GetCompanyAsync(companyId, trackChanges); if (company is null) throw new CompanyNotFoundException(companyId); var employeeDb = await _repository.Employee.GetEmployeeAsync(companyId, id, trackChanges); if (employeeDb is null) throw new EmployeeNotFoundException(id); 

In some methods, we can find just the first check and in several others, we can find both of them.

So, let’s extract these checks into two separate methods:

private async Task CheckIfCompanyExists(Guid companyId, bool trackChanges) { var company = await _repository.Company.GetCompanyAsync(companyId, trackChanges); if (company is null)
throw new CompanyNotFoundException(companyId); } private async Task<Employee> GetEmployeeForCompanyAndCheckIfItExists (Guid companyId, Guid id, bool trackChanges) { var employeeDb = await _repository.Employee.GetEmployeeAsync(companyId, id, trackChanges); if (employeeDb is null) throw new EmployeeNotFoundException(id); return employeeDb; }

With these two extracted methods in place, we can refactor all the other methods in the class.

GetEmployeesAsync:

public async Task<IEnumerable<EmployeeDto>> GetEmployeesAsync(Guid companyId, bool trackChanges) { await CheckIfCompanyExists(companyId, trackChanges); var employeesFromDb = await _repository.Employee.GetEmployeesAsync(companyId, trackChanges); var employeesDto = _mapper.Map<IEnumerable<EmployeeDto>>(employeesFromDb); return employeesDto; }

GetEmployeeAsync:

public async Task<EmployeeDto> GetEmployeeAsync(Guid companyId, Guid id, bool trackChanges) { await CheckIfCompanyExists(companyId, trackChanges); var employeeDb = await GetEmployeeForCompanyAndCheckIfItExists(companyId, id, trackChanges); var employee = _mapper.Map<EmployeeDto>(employeeDb); return employee; }

CreateEmployeeForCompanyAsync:

public async Task<EmployeeDto> CreateEmployeeForCompanyAsync(Guid companyId, EmployeeForCreationDto employeeForCreation, bool trackChanges) { await CheckIfCompanyExists(companyId, trackChanges); var employeeEntity = _mapper.Map<Employee>(employeeForCreation); _repository.Employee.CreateEmployeeForCompany(companyId, employeeEntity); await _repository.SaveAsync();
var employeeToReturn = _mapper.Map<EmployeeDto>(employeeEntity); return employeeToReturn; }

DeleteEmployeeForCompanyAsync:

public async Task DeleteEmployeeForCompanyAsync(Guid companyId, Guid id, bool trackChanges) { await CheckIfCompanyExists(companyId, trackChanges); var employeeDb = await GetEmployeeForCompanyAndCheckIfItExists(companyId, id, trackChanges); _repository.Employee.DeleteEmployee(employeeDb); await _repository.SaveAsync(); }

UpdateEmployeeForCompanyAsync:

public async Task UpdateEmployeeForCompanyAsync(Guid companyId, Guid id, EmployeeForUpdateDto employeeForUpdate, bool compTrackChanges, bool empTrackChanges) { await CheckIfCompanyExists(companyId, compTrackChanges); var employeeDb = await GetEmployeeForCompanyAndCheckIfItExists(companyId, id, empTrackChanges); _mapper.Map(employeeForUpdate, employeeDb); await _repository.SaveAsync(); }

GetEmployeeForPatchAsync:

public async Task<(EmployeeForUpdateDto employeeToPatch, Employee employeeEntity)> GetEmployeeForPatchAsync (Guid companyId, Guid id, bool compTrackChanges, bool empTrackChanges) { await CheckIfCompanyExists(companyId, compTrackChanges); var employeeDb = await GetEmployeeForCompanyAndCheckIfItExists(companyId, id, empTrackChanges); var employeeToPatch = _mapper.Map<EmployeeForUpdateDto>(employeeDb); return (employeeToPatch: employeeToPatch, employeeEntity: employeeDb); }

Now, all of the methods are cleaner and easier to maintain since our validation code is in a single place, and if we need to modify these validations, there’s only one place we need to change.

Additionally, if you want you can create a new class and extract these methods, register that class as a service, inject it into our service classes and use the validation methods. It is up to you how you want to do it.

So, we have seen how to use action filters to clean up our action methods, and also how to extract methods to make our services cleaner and easier to maintain.

With that out of the way, we can continue to Paging.

16 PAGING

We have covered a lot of interesting features while creating our Web API project, but there are still things to do.‌

So, in this chapter, we’re going to learn how to implement paging in ASP.NET Core Web API. It is one of the most important concepts in building RESTful APIs.

If we inspect the GetEmployeesForCompany action in the EmployeesController, we can see that we return all the employees for the single company.

But we don’t want to return a collection of all resources when querying our API. That can cause performance issues and it’s in no way optimized for public or private APIs. It can cause massive slowdowns and even application crashes in severe cases.

Of course, we should learn a little more about Paging before we dive into code implementation.

16.1 What is Paging?

Paging refers to getting partial results from an API. Imagine having millions of results in the database and having your application try to return all of them at once.‌

Not only would that be an extremely ineffective way of returning the results, but it could also possibly have devastating effects on the application itself or the hardware it runs on. Moreover, every client has limited memory resources and it needs to restrict the number of shown results.

Thus, we need a way to return a set number of results to the client in order to avoid these consequences. Let’s see how we can do that.

16.2 Paging Implementation

Mind you, we don’t want to change the base repository logic or implement‌ any business logic in the controller.

What we want to achieve is something like this: https://localhost:5001/api/companies/companyId/employees?pageNumber=2&pageSize=2. This should return the second set of two employees we have in our database.

We also want to constrain our API not to return all the employees even if someone calls https://localhost:5001/api/companies/companyId/employees.

Let's start with the controller modification by modifying the GetEmployeesForCompany action:

[HttpGet] public async Task<IActionResult> GetEmployeesForCompany(Guid companyId, [FromQuery] EmployeeParameters employeeParameters) { var employees = await _service.EmployeeService.GetEmployeesAsync(companyId, trackChanges: false); return Ok(employees); }

A few things to take note of here:

• We’re using [FromQuery] to point out that we’ll be using query parameters to define which page and how many employees we are requesting.

• The EmployeeParameters class is the container for the actual parameters for the Employee entity.

We also need to actually create the EmployeeParameters class. So, let’s first create a RequestFeatures folder in the Shared project and then inside, create the required classes.

First the RequestParameters class:

public abstract class RequestParameters
{ const int maxPageSize = 50; public int PageNumber { get; set; } = 1; private int _pageSize = 10; public int PageSize { get { return _pageSize; } set { _pageSize = (value > maxPageSize) ? maxPageSize : value; } } }

And then the EmployeeParameters class:

public class EmployeeParameters : RequestParameters { }

We create an abstract class to hold the common properties for all the entities in our project, and a single EmployeeParameters class that will hold the specific parameters. It is empty now, but soon it won’t be.

In the abstract class, we are using the maxPageSize constant to restrict our API to a maximum of 50 rows per page. We have two public properties – PageNumber and PageSize. If not set by the caller, PageNumber will be set to 1, and PageSize to 10.
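The clamping rule in the PageSize setter can be checked in isolation; this sketch applies the same expression with a requested size above the limit:

```csharp
using System;

const int maxPageSize = 50; // same constant as in RequestParameters
int requestedSize = 100;    // what a caller might send in the query string

// The exact expression from the PageSize setter.
int pageSize = (requestedSize > maxPageSize) ? maxPageSize : requestedSize;

Console.WriteLine(pageSize); // 50
```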

Now we can return to the controller and import a using directive for the EmployeeParameters class:

using Shared.RequestFeatures;

After that change, let’s implement the most important part — the repository logic. We need to modify the GetEmployeesAsync method in the IEmployeeRepository interface and the EmployeeRepository class.

So, first the interface modification:

public interface IEmployeeRepository { Task<IEnumerable<Employee>> GetEmployeesAsync(Guid companyId,
EmployeeParameters employeeParameters, bool trackChanges); Task<Employee> GetEmployeeAsync(Guid companyId, Guid id, bool trackChanges); void CreateEmployeeForCompany(Guid companyId, Employee employee); void DeleteEmployee(Employee employee); }

As Visual Studio suggests, we have to add the reference to the Shared project.

After that, let’s modify the repository logic:

public async Task<IEnumerable<Employee>> GetEmployeesAsync(Guid companyId, EmployeeParameters employeeParameters, bool trackChanges) => await FindByCondition(e => e.CompanyId.Equals(companyId), trackChanges) .OrderBy(e => e.Name) .Skip((employeeParameters.PageNumber - 1) * employeeParameters.PageSize) .Take(employeeParameters.PageSize) .ToListAsync();

Okay, the easiest way to explain this is by example.

Say we need to get the results for the third page of our website, counting 20 as the number of results we want. That would mean we want to skip the first ((3 – 1) * 20) = 40 results, then take the next 20 and return them to the caller.

Does that make sense?
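The same arithmetic can be verified with plain LINQ over an in-memory sequence, using the numbers from the example above:

```csharp
using System;
using System.Linq;

// 100 items, page 3, page size 20: skip (3 - 1) * 20 = 40, take 20.
var items = Enumerable.Range(1, 100);
int pageNumber = 3, pageSize = 20;

var page = items
    .Skip((pageNumber - 1) * pageSize)
    .Take(pageSize)
    .ToList();

Console.WriteLine($"{page.First()}..{page.Last()} ({page.Count} items)");
// 41..60 (20 items)
```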

Since we call this repository method in our service layer, we have to modify it as well.

So, let’s start with the IEmployeeService modification:

public interface IEmployeeService { Task<IEnumerable<EmployeeDto>> GetEmployeesAsync(Guid companyId, EmployeeParameters employeeParameters, bool trackChanges); ... }

In this interface, we only have to modify the GetEmployeesAsync method by adding a new parameter.

After that, let’s modify the EmployeeService class:

public async Task<IEnumerable<EmployeeDto>> GetEmployeesAsync(Guid companyId, EmployeeParameters employeeParameters, bool trackChanges) { await CheckIfCompanyExists(companyId, trackChanges); var employeesFromDb = await _repository.Employee .GetEmployeesAsync(companyId, employeeParameters, trackChanges); var employeesDto = _mapper.Map<IEnumerable<EmployeeDto>>(employeesFromDb); return employeesDto; }

Nothing too complicated here. We just accept an additional parameter and pass it to the repository method.

Finally, we have to modify the GetEmployeesForCompany action and fix that error by adding another argument to the GetEmployeesAsync method call:

[HttpGet] public async Task<IActionResult> GetEmployeesForCompany(Guid companyId, [FromQuery] EmployeeParameters employeeParameters) { var employees = await _service.EmployeeService.GetEmployeesAsync(companyId, employeeParameters, trackChanges: false); return Ok(employees); }

16.3 Concrete Query

Before we continue, we should create additional employees for the company with the id: C9D4C053-49B6-410C-BC78-2D54A9991870. We are doing this because we have only a small number of employees per company and we need more of them for our example. You can use a predefined request in Part16 in Postman, and just change the request body with the following objects:‌

{"name": "Mihael Worth","age": 30,"position": "Marketing expert"} {"name": "John Spike","age": 32,"position": "Marketing expert II"} {"name": "Nina Hawk","age": 26,"position": "Marketing expert II"}
{"name": "Mihael Fins","age": 30,"position": "Marketing expert" } {"name": "Martha Grown","age": 35, "position": "Marketing expert II"} {"name": "Kirk Metha","age": 30,"position": "Marketing expert" }

Now we should have eight employees for this company, and we can try a request like this:

https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees?pageNumber=2&pageSize=2

So, we request page two with two employees:


[Figure: Postman response showing the second page with two employees]

If that’s what you got, you’re on the right track.

We can check our result in the database:

[Figure: the corresponding rows in the database]

And we can see that we have the correct data returned.

Now, what can we do to improve this solution?

16.4 Improving the Solution

Since we’re returning just a subset of results to the caller, we might as‌ well have a PagedList instead of List.

PagedList will inherit from the List class and will add some more to it. We can also move the skip/take logic to the PagedList since it makes more sense.

So, let’s first create a new MetaData class in the Shared/RequestFeatures folder:

public class MetaData { public int CurrentPage { get; set; } public int TotalPages { get; set; } public int PageSize { get; set; } public int TotalCount { get; set; } public bool HasPrevious => CurrentPage > 1; public bool HasNext => CurrentPage < TotalPages; }

Then, we are going to implement the PagedList class in the same folder:

public class PagedList<T> : List<T> { public MetaData MetaData { get; set; } public PagedList(List<T> items, int count, int pageNumber, int pageSize) { MetaData = new MetaData { TotalCount = count, PageSize = pageSize, CurrentPage = pageNumber, TotalPages = (int)Math.Ceiling(count / (double)pageSize) }; AddRange(items); } public static PagedList<T> ToPagedList(IEnumerable<T> source, int pageNumber, int pageSize) { var count = source.Count(); var items = source.Skip((pageNumber - 1) * pageSize) .Take(pageSize).ToList(); return new PagedList<T>(items, count, pageNumber, pageSize); } }

As you can see, we’ve transferred the skip/take logic to the static method inside of the PagedList class. And in the MetaData class, we’ve added a few more properties that will come in handy as metadata for our response.

HasPrevious is true if the CurrentPage is larger than 1, and HasNext is true if the CurrentPage is smaller than the total number of pages. TotalPages is calculated by dividing the number of items by the page size and rounding the result up, since a page needs to exist even if there is only one item on it.
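We can check that math with the numbers from our data set: eight employees, a page size of three, and the caller currently on page two:

```csharp
using System;

// Same calculations the PagedList constructor and MetaData properties perform.
int totalCount = 8, pageSize = 3, currentPage = 2;

int totalPages = (int)Math.Ceiling(totalCount / (double)pageSize);
bool hasPrevious = currentPage > 1;
bool hasNext = currentPage < totalPages;

Console.WriteLine($"TotalPages: {totalPages}, HasPrevious: {hasPrevious}, HasNext: {hasNext}");
// TotalPages: 3, HasPrevious: True, HasNext: True
```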

Now that we’ve cleared that up, let’s change our EmployeeRepository and EmployeesController accordingly.

Let’s start with the interface modification:

Task<PagedList<Employee>> GetEmployeesAsync(Guid companyId, EmployeeParameters employeeParameters, bool trackChanges);

Then, let’s change the repository class:

public async Task<PagedList<Employee>> GetEmployeesAsync(Guid companyId, EmployeeParameters employeeParameters, bool trackChanges) { var employees = await FindByCondition(e => e.CompanyId.Equals(companyId), trackChanges) .OrderBy(e => e.Name) .ToListAsync(); return PagedList<Employee> .ToPagedList(employees, employeeParameters.PageNumber, employeeParameters.PageSize); }

After that, we are going to modify the IEmployeeService interface:

Task<(IEnumerable<EmployeeDto> employees, MetaData metaData)> GetEmployeesAsync(Guid companyId, EmployeeParameters employeeParameters, bool trackChanges);

Now our method returns a Tuple containing two fields – employees and metadata.

So, let’s implement that in the EmployeeService class:

public async Task<(IEnumerable<EmployeeDto> employees, MetaData metaData)> GetEmployeesAsync (Guid companyId, EmployeeParameters employeeParameters, bool trackChanges) { await CheckIfCompanyExists(companyId, trackChanges); var employeesWithMetaData = await _repository.Employee .GetEmployeesAsync(companyId, employeeParameters, trackChanges); var employeesDto = _mapper.Map<IEnumerable<EmployeeDto>>(employeesWithMetaData); return (employees: employeesDto, metaData: employeesWithMetaData.MetaData); }

We change the method signature and the name of the employeesFromDb variable to employeesWithMetaData since this name is now more suitable. After the mapping action, we construct a Tuple and return it to the caller.
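For readers less familiar with named tuples, here is a small standalone sketch (with a hypothetical method and simplified element types) of how a caller reads the parts by name, just as the controller does with pagedResult.employees and pagedResult.metaData:

```csharp
using System;
using System.Collections.Generic;

// A hypothetical method returning a named tuple, echoing the shape of
// GetEmployeesAsync (here with an int count instead of a MetaData object).
static (IEnumerable<string> employees, int totalCount) GetPage() =>
    (employees: new[] { "Jana", "Sam" }, totalCount: 8);

var pagedResult = GetPage();

// The caller accesses each part by its tuple element name.
Console.WriteLine($"{string.Join(", ", pagedResult.employees)} of {pagedResult.totalCount}");
// Jana, Sam of 8
```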

Finally, let’s modify the controller:

[HttpGet] public async Task<IActionResult> GetEmployeesForCompany(Guid companyId, [FromQuery] EmployeeParameters employeeParameters) { var pagedResult = await _service.EmployeeService.GetEmployeesAsync(companyId, employeeParameters, trackChanges: false); Response.Headers.Add("X-Pagination", JsonSerializer.Serialize(pagedResult.metaData)); return Ok(pagedResult.employees); }

The new thing in this action is that we modify the response header and add our metadata as the X-Pagination header. For this, we need the System.Text.Json namespace.
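What actually lands in the X-Pagination header is just the JSON-serialized metadata; the sketch below reproduces that with an anonymous object standing in for our MetaData instance:

```csharp
using System;
using System.Text.Json;

// Stand-in for the MetaData object attached to the PagedList.
var metaData = new { CurrentPage = 2, TotalPages = 4, PageSize = 2, TotalCount = 8 };

// The same call the controller uses before adding the header.
var header = JsonSerializer.Serialize(metaData);

Console.WriteLine(header);
// {"CurrentPage":2,"TotalPages":4,"PageSize":2,"TotalCount":8}
```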

Now, if we send the same request we did earlier, we are going to get the same result:

https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees?pageNumber=2&pageSize=2

[Figure: Postman response for the paged request]

But now we have some additional useful information in the X-Pagination response header:

[Figure: the X-Pagination response header with the serialized metadata]

As you can see, all of our metadata is here. We can use this information when building any kind of frontend pagination to our benefit. You can play around with different requests to see how it works in other scenarios.

We could also use this data to generate links to the previous and next pages on the backend, but that is part of HATEOAS and is out of the scope of this chapter.

16.4.1 Additional Advice‌

This solution works great with a small amount of data, but with bigger tables with millions of rows, we can improve it by modifying the GetEmployeesAsync repository method:

public async Task<PagedList<Employee>> GetEmployeesAsync(Guid companyId,
    EmployeeParameters employeeParameters, bool trackChanges)
{
    var employees = await FindByCondition(e => e.CompanyId.Equals(companyId), trackChanges)
        .OrderBy(e => e.Name)
        .Skip((employeeParameters.PageNumber - 1) * employeeParameters.PageSize)
        .Take(employeeParameters.PageSize)
        .ToListAsync();

    var count = await FindByCondition(e => e.CompanyId.Equals(companyId), trackChanges)
        .CountAsync();

    return new PagedList<Employee>(employees, count,
        employeeParameters.PageNumber, employeeParameters.PageSize);
}

Even though we make an additional call to the database with the CountAsync method, this solution was tested on millions of rows and was much faster than the previous one. Because our table has only a few rows, we will continue using the previous solution, but feel free to switch to this one if you want.
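As a side note, the Skip/Take arithmetic used in the repository method above maps a page request to row offsets like this:

```csharp
using System;

public static class Demo
{
    public static void Main()
    {
        // For pageNumber = 2 and pageSize = 2:
        int pageNumber = 2;
        int pageSize = 2;

        int skip = (pageNumber - 1) * pageSize;   // skip the first 2 rows
        int take = pageSize;                      // take the next 2 rows (rows 3 and 4)

        Console.WriteLine($"skip {skip}, take {take}");   // skip 2, take 2
    }
}
```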

Also, to enable the client application to read the new X-Pagination header that we’ve added in our action, we have to modify the CORS configuration:

public static void ConfigureCors(this IServiceCollection services) =>
    services.AddCors(options =>
    {
        options.AddPolicy("CorsPolicy", builder =>
            builder.AllowAnyOrigin()
            .AllowAnyMethod()
            .AllowAnyHeader()
            .WithExposedHeaders("X-Pagination"));
    });

17 FILTERING

In this chapter, we are going to cover filtering in ASP.NET Core Web API. We’ll learn what filtering is, how it’s different from searching, and how to implement it in a real-world project.‌

While not as critical as paging, filtering is still an important part of a flexible REST API, so we need to know how to implement it in our API projects.

Filtering helps us get the exact result set we want instead of all the results without any criteria.

17.1 What is Filtering?

Filtering is a mechanism to retrieve results by providing some kind of criterion. We can write many kinds of filters to get results by type of class property, value range, date range, or anything else.‌

When implementing filtering, you are always restricted to a predefined set of options you can set in your request. For example, you can send a date value when requesting employees, but if the API doesn't support filtering by date, you won't have much success.

On the front end, filtering is usually implemented as checkboxes, radio buttons, or dropdowns. This kind of implementation limits you to only those options that are available to create a valid filter.

Take for example a car-selling website. When filtering the cars you want, you would ideally want to select:

• Car manufacturer as a category from a list or a dropdown

• Car model from a list or a dropdown

• Is it new or used with radio buttons

• The city where the seller is as a dropdown

• The price of the car is an input field (numeric)

• ......

You get the point. So, the request would look something like this:

https://bestcarswebsite.com/sale?manufacturer=ford&model=expedition&state=used&city=washington&price_from=30000&price_to=50000

Or even like this:
https://bestcarswebsite.com/sale/filter?data[manufacturer]=ford&[model]=expedition&[state]=used&[city]=washington&[price_from]=30000&[price_to]=50000

Now that we know what filtering is, let’s see how it’s different from searching.

17.2 How is Filtering Different from Searching?

When searching for results, we usually have only one input: the one we use to search for anything within a website.

So in other words, you send a string to the API and the API is responsible for using that string to find any results that match it.

On our car website, we would use the search field to find the “Ford Expedition” car model and we would get all the results that match the car name “Ford Expedition.” Thus, this search would return every “Ford Expedition” car available.

We can also improve the search by implementing search terms like Google does, for example. If the user enters Ford Expedition without quotes in the search field, we would return everything relevant to either Ford or Expedition. But if the user puts quotes around it, we would search our database for the entire term “Ford Expedition”.
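That Google-style behavior is not part of this book's project; a minimal sketch of the idea, using a hypothetical GetSearchTerms helper, could look like this:

```csharp
using System;

public static class SearchTermParser
{
    // Hypothetical helper (not from the book's project): a quoted input is
    // treated as one exact phrase, an unquoted input is split into terms.
    public static string[] GetSearchTerms(string input)
    {
        input = input.Trim();

        if (input.Length >= 2 && input.StartsWith('"') && input.EndsWith('"'))
            return new[] { input[1..^1] };   // search the whole phrase

        return input.Split(' ', StringSplitOptions.RemoveEmptyEntries);
    }
}

public static class Demo
{
    public static void Main()
    {
        Console.WriteLine(SearchTermParser.GetSearchTerms("\"Ford Expedition\"").Length);  // 1
        Console.WriteLine(SearchTermParser.GetSearchTerms("Ford Expedition").Length);      // 2
    }
}
```

Each returned term could then be fed to a separate Where clause, with quoted input producing a single exact match.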

It makes a better user experience. Example:
https://bestcarswebsite.com/sale/search?name=fordfocus

Using search doesn’t mean we can’t use filters with it. It makes perfect sense to use filtering and searching together, so we need to take that into account when writing our source code.

But enough theory.

Let’s implement some filters.

17.3 How to Implement Filtering in ASP.NET Core Web API

We have the Age property in our Employee class. Let’s say we want to find out which employees are between the ages of 26 and 29. We also want to be able to enter just the starting age — and not the ending one — and vice versa.‌

We would need a query like this one:
https://localhost:5001/api/companies/companyId/employees?minAge=26&maxAge=29

But, we want to be able to do this too:
https://localhost:5001/api/companies/companyId/employees?minAge=26

Or like this:
https://localhost:5001/api/companies/companyId/employees?maxAge=29

Okay, we have a specification. Let’s see how to implement it.

We’ve already implemented paging in our controller, so we have the necessary infrastructure to extend it with the filtering functionality. We’ve used the EmployeeParameters class, which inherits from the RequestParameters class, to define the query parameters for our paging request.

Let’s extend the EmployeeParameters class:

public class EmployeeParameters : RequestParameters
{
    public uint MinAge { get; set; }
    public uint MaxAge { get; set; } = int.MaxValue;

    public bool ValidAgeRange => MaxAge > MinAge;
}

We’ve added two unsigned int properties (to avoid negative age values): MinAge and MaxAge.

Since the default uint value is 0, we don’t need to explicitly define it; 0 is okay in this case. For MaxAge, we want to set it to the max int value. If we don’t get it through the query params, we have something to work with. It doesn’t matter if someone sets the age to 300 through the params; it won’t affect the results.

We’ve also added a simple validation property – ValidAgeRange. Its purpose is to tell us if the max-age is indeed greater than the min-age. If it’s not, we want to let the API user know that he/she is doing something wrong.
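A quick check of how these defaults and the validation property behave, using the EmployeeParameters class as defined above (the empty RequestParameters here is a trimmed stand-in for the book's base class):

```csharp
using System;

public class RequestParameters { }   // trimmed stand-in for the book's base class

public class EmployeeParameters : RequestParameters
{
    public uint MinAge { get; set; }
    public uint MaxAge { get; set; } = int.MaxValue;
    public bool ValidAgeRange => MaxAge > MinAge;
}

public static class Demo
{
    public static void Main()
    {
        var p = new EmployeeParameters();
        Console.WriteLine(p.MinAge);         // 0 (uint default)
        Console.WriteLine(p.ValidAgeRange);  // True

        p.MinAge = 30;
        p.MaxAge = 26;
        Console.WriteLine(p.ValidAgeRange);  // False -> the API should answer with 400 Bad Request
    }
}
```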

Okay, now that we have our parameters ready, we can modify the GetEmployeesAsync service method by adding a validation check as a first statement:

public async Task<(IEnumerable<EmployeeDto> employees, MetaData metaData)> GetEmployeesAsync
    (Guid companyId, EmployeeParameters employeeParameters, bool trackChanges)
{
    if (!employeeParameters.ValidAgeRange)
        throw new MaxAgeRangeBadRequestException();

    await CheckIfCompanyExists(companyId, trackChanges);

    var employeesWithMetaData = await _repository.Employee
        .GetEmployeesAsync(companyId, employeeParameters, trackChanges);

    var employeesDto = _mapper.Map<IEnumerable<EmployeeDto>>(employeesWithMetaData);

    return (employees: employeesDto, metaData: employeesWithMetaData.MetaData);
}

We’ve added our validation check and a BadRequest response if the validation fails.

But we don’t have this custom exception class yet, so we have to create it in the Entities/Exceptions folder:

public sealed class MaxAgeRangeBadRequestException : BadRequestException
{
    public MaxAgeRangeBadRequestException()
        : base("Max age can't be less than min age.")
    {
    }
}

That should do it.

After the service class modification and creation of our custom exception class, let’s get to the implementation in our EmployeeRepository class:

public async Task<PagedList<Employee>> GetEmployeesAsync(Guid companyId,
    EmployeeParameters employeeParameters, bool trackChanges)
{
    var employees = await FindByCondition(e => e.CompanyId.Equals(companyId) &&
        (e.Age >= employeeParameters.MinAge && e.Age <= employeeParameters.MaxAge),
        trackChanges)
        .OrderBy(e => e.Name)
        .ToListAsync();

    return PagedList<Employee>
        .ToPagedList(employees, employeeParameters.PageNumber, employeeParameters.PageSize);
}

Actually, at this point, the implementation is rather simple too.

We are using the FindByCondition method to find all the employees with an Age between MinAge and MaxAge.

Let’s try it out.

17.4 Sending and Testing a Query

Let’s send a first request with only a MinAge parameter:‌
https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees?minAge=32

Next, let’s send one with only a MaxAge parameter:
https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees?maxAge=26

After that, we can combine those two:
https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees?minAge=26&maxAge=30

And finally, we can test the filter with the paging:
https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees?pageNumber=1&pageSize=4&minAge=32&maxAge=35

Excellent. The filter is implemented and we can move on to the searching part.

18 SEARCHING

In this chapter, we’re going to tackle the topic of searching in ASP.NET Core Web API. Searching is one of those functionalities that can make or break your API, and the level of difficulty when implementing it can vary greatly depending on your specifications.‌

If you need to implement a basic searching feature where you are just trying to search one field in the database, you can easily implement it yourself. On the other hand, if it’s a multi-column, multi-term search, you would probably be better off with one of the great search libraries out there, such as Lucene.NET, which are already optimized and proven.

18.1 What is Searching?

There is no doubt in our minds that you’ve seen a search field on almost every website on the internet. It’s easy to find something when we are familiar with the website structure or when a website is not that large.‌

But if we want to find the most relevant topic for us, we don’t know what we’re going to find, or maybe we’re first-time visitors to a large website, we’re probably going to use a search field.

In our simple project, one use case of a search would be to find an employee by name.

Let’s see how we can achieve that.

18.2 Implementing Searching in Our Application

Since we’re going to implement the most basic search in our project, the implementation won’t be complex at all. We have all we need infrastructure-wise since we already covered paging and filtering. We’ll just extend our implementation a bit.‌

What we want to achieve is something like this:

https://localhost:5001/api/companies/companyId/employees?searchTerm=MihaelFins

This should return just one result: Mihael Fins. Of course, the search needs to work together with filtering and paging, so that’s one of the things we’ll need to keep in mind too.

Like we did with filtering, we’re going to extend our EmployeeParameters class first since we’re going to send our search query as a query parameter:

public class EmployeeParameters : RequestParameters
{
    public uint MinAge { get; set; }
    public uint MaxAge { get; set; } = int.MaxValue;

    public bool ValidAgeRange => MaxAge > MinAge;

    public string? SearchTerm { get; set; }
}

Simple as that.

Now we can write queries with searchTerm="name" in them.

The next thing we need to do is actually implement the search functionality in our EmployeeRepository class:

public async Task<PagedList<Employee>> GetEmployeesAsync(Guid companyId,
    EmployeeParameters employeeParameters, bool trackChanges)
{
    var employees = await FindByCondition(e => e.CompanyId.Equals(companyId), trackChanges)
        .FilterEmployees(employeeParameters.MinAge, employeeParameters.MaxAge)
        .Search(employeeParameters.SearchTerm)
        .OrderBy(e => e.Name)
        .ToListAsync();

    return PagedList<Employee>
        .ToPagedList(employees, employeeParameters.PageNumber, employeeParameters.PageSize);
}

We have made two changes here. The first is modifying the filter logic and the second is adding the Search method for the searching functionality.

But these methods (FilterEmployees and Search) are not created yet, so let’s create them.

In the Repository project, we are going to create the new folder Extensions and inside of that folder the new class RepositoryEmployeeExtensions:

public static class RepositoryEmployeeExtensions
{
    public static IQueryable<Employee> FilterEmployees(this IQueryable<Employee> employees,
        uint minAge, uint maxAge) =>
        employees.Where(e => (e.Age >= minAge && e.Age <= maxAge));

    public static IQueryable<Employee> Search(this IQueryable<Employee> employees,
        string searchTerm)
    {
        if (string.IsNullOrWhiteSpace(searchTerm))
            return employees;

        var lowerCaseTerm = searchTerm.Trim().ToLower();

        return employees.Where(e => e.Name.ToLower().Contains(lowerCaseTerm));
    }
}

So, we are just creating our extension methods to update our query until it is executed in the repository. Now, all we have to do is add a using directive to the EmployeeRepository class:

using Repository.Extensions;

That’s it for our implementation. As you can see, it isn’t that hard since it is the most basic search and we already had an infrastructure set.
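Because the extension methods operate on IQueryable<Employee>, they can also be exercised against an in-memory collection, which is a convenient way to unit test them without a database. The trimmed Employee class below is an assumption for illustration; the real entity has more properties:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Employee   // trimmed stand-in for the book's entity
{
    public string Name { get; set; } = string.Empty;
    public int Age { get; set; }
}

public static class RepositoryEmployeeExtensions
{
    public static IQueryable<Employee> FilterEmployees(this IQueryable<Employee> employees,
        uint minAge, uint maxAge) =>
        employees.Where(e => e.Age >= minAge && e.Age <= maxAge);

    public static IQueryable<Employee> Search(this IQueryable<Employee> employees,
        string searchTerm)
    {
        if (string.IsNullOrWhiteSpace(searchTerm))
            return employees;

        var lowerCaseTerm = searchTerm.Trim().ToLower();
        return employees.Where(e => e.Name.ToLower().Contains(lowerCaseTerm));
    }
}

public static class Demo
{
    public static void Main()
    {
        var employees = new List<Employee>
        {
            new() { Name = "Mihael Fins", Age = 30 },
            new() { Name = "Jana McLeaf", Age = 26 },
        }.AsQueryable();

        // Chain the extensions exactly as the repository method does.
        var result = employees.FilterEmployees(25, 28).Search("jana").ToList();
        Console.WriteLine(result.Single().Name);   // Jana McLeaf
    }
}
```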

18.3 Testing Our Implementation

Let’s send a first request with the value Mihael Fins for the search term:‌

https://localhost:5001/api/companies/c9d4c053-49b6-410c-bc78-2d54a9991870/employees?searchTerm=MihaelFins

This is working great.

Now, let’s find all employees that contain the letters “ae”:
https://localhost:5001/api/companies/c9d4c053-49b6-410c-bc78-2d54a9991870/employees?searchTerm=ae

Great. One more request with the paging and filtering:

https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees?pageNumber=1&pageSize=4&minAge=32&maxAge=35&searchTerm=MA

And this works as well.

That’s it! We’ve successfully implemented and tested our search functionality.

If we check the Headers tab for each request, we will find a valid X-Pagination header as well.

19 SORTING

In this chapter, we’re going to talk about sorting in ASP.NET Core Web API. Sorting is a commonly used mechanism that every API should implement. Implementing it in ASP.NET Core is not difficult due to the flexibility of LINQ and good integration with EF Core.‌

So, let’s talk a bit about sorting.

19.1 What is Sorting?

Sorting, in this case, refers to ordering our results in a preferred way using our query string parameters. We are not talking about sorting algorithms nor are we going into the how’s of implementing a sorting algorithm.‌

What we’re interested in, however, is how do we make our API sort our results the way we want it to.

Let’s say we want our API to sort employees by their name in ascending order, and then by their age.

To do that, our API call needs to look something like this:

https://localhost:5001/api/companies/companyId/employees?orderBy=name,age desc

Our API needs to consider all the parameters and sort our results accordingly. In our case, this means sorting results by their name; then, if there are employees with the same name, sorting them by the age property.

So, these are our employees for the IT_Solutions Ltd company:

For the sake of demonstrating this example (sorting by name and then by age), we are going to add one more Jana McLeaf to our database with the age of 27. You can add whatever you want to test the results:

https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees

Great, now we have the required data to test our functionality properly.

And of course, like with all other functionalities we have implemented so far (paging, filtering, and searching), we need to implement this to work well with everything else. We should be able to get the paginated, filtered, and sorted data, for example.

Let’s see one way to go around implementing this.

19.2 How to Implement Sorting in ASP.NET Core Web API

As with everything else so far, first, we need to extend our RequestParameters class to be able to send requests with the orderBy clause in them:‌

public class RequestParameters
{
    const int maxPageSize = 50;

    public int PageNumber { get; set; } = 1;

    private int _pageSize = 10;
    public int PageSize
    {
        get { return _pageSize; }
        set { _pageSize = (value > maxPageSize) ? maxPageSize : value; }
    }

    public string? OrderBy { get; set; }
}

As you can see, the only thing we’ve added is the OrderBy property and we added it to the RequestParameters class because we can reuse it for other entities. We want to sort our results by name, even if it hasn’t been stated explicitly in the request.

That said, let’s modify the EmployeeParameters class to enable the default sorting condition for Employee if none was stated:

public class EmployeeParameters : RequestParameters
{
    public EmployeeParameters() => OrderBy = "name";

    public uint MinAge { get; set; }
    public uint MaxAge { get; set; } = int.MaxValue;

    public bool ValidAgeRange => MaxAge > MinAge;

    public string? SearchTerm { get; set; }
}

Next, we’re going to dive right into the implementation of our sorting mechanism, or rather, our ordering mechanism.

One thing to note is that we’ll be using the System.Linq.Dynamic.Core NuGet package to dynamically create our OrderBy query on the fly. So, feel free to install it in the Repository project and add a using directive in the RepositoryEmployeeExtensions class:

using System.Linq.Dynamic.Core;

Now, we can add the new extension method Sort in our RepositoryEmployeeExtensions class:

public static IQueryable<Employee> Sort(this IQueryable<Employee> employees,
    string orderByQueryString)
{
    if (string.IsNullOrWhiteSpace(orderByQueryString))
        return employees.OrderBy(e => e.Name);

    var orderParams = orderByQueryString.Trim().Split(',');
    var propertyInfos = typeof(Employee).GetProperties(BindingFlags.Public |
        BindingFlags.Instance);
    var orderQueryBuilder = new StringBuilder();

    foreach (var param in orderParams)
    {
        if (string.IsNullOrWhiteSpace(param))
            continue;

        var propertyFromQueryName = param.Split(" ")[0];
        var objectProperty = propertyInfos.FirstOrDefault(pi =>
            pi.Name.Equals(propertyFromQueryName, StringComparison.InvariantCultureIgnoreCase));

        if (objectProperty == null)
            continue;

        var direction = param.EndsWith(" desc") ? "descending" : "ascending";

        orderQueryBuilder.Append($"{objectProperty.Name.ToString()} {direction}, ");
    }

    var orderQuery = orderQueryBuilder.ToString().TrimEnd(',', ' ');

    if (string.IsNullOrWhiteSpace(orderQuery))
        return employees.OrderBy(e => e.Name);

    return employees.OrderBy(orderQuery);
}

Okay, there are a lot of things going on here, so let’s take it step by step and see what exactly we've done.

19.3 Implementation – Step by Step

First, let's start with the method definition. It has two arguments — one for the list of employees as IQueryable and the other for the ordering query string. If we send a request like this one:
https://localhost:5001/api/companies/companyId/employees?orderBy=name,age desc

our orderByQueryString will be name,age desc.‌

We begin by executing a basic check on the orderByQueryString. If it is null or empty, we just return the same collection ordered by name.

if (string.IsNullOrWhiteSpace(orderByQueryString)) 
    return employees.OrderBy(e => e.Name);

Next, we are splitting our query string to get the individual fields:

var orderParams = orderByQueryString.Trim().Split(',');

We’re also using a bit of reflection to prepare the list of PropertyInfo objects that represent the properties of our Employee class. We need them to be able to check if the field received through the query string exists in the Employee class:

var propertyInfos = typeof(Employee).GetProperties(BindingFlags.Public | BindingFlags.Instance);

With that prepared, we can run through all the parameters and check for their existence:

if (string.IsNullOrWhiteSpace(param))
    continue;

var propertyFromQueryName = param.Split(" ")[0];
var objectProperty = propertyInfos.FirstOrDefault(pi =>
    pi.Name.Equals(propertyFromQueryName, StringComparison.InvariantCultureIgnoreCase));

If we don’t find such a property, we skip the step in the foreach loop and go to the next parameter in the list:

if (objectProperty == null) 
    continue;

If we do find the property, we additionally check whether our parameter ends with “desc”. We use that to decide how we should order our property:

var direction = param.EndsWith(" desc") ? "descending" : "ascending";

We use the StringBuilder to build our query with each loop:

orderQueryBuilder.Append($"{objectProperty.Name.ToString()} {direction}, ");

Now that we’ve looped through all the fields, we are just removing excess commas and doing one last check to see if our query indeed has something in it:

var orderQuery = orderQueryBuilder.ToString().TrimEnd(',', ' '); if (string.IsNullOrWhiteSpace(orderQuery)) return employees.OrderBy(e => e.Name);

Finally, we can order our query:

return employees.OrderBy(orderQuery);

At this point, the orderQuery variable should contain the “Name ascending, Age descending” string. That means it will order our results first by Name in ascending order, and then by Age in descending order.

The standard LINQ query for this would be:

employees.OrderBy(e => e.Name).ThenByDescending(o => o.Age);

This is a neat little trick to form a query when you don’t know in advance how you should sort.
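The string built above is handed straight to the OrderBy(string) extension that the System.Linq.Dynamic.Core package provides for IQueryable. A self-contained sketch, reusing the same trimmed Employee stand-in as an assumption:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Dynamic.Core;   // NuGet: System.Linq.Dynamic.Core

public class Employee   // trimmed stand-in for the book's entity
{
    public string Name { get; set; } = string.Empty;
    public int Age { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        var employees = new List<Employee>
        {
            new() { Name = "Jana McLeaf", Age = 26 },
            new() { Name = "Jana McLeaf", Age = 30 },
            new() { Name = "Mihael Fins", Age = 30 },
        }.AsQueryable();

        // The ordering string can be composed at runtime, just like in Sort above.
        var sorted = employees.OrderBy("Name ascending, Age descending").ToList();

        Console.WriteLine($"{sorted[0].Name} {sorted[0].Age}");   // Jana McLeaf 30
    }
}
```

The same string syntax is translated by EF Core into an ORDER BY clause when the query runs against the database.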

Once we have done this, all we have to do is to modify the GetEmployeesAsync repository method:

public async Task<PagedList<Employee>> GetEmployeesAsync(Guid companyId,
    EmployeeParameters employeeParameters, bool trackChanges)
{
    var employees = await FindByCondition(e => e.CompanyId.Equals(companyId), trackChanges)
        .FilterEmployees(employeeParameters.MinAge, employeeParameters.MaxAge)
        .Search(employeeParameters.SearchTerm)
        .Sort(employeeParameters.OrderBy)
        .ToListAsync();

    return PagedList<Employee>
        .ToPagedList(employees, employeeParameters.PageNumber, employeeParameters.PageSize);
}

And that’s it! We can test this functionality now.

19.4 Testing Our Implementation

First, let’s try out the query we’ve been using as an example:‌

https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees?orderBy=name,age desc

And this is the result:

We can see that this list is sorted by Name in ascending order. Since we have two Janas, they were sorted by Age in descending order.

We have prepared additional requests which you can use to test this functionality with Postman. So, feel free to do it.

19.5 Improving the Sorting Functionality

Right now, sorting only works with the Employee entity, but what about the Company? It is obvious that we have to change something in our implementation if we don’t want to repeat our code while implementing sorting for the Company entity.‌

That said, let’s modify the Sort extension method:

public static IQueryable<Employee> Sort(this IQueryable<Employee> employees,
    string orderByQueryString)
{
    if (string.IsNullOrWhiteSpace(orderByQueryString))
        return employees.OrderBy(e => e.Name);

    var orderQuery = OrderQueryBuilder.CreateOrderQuery<Employee>(orderByQueryString);

    if (string.IsNullOrWhiteSpace(orderQuery))
        return employees.OrderBy(e => e.Name);

    return employees.OrderBy(orderQuery);
}

So, we are extracting the reusable logic into the CreateOrderQuery method. But of course, we have to create that method first.

Let’s create a Utility folder in the Extensions folder with the new class OrderQueryBuilder:

Now, let’s modify that class:

public static class OrderQueryBuilder
{
    public static string CreateOrderQuery<T>(string orderByQueryString)
    {
        var orderParams = orderByQueryString.Trim().Split(',');
        var propertyInfos = typeof(T).GetProperties(BindingFlags.Public |
            BindingFlags.Instance);
        var orderQueryBuilder = new StringBuilder();

        foreach (var param in orderParams)
        {
            if (string.IsNullOrWhiteSpace(param))
                continue;

            var propertyFromQueryName = param.Split(" ")[0];
            var objectProperty = propertyInfos.FirstOrDefault(pi =>
                pi.Name.Equals(propertyFromQueryName, StringComparison.InvariantCultureIgnoreCase));

            if (objectProperty == null)
                continue;

            var direction = param.EndsWith(" desc") ? "descending" : "ascending";

            orderQueryBuilder.Append($"{objectProperty.Name.ToString()} {direction}, ");
        }

        var orderQuery = orderQueryBuilder.ToString().TrimEnd(',', ' ');

        return orderQuery;
    }
}

And there we go. Not too many changes, but we did a good job here. You can test this solution with the prepared requests in Postman, and you'll get the same results as before.

But now, this functionality is reusable.

20 DATA SHAPING

In this chapter, we are going to talk about a neat concept called data shaping and how to implement it in ASP.NET Core Web API. To achieve that, we are going to use similar tools to the previous section. Data shaping is not something that every API needs, but it can be very useful in some cases.‌

Let’s start by learning what data shaping is exactly.

20.1 What is Data Shaping?

Data shaping is a great way to reduce the amount of traffic sent from the API to the client. It enables the consumer of the API to select (shape) the data by choosing the fields through the query string.‌

What this means is something like:
https://localhost:5001/api/companies/companyId/employees?fields=name,age

By giving the consumer a way to select just the fields it needs, we can potentially reduce the stress on the API. On the other hand, this is not something every API needs, so we need to think carefully about whether we should implement it, because the implementation relies on reflection.

And we know for a fact that reflection takes its toll and slows our application down.

Finally, as always, data shaping should work well together with the concepts we’ve covered so far – paging, filtering, searching, and sorting.

First, we are going to implement an employee-specific solution to data shaping. Then we are going to make it more generic, so it can be used by any entity or any API.

Let’s get to work.

20.2 How to Implement Data Shaping

First things first, we need to extend our RequestParameters class since we are going to add a new feature to our query string and we want it to be available for any entity:‌

public string? Fields { get; set; }

We’ve added the Fields property and now we can use fields as a query string parameter.

Let’s continue by creating a new interface in the Contracts project:

public interface IDataShaper<T>
{
    IEnumerable<ExpandoObject> ShapeData(IEnumerable<T> entities, string fieldsString);
    ExpandoObject ShapeData(T entity, string fieldsString);
}

The IDataShaper defines two methods that should be implemented — one for the single entity and one for the collection of entities. Both are named ShapeData, but they have different signatures.

Notice how we use the ExpandoObject from System.Dynamic namespace as a return type. We need to do that to shape our data the way we want it.

To implement this interface, we are going to create a new DataShaping folder in the Service project and add a new DataShaper class:

public class DataShaper<T> : IDataShaper<T> where T : class
{
    public PropertyInfo[] Properties { get; set; }

    public DataShaper()
    {
        Properties = typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance);
    }

    public IEnumerable<ExpandoObject> ShapeData(IEnumerable<T> entities, string fieldsString)
    {
        var requiredProperties = GetRequiredProperties(fieldsString);

        return FetchData(entities, requiredProperties);
    }

    public ExpandoObject ShapeData(T entity, string fieldsString)
    {
        var requiredProperties = GetRequiredProperties(fieldsString);

        return FetchDataForEntity(entity, requiredProperties);
    }

    private IEnumerable<PropertyInfo> GetRequiredProperties(string fieldsString)
    {
        var requiredProperties = new List<PropertyInfo>();

        if (!string.IsNullOrWhiteSpace(fieldsString))
        {
            var fields = fieldsString.Split(',', StringSplitOptions.RemoveEmptyEntries);

            foreach (var field in fields)
            {
                var property = Properties
                    .FirstOrDefault(pi => pi.Name.Equals(field.Trim(),
                        StringComparison.InvariantCultureIgnoreCase));

                if (property == null)
                    continue;

                requiredProperties.Add(property);
            }
        }
        else
        {
            requiredProperties = Properties.ToList();
        }

        return requiredProperties;
    }

    private IEnumerable<ExpandoObject> FetchData(IEnumerable<T> entities,
        IEnumerable<PropertyInfo> requiredProperties)
    {
        var shapedData = new List<ExpandoObject>();

        foreach (var entity in entities)
        {
            var shapedObject = FetchDataForEntity(entity, requiredProperties);
            shapedData.Add(shapedObject);
        }

        return shapedData;
    }

    private ExpandoObject FetchDataForEntity(T entity,
        IEnumerable<PropertyInfo> requiredProperties)
    {
        var shapedObject = new ExpandoObject();

        foreach (var property in requiredProperties)
        {
            var objectPropertyValue = property.GetValue(entity);
            shapedObject.TryAdd(property.Name, objectPropertyValue);
        }

        return shapedObject;
    }
}

We need these namespaces to be included as well:

using Contracts; 
using System.Dynamic; 
using System.Reflection;

There is quite a lot of code in our class, so let’s break it down.

20.3 Step-by-Step Implementation

We have one public property in this class – Properties. It’s an array of PropertyInfo objects that we’re going to pull out of the input type, whatever it is — Company or Employee in our case:

public PropertyInfo[] Properties { get; set; }

public DataShaper()
{
    Properties = typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance);
}

So, here it is. In the constructor, we get all the properties of an input class.

Next, we have the implementation of our two public ShapeData methods:

public IEnumerable<ExpandoObject> ShapeData(IEnumerable<T> entities, string fieldsString)
{
    var requiredProperties = GetRequiredProperties(fieldsString);

    return FetchData(entities, requiredProperties);
}

public ExpandoObject ShapeData(T entity, string fieldsString)
{
    var requiredProperties = GetRequiredProperties(fieldsString);

    return FetchDataForEntity(entity, requiredProperties);
}

Both methods rely on the GetRequiredProperties method to parse the input string that contains the fields we want to fetch.

The GetRequiredProperties method does the magic. It parses the input string and returns just the properties we need to return to the controller:

private IEnumerable<PropertyInfo> GetRequiredProperties(string fieldsString)
{
    var requiredProperties = new List<PropertyInfo>();

    if (!string.IsNullOrWhiteSpace(fieldsString))
    {
        var fields = fieldsString.Split(',', StringSplitOptions.RemoveEmptyEntries);

        foreach (var field in fields)
        {
            var property = Properties
                .FirstOrDefault(pi => pi.Name.Equals(field.Trim(),
                    StringComparison.InvariantCultureIgnoreCase));

            if (property == null)
                continue;

            requiredProperties.Add(property);
        }
    }
    else
    {
        requiredProperties = Properties.ToList();
    }

    return requiredProperties;
}

There’s nothing special about it. If the fieldsString is not empty, we split it and check if the fields match the properties in our entity. If they do, we add them to the list of required properties.

On the other hand, if the fieldsString is empty, all properties are required.

Now, FetchData and FetchDataForEntity are the private methods to extract the values from these required properties we’ve prepared.

The FetchDataForEntity method does it for a single entity:

private ExpandoObject FetchDataForEntity(T entity,
    IEnumerable<PropertyInfo> requiredProperties)
{
    var shapedObject = new ExpandoObject();

    foreach (var property in requiredProperties)
    {
        var objectPropertyValue = property.GetValue(entity);
        shapedObject.TryAdd(property.Name, objectPropertyValue);
    }

    return shapedObject;
}

Here, we loop through the requiredProperties parameter. Then, using a bit of reflection, we extract each value and add it to our ExpandoObject. ExpandoObject implements IDictionary<string, object>, so we can use the TryAdd method to add each property, using the property name as the key and the property value as the value in the dictionary.

This way, we dynamically add just the properties we need to our dynamic object.
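The dictionary nature of ExpandoObject is easy to see in isolation:

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

public static class Demo
{
    public static void Main()
    {
        dynamic shaped = new ExpandoObject();

        // ExpandoObject implements IDictionary<string, object?>, so the
        // TryAdd extension method works on it, keyed by property name.
        var asDictionary = (IDictionary<string, object?>)shaped;
        asDictionary.TryAdd("Name", "Mihael Fins");
        asDictionary.TryAdd("Age", 30);
        asDictionary.TryAdd("Name", "ignored");   // returns false, key already exists

        Console.WriteLine(shaped.Name);           // Mihael Fins
        Console.WriteLine(asDictionary.Count);    // 2
    }
}
```

Every entry added through the dictionary surface becomes a dynamic property of the object, which is exactly how the shaped results end up with only the requested fields.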

The FetchData method is just an implementation for multiple objects. It utilizes the FetchDataForEntity method we’ve just implemented:

private IEnumerable<ExpandoObject> FetchData(IEnumerable<T> entities, IEnumerable<PropertyInfo> requiredProperties)
{
    var shapedData = new List<ExpandoObject>();

    foreach (var entity in entities)
    {
        var shapedObject = FetchDataForEntity(entity, requiredProperties);
        shapedData.Add(shapedObject);
    }

    return shapedData;
}

To continue, let’s register the DataShaper class in the IServiceCollection in the Program class:

builder.Services.AddScoped<IDataShaper<EmployeeDto>, DataShaper<EmployeeDto>>();

During the service registration, we provide the type to work with.

Because we want to use the DataShaper class inside the service classes, we have to modify the constructor of the ServiceManager class first:

public ServiceManager(IRepositoryManager repositoryManager, ILoggerManager logger, IMapper mapper, IDataShaper<EmployeeDto> dataShaper)
{
    _companyService = new Lazy<ICompanyService>(() =>
        new CompanyService(repositoryManager, logger, mapper));
    _employeeService = new Lazy<IEmployeeService>(() =>
        new EmployeeService(repositoryManager, logger, mapper, dataShaper));
}

We are going to use it only in the EmployeeService class.

Next, let’s add one more field and modify the constructor in the EmployeeService class:

... 
private readonly IDataShaper<EmployeeDto> _dataShaper;

public EmployeeService(IRepositoryManager repository, ILoggerManager logger, IMapper mapper, IDataShaper<EmployeeDto> dataShaper)
{
    _repository = repository;
    _logger = logger;
    _mapper = mapper;
    _dataShaper = dataShaper;
}

Let’s also modify the GetEmployeesAsync method of the same class:

public async Task<(IEnumerable<ExpandoObject> employees, MetaData metaData)> GetEmployeesAsync
    (Guid companyId, EmployeeParameters employeeParameters, bool trackChanges)
{
    if (!employeeParameters.ValidAgeRange)
        throw new MaxAgeRangeBadRequestException();

    await CheckIfCompanyExists(companyId, trackChanges);

    var employeesWithMetaData = await _repository.Employee
        .GetEmployeesAsync(companyId, employeeParameters, trackChanges);

    var employeesDto = _mapper.Map<IEnumerable<EmployeeDto>>(employeesWithMetaData);

    var shapedData = _dataShaper.ShapeData(employeesDto, employeeParameters.Fields);

    return (employees: shapedData, metaData: employeesWithMetaData.MetaData);
}

We have changed the method signature, so we have to modify the interface as well:

Task<(IEnumerable<ExpandoObject> employees, MetaData metaData)> GetEmployeesAsync(Guid companyId, EmployeeParameters employeeParameters, bool trackChanges);

Now, we can test our solution:
https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees?fields=name,age

alt text

It works great.

Let’s also test this solution by combining all the functionalities that we’ve implemented in the previous chapters:

https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees?pageNumber=1&pageSize=4&minAge=26&maxAge=32&searchTerm=A&orderBy=name desc&fields=name,age

alt text

Excellent. Everything is working like a charm.

20.4 Resolving XML Serialization Problems

Let’s send the same request one more time, but this time with a different Accept header (text/xml):

alt text

It works — but it looks pretty ugly and unreadable. But that’s how the XmlDataContractSerializerOutputFormatter serializes our ExpandoObject by default.

We can fix that, but the logic is out of the scope of this book. Of course, we have implemented the solution in our source code. So, if you want, you can use it in your project.

All you have to do is to create the Entity class and copy the content from our Entity class that resides in the Entities/Models folder.

After that, just modify the IDataShaper interface and the DataShaper class by using the Entity type instead of the ExpandoObject type. Also, you have to do the same thing for the IEmployeeService interface and the EmployeeService class. Again, you can check our implementation if you have any problems.

After all those changes, once we send the same request, we are going to see a much better result:

https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees?pageNumber=1&pageSize=4&minAge=26&maxAge=32&searchTerm=A&orderBy=name desc&fields=name,age

alt text

If XML serialization is not important to you, you can keep using ExpandoObject — but if you want a nicely formatted XML response, this is the way to go.

To sum up, data shaping is an exciting and neat little feature that can make our APIs flexible and reduce our network traffic. If we have a high-volume traffic API, data shaping should work just fine. On the other hand, it’s not a feature that we should use lightly because it utilizes reflection and dynamic typing to get things done.

As with all other functionalities, we need to be careful when and if we should implement data shaping. Performance tests might come in handy even if we do implement it.

21 SUPPORTING HATEOAS

In this section, we are going to talk about one of the most important concepts in building RESTful APIs — HATEOAS and learn how to implement HATEOAS in ASP.NET Core Web API. This part relies heavily on the concepts we've implemented so far in paging, filtering, searching, sorting, and especially data shaping and builds upon the foundations we've put down in these parts.‌

21.1 What is HATEOAS and Why is it so Important?

HATEOAS (Hypermedia as the Engine of Application State) is a very important REST constraint. Without it, a REST API cannot be considered RESTful and many of the benefits we get by implementing a REST architecture are unavailable.‌

Hypermedia refers to any kind of content that contains links to media types such as documents, images, videos, etc.

REST architecture allows us to generate hypermedia links in our responses dynamically and thus make navigation much easier. To put this into perspective, think about a website that uses hyperlinks to help you navigate to different parts of it. You can achieve the same effect with HATEOAS in your REST API.

Imagine a website that has a home page and you land on it, but there are no links anywhere. You need to scrape the website or find some other way to navigate it to get to the content you want. We're not saying that the website is the same as a REST API, but you get the point.

The power of being able to explore an API on your own can be very useful.

Let's see how that works.

21.1.1 Typical Response with HATEOAS Implemented
Once we implement HATEOAS in our API, we are going to have this type of response:‌

alt text

As you can see, we got the list of our employees and for each employee all the actions we can perform on them. And so on...

So, it's a nice way to make an API self-discoverable and evolvable.
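To make that shape concrete, here is a trimmed sketch of such a response body; the employee values and URIs are illustrative placeholders, and the exact property names depend on the implementation we build later in this chapter:

```json
{
  "value": [
    {
      "Name": "Sam",
      "Age": 26,
      "Links": [
        { "href": "https://localhost:5001/api/companies/{companyId}/employees/{id}", "rel": "self", "method": "GET" },
        { "href": "https://localhost:5001/api/companies/{companyId}/employees/{id}", "rel": "delete_employee", "method": "DELETE" }
      ]
    }
  ],
  "links": [
    { "href": "https://localhost:5001/api/companies/{companyId}/employees", "rel": "self", "method": "GET" }
  ]
}
```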

21.1.2 What is a Link?‌

According to RFC5988, a link is "a typed connection between two resources that are identified by Internationalised Resource Identifiers (IRIs)". Simply put, we use links to traverse the internet or rather the resources on the internet.

Our responses contain an array of links, which consist of a few properties according to the RFC:

• href - represents a target URI.

• rel - represents a link relation type, which means it describes how the current context is related to the target resource.

• method - the HTTP method; we need it to distinguish between links that target the same URI.
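Put together, a single link object in our responses will carry exactly these three properties (the URI shown is an illustrative placeholder):

```json
{
  "href": "https://localhost:5001/api/companies/{companyId}/employees/{id}",
  "rel": "self",
  "method": "GET"
}
```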

21.1.3 Pros/Cons of Implementing HATEOAS

So, what are all the benefits we can expect when implementing HATEOAS?

HATEOAS is not trivial to implement, but the rewards we reap are worth it. Here are the things we can expect to get when we implement HATEOAS:

• API becomes self-discoverable and explorable.

• A client can use the links to implement its logic; this makes the client much simpler, and any changes in the API structure are directly reflected on the client.

• The server drives the application state and URL structure and not vice versa.

• The link relations can be used to point to the developer’s documentation.

• Versioning through hyperlinks becomes easier.

• Reduced number of invalid state transition calls.

• API is evolvable without breaking all the clients.

We can do so much with HATEOAS. But since it's not easy to implement all these features, we should keep in mind the scope of our API and if we need all this. There is a great difference between a high-volume public API and some internal API that is needed to communicate between parts of the same system.

That is more than enough theory for now. Let's get to work and see what the concrete implementation of HATEOAS looks like.

21.2 Adding Links in the Project

Let’s begin with the concept we know so far, and that’s the link. In the Entities project, we are going to create the LinkModels folder and, inside it, a new Link class:‌

public class Link
{
    public string? Href { get; set; }
    public string? Rel { get; set; }
    public string? Method { get; set; }

    public Link()
    { }

    public Link(string href, string rel, string method)
    {
        Href = href;
        Rel = rel;
        Method = method;
    }
}

Note that we have an empty constructor, too. We'll need that for XML serialization purposes, so keep it that way.

Next, we need to create a class that will contain all of our links — LinkResourceBase:

public class LinkResourceBase
{
    public LinkResourceBase()
    { }

    public List<Link> Links { get; set; } = new List<Link>();
}

And finally, since our response needs to describe the root of the controller, we need a wrapper for our links:

public class LinkCollectionWrapper<T> : LinkResourceBase
{
    public List<T> Value { get; set; } = new List<T>();

    public LinkCollectionWrapper()
    { }

    public LinkCollectionWrapper(List<T> value) => Value = value;
}

This class might not make too much sense right now, but stay with us and it will become clear later down the road. For now, let's just assume we wrapped our links in another class for response representation purposes.

Since our response will contain links too, we need to extend the XML serialization rules so that our XML response returns the properly formatted links. Without this, we would get something like:

<Links>System.Collections.Generic.List`1[Entities.Models.Link]</Links>. So, in the Entities/Models/Entity class, we need to extend the WriteLinksToXml method to support links:

private void WriteLinksToXml(string key, object value, XmlWriter writer)
{
    writer.WriteStartElement(key);

    if (value.GetType() == typeof(List<Link>))
    {
        foreach (var val in value as List<Link>)
        {
            writer.WriteStartElement(nameof(Link));
            WriteLinksToXml(nameof(val.Href), val.Href, writer);
            WriteLinksToXml(nameof(val.Method), val.Method, writer);
            WriteLinksToXml(nameof(val.Rel), val.Rel, writer);
            writer.WriteEndElement();
        }
    }
    else
    {
        writer.WriteString(value.ToString());
    }

    writer.WriteEndElement();
}

So, we check whether the value is a List<Link>. If it is, we iterate through all the links and call the method recursively for each of the properties: Href, Method, and Rel.
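With this in place, a shaped entity’s Links property serializes into readable XML along these lines (the URI is an illustrative placeholder):

```xml
<Links>
  <Link>
    <Href>https://localhost:5001/api/companies/{companyId}/employees/{id}</Href>
    <Method>GET</Method>
    <Rel>self</Rel>
  </Link>
</Links>
```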

That's all we need for now. We have a solid foundation to implement HATEOAS in our project.

21.3 Additional Project Changes

When we generate links, HATEOAS strongly relies on having the ids available to construct the links for the response. Data shaping, on the‌ other hand, enables us to return only the fields we want. So, if we want only the name and age fields, the id field won’t be added. To solve that, we have to apply some changes.

The first thing we are going to do is to add a ShapedEntity class in the Entities/Models folder:

public class ShapedEntity
{
    public ShapedEntity()
    {
        Entity = new Entity();
    }

    public Guid Id { get; set; }
    public Entity Entity { get; set; }
}

With this class, we expose the Entity and the Id property as well.

Now, we have to modify the IDataShaper interface and the DataShaper class by replacing all Entity usage with ShapedEntity.

In addition to that, we need to extend the FetchDataForEntity method in the DataShaper class to get the id separately:

private ShapedEntity FetchDataForEntity(T entity, IEnumerable<PropertyInfo> requiredProperties)
{
    var shapedObject = new ShapedEntity();

    foreach (var property in requiredProperties)
    {
        var objectPropertyValue = property.GetValue(entity);
        shapedObject.Entity.TryAdd(property.Name, objectPropertyValue);
    }

    var objectProperty = entity.GetType().GetProperty("Id");
    shapedObject.Id = (Guid)objectProperty.GetValue(entity);

    return shapedObject;
}

Finally, let’s add the LinkResponse class in the LinkModels folder; that will help us with the response once we start with the HATEOAS implementation:

public class LinkResponse
{
    public bool HasLinks { get; set; }
    public List<Entity> ShapedEntities { get; set; }
    public LinkCollectionWrapper<Entity> LinkedEntities { get; set; }

    public LinkResponse()
    {
        LinkedEntities = new LinkCollectionWrapper<Entity>();
        ShapedEntities = new List<Entity>();
    }
}

With this class, we are going to know whether our response has links. If it does, we are going to use the LinkedEntities property. Otherwise, we are going to use the ShapedEntities property.

21.4 Adding Custom Media Types

What we want to do is to enable links in our response only if it is explicitly asked for. To do that, we are going to introduce custom media types.‌

Before we start, let’s see how we can create a custom media type. A custom media type should look something like this: application/vnd.codemaze.hateoas+json. Compare that to the typical JSON media type we use by default: application/json.

So let’s break down the different parts of a custom media type:

• vnd – vendor prefix; it’s always there.

• codemaze – vendor identifier; we’ve chosen codemaze, because why not?

• hateoas – media type name.

• json – suffix; we can use it to describe if we want json or an XML response, for example.
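These segments are exactly what the Microsoft.Net.Http.Headers.MediaTypeHeaderValue type exposes after parsing, which is what we will rely on later to detect the hateoas media type. A minimal sketch:

```csharp
using System;
using Microsoft.Net.Http.Headers;

class Program
{
    static void Main()
    {
        // Parse our custom media type into its constituent parts.
        if (MediaTypeHeaderValue.TryParse("application/vnd.codemaze.hateoas+json", out var mediaType))
        {
            Console.WriteLine(mediaType.Type);                 // application
            Console.WriteLine(mediaType.SubTypeWithoutSuffix); // vnd.codemaze.hateoas
            Console.WriteLine(mediaType.Suffix);               // json
        }
    }
}
```

Checking whether SubTypeWithoutSuffix ends with "hateoas" is how we will later decide if the client asked for a HATEOAS-enriched response.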

Now, let’s implement that in our application.

21.4.1 Registering Custom Media Types

First, we want to register our new custom media types in the middleware. Otherwise, we’ll just get a 406 Not Acceptable message.

Let’s add a new extension method to our ServiceExtensions:

public static void AddCustomMediaTypes(this IServiceCollection services)
{
    services.Configure<MvcOptions>(config =>
    {
        var systemTextJsonOutputFormatter = config.OutputFormatters
            .OfType<SystemTextJsonOutputFormatter>()?.FirstOrDefault();

        if (systemTextJsonOutputFormatter != null)
        {
            systemTextJsonOutputFormatter.SupportedMediaTypes
                .Add("application/vnd.codemaze.hateoas+json");
        }

        var xmlOutputFormatter = config.OutputFormatters
            .OfType<XmlDataContractSerializerOutputFormatter>()?.FirstOrDefault();

        if (xmlOutputFormatter != null)
        {
            xmlOutputFormatter.SupportedMediaTypes
                .Add("application/vnd.codemaze.hateoas+xml");
        }
    });
}

We are registering two new custom media types for the JSON and XML output formatters. This ensures we don’t get a 406 Not Acceptable response.

Now, we have to add that to the Program class, just after the AddControllers method:

builder.Services.AddCustomMediaTypes();

Excellent. The registration process is done.

21.4.2 Implementing a Media Type Validation Filter

Now, since we’ve implemented custom media types, we want our Accept header to be present in our requests so we can detect when the user requested the HATEOAS-enriched response.

To do that, we’ll implement an ActionFilter in the Presentation project inside the ActionFilters folder, which will validate our Accept header and media types:

public class ValidateMediaTypeAttribute : IActionFilter
{
    public void OnActionExecuting(ActionExecutingContext context)
    {
        var acceptHeaderPresent = context.HttpContext
            .Request.Headers.ContainsKey("Accept");

        if (!acceptHeaderPresent)
        {
            context.Result = new BadRequestObjectResult($"Accept header is missing.");
            return;
        }

        var mediaType = context.HttpContext
            .Request.Headers["Accept"].FirstOrDefault();

        if (!MediaTypeHeaderValue.TryParse(mediaType, out MediaTypeHeaderValue? outMediaType))
        {
            context.Result = new BadRequestObjectResult($"Media type not present. Please add Accept header with the required media type.");
            return;
        }

        context.HttpContext.Items.Add("AcceptHeaderMediaType", outMediaType);
    }

    public void OnActionExecuted(ActionExecutedContext context)
    { }
}

We check for the existence of the Accept header first. If it’s not present, we return BadRequest. If it is, we parse the media type — and if there is no valid media type present, we return BadRequest.

Once we’ve passed the validation checks, we pass the parsed media type to the HttpContext of the controller.

Now, we have to register the filter in the Program class:

builder.Services.AddScoped<ValidateMediaTypeAttribute>();

And to decorate the GetEmployeesForCompany action:

[HttpGet] [ServiceFilter(typeof(ValidateMediaTypeAttribute))] public async Task<IActionResult> GetEmployeesForCompany(Guid companyId, [FromQuery] EmployeeParameters employeeParameters)

Great job.

Finally, we can work on the HATEOAS implementation.

21.5 Implementing HATEOAS

We are going to start by creating a new interface in the Contracts‌ project:

public interface IEmployeeLinks
{
    LinkResponse TryGenerateLinks(IEnumerable<EmployeeDto> employeesDto, string fields, Guid companyId, HttpContext httpContext);
}

Currently, you will get an error about HttpContext, but we will solve that a bit later.

Let’s continue by creating a new Utility folder in the main project and the EmployeeLinks class in it. Let’s start by adding the required dependencies inside the class:

public class EmployeeLinks : IEmployeeLinks
{
    private readonly LinkGenerator _linkGenerator;
    private readonly IDataShaper<EmployeeDto> _dataShaper;

    public EmployeeLinks(LinkGenerator linkGenerator, IDataShaper<EmployeeDto> dataShaper)
    {
        _linkGenerator = linkGenerator;
        _dataShaper = dataShaper;
    }
}

We are going to use LinkGenerator to generate links for our responses and IDataShaper to shape our data. As you can see, the shaping logic is now extracted from the EmployeeService class, which we will modify a bit later.

After dependencies, we are going to add the first method:

public LinkResponse TryGenerateLinks(IEnumerable<EmployeeDto> employeesDto, string fields, Guid companyId, HttpContext httpContext)
{
    var shapedEmployees = ShapeData(employeesDto, fields);

    if (ShouldGenerateLinks(httpContext))
        return ReturnLinkdedEmployees(employeesDto, fields, companyId, httpContext, shapedEmployees);

    return ReturnShapedEmployees(shapedEmployees);
}

So, our method accepts four parameters: the employeesDto collection; the fields used to shape that collection; companyId, because routes to the employee resources contain the id from the company; and httpContext, which holds information about media types.

The first thing we do is shape our collection. Then if the httpContext contains the required media type, we add links to the response. On the other hand, we just return our shaped data.

Of course, we have to add those not implemented methods:

private List<Entity> ShapeData(IEnumerable<EmployeeDto> employeesDto, string fields) =>
    _dataShaper.ShapeData(employeesDto, fields)
        .Select(e => e.Entity)
        .ToList();

The ShapeData method executes data shaping and extracts only the entity part without the Id property.

Let’s add two additional methods:

private bool ShouldGenerateLinks(HttpContext httpContext)
{
    var mediaType = (MediaTypeHeaderValue)httpContext.Items["AcceptHeaderMediaType"];

    return mediaType.SubTypeWithoutSuffix.EndsWith("hateoas", StringComparison.InvariantCultureIgnoreCase);
}

private LinkResponse ReturnShapedEmployees(List<Entity> shapedEmployees) =>
    new LinkResponse { ShapedEntities = shapedEmployees };

In the ShouldGenerateLinks method, we extract the media type from the httpContext. If that media type ends with hateoas, the method returns true; otherwise, it returns false. The ReturnShapedEmployees method just returns a new LinkResponse with the ShapedEntities property populated. By default, the HasLinks property is false.

After these methods, we have to add the ReturnLinkedEmployees method as well:

private LinkResponse ReturnLinkdedEmployees(IEnumerable<EmployeeDto> employeesDto, string fields, Guid companyId, HttpContext httpContext, List<Entity> shapedEmployees)
{
    var employeeDtoList = employeesDto.ToList();

    for (var index = 0; index < employeeDtoList.Count; index++)
    {
        var employeeLinks = CreateLinksForEmployee(httpContext, companyId, employeeDtoList[index].Id, fields);
        shapedEmployees[index].Add("Links", employeeLinks);
    }

    var employeeCollection = new LinkCollectionWrapper<Entity>(shapedEmployees);
    var linkedEmployees = CreateLinksForEmployees(httpContext, employeeCollection);

    return new LinkResponse { HasLinks = true, LinkedEntities = linkedEmployees };
}

In this method, we iterate through each employee and create links for it by calling the CreateLinksForEmployee method. Then, we just add it to the shapedEmployees collection. After that, we wrap the collection and create links that are important for the entire collection by calling the CreateLinksForEmployees method.

Finally, we have to add those two new methods that create links:

private List<Link> CreateLinksForEmployee(HttpContext httpContext, Guid companyId, Guid id, string fields = "")
{
    var links = new List<Link>
    {
        new Link(_linkGenerator.GetUriByAction(httpContext, "GetEmployeeForCompany", values: new { companyId, id, fields }),
            "self", "GET"),
        new Link(_linkGenerator.GetUriByAction(httpContext, "DeleteEmployeeForCompany", values: new { companyId, id }),
            "delete_employee", "DELETE"),
        new Link(_linkGenerator.GetUriByAction(httpContext, "UpdateEmployeeForCompany", values: new { companyId, id }),
            "update_employee", "PUT"),
        new Link(_linkGenerator.GetUriByAction(httpContext, "PartiallyUpdateEmployeeForCompany", values: new { companyId, id }),
            "partially_update_employee", "PATCH")
    };

    return links;
}

private LinkCollectionWrapper<Entity> CreateLinksForEmployees(HttpContext httpContext, LinkCollectionWrapper<Entity> employeesWrapper)
{
    employeesWrapper.Links.Add(new Link(_linkGenerator.GetUriByAction(httpContext, "GetEmployeesForCompany", values: new { }),
        "self", "GET"));

    return employeesWrapper;
}

There are a few things to note here.

We need to consider the fields while creating the links since we might be using them in our requests. We are creating the links by using the LinkGenerator's GetUriByAction method, which accepts HttpContext, the name of the action, and the values that need to be used to make the URL valid. In the case of the EmployeesController, we send the company id, employee id, and fields.

And that is it regarding this class.

Now, we have to register this class in the Program class:

builder.Services.AddScoped<IEmployeeLinks, EmployeeLinks>();

After the service registration, we are going to create a new record inside the Entities/LinkModels folder:

public record LinkParameters(EmployeeParameters EmployeeParameters, HttpContext Context);

We are going to use this record to transfer required parameters from our controller to the service layer and avoid the installation of an additional NuGet package inside the Service and Service.Contracts projects.

Also for this to work, we have to add the reference to the Shared project, install the Microsoft.AspNetCore.Mvc.Abstractions package needed for HttpContext, and add required using directives:

using Microsoft.AspNetCore.Http; 
using Shared.RequestFeatures;

Now, we can return to the IEmployeeLinks interface and fix that error by importing the required namespace. As you can see, we didn’t have to install the Abstractions NuGet package since Contracts references Entities. If Visual Studio keeps asking for the package installation, just remove the Entities reference from the Contracts project and add it again.

Once that is done, we can modify the EmployeesController:

[HttpGet]
[ServiceFilter(typeof(ValidateMediaTypeAttribute))]
public async Task<IActionResult> GetEmployeesForCompany(Guid companyId, [FromQuery] EmployeeParameters employeeParameters)
{
    var linkParams = new LinkParameters(employeeParameters, HttpContext);

    var pagedResult = await _service.EmployeeService.GetEmployeesAsync(companyId, linkParams, trackChanges: false);

    Response.Headers.Add("X-Pagination", JsonSerializer.Serialize(pagedResult.metaData));

    return Ok(pagedResult.employees);
}

So, we create the linkParams variable and send it instead of employeeParameters to the service method.

Of course, this means we have to modify the IEmployeeService interface:

Task<(LinkResponse linkResponse, MetaData metaData)> GetEmployeesAsync(Guid companyId, LinkParameters linkParameters, bool trackChanges);

The Tuple return type now has LinkResponse as its first field, and the method accepts LinkParameters as its second parameter.

After we modified our interface, let’s modify the EmployeeService class:

private readonly IRepositoryManager _repository;
private readonly ILoggerManager _logger;
private readonly IMapper _mapper;
private readonly IEmployeeLinks _employeeLinks;

public EmployeeService(IRepositoryManager repository, ILoggerManager logger, IMapper mapper, IEmployeeLinks employeeLinks)
{
    _repository = repository;
    _logger = logger;
    _mapper = mapper;
    _employeeLinks = employeeLinks;
}

public async Task<(LinkResponse linkResponse, MetaData metaData)> GetEmployeesAsync
    (Guid companyId, LinkParameters linkParameters, bool trackChanges)
{
    if (!linkParameters.EmployeeParameters.ValidAgeRange)
        throw new MaxAgeRangeBadRequestException();

    await CheckIfCompanyExists(companyId, trackChanges);

    var employeesWithMetaData = await _repository.Employee
        .GetEmployeesAsync(companyId, linkParameters.EmployeeParameters, trackChanges);

    var employeesDto = _mapper.Map<IEnumerable<EmployeeDto>>(employeesWithMetaData);

    var links = _employeeLinks.TryGenerateLinks(employeesDto, linkParameters.EmployeeParameters.Fields, companyId, linkParameters.Context);

    return (linkResponse: links, metaData: employeesWithMetaData.MetaData);
}

First, we don’t have the DataShaper injected anymore since this logic is now inside the EmployeeLinks class. Then, we change the method signature, fix a couple of errors since now we have linkParameters and not employeeParameters as a parameter, and we call the TryGenerateLinks method, which will return LinkResponse as a result.

Finally, we construct our Tuple and return it to the caller.

Now we can return to our controller and modify the GetEmployeesForCompany action:

[HttpGet]
[ServiceFilter(typeof(ValidateMediaTypeAttribute))]
public async Task<IActionResult> GetEmployeesForCompany(Guid companyId, [FromQuery] EmployeeParameters employeeParameters)
{
    var linkParams = new LinkParameters(employeeParameters, HttpContext);

    var result = await _service.EmployeeService.GetEmployeesAsync(companyId, linkParams, trackChanges: false);

    Response.Headers.Add("X-Pagination", JsonSerializer.Serialize(result.metaData));

    return result.linkResponse.HasLinks ? Ok(result.linkResponse.LinkedEntities) : Ok(result.linkResponse.ShapedEntities);
}

We change the pagedResult variable name to result and use it to return the proper response to the client. If our result has links, we return the linked entities; otherwise, we return the shaped ones.

Before we test this, we shouldn’t forget to modify the ServiceManager’s constructor:

public ServiceManager(IRepositoryManager repositoryManager, ILoggerManager logger, IMapper mapper, IEmployeeLinks employeeLinks)
{
    _companyService = new Lazy<ICompanyService>(() =>
        new CompanyService(repositoryManager, logger, mapper));
    _employeeService = new Lazy<IEmployeeService>(() =>
        new EmployeeService(repositoryManager, logger, mapper, employeeLinks));
}

Excellent. We can test this now:
https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees?pageNumber=1&pageSize=4&minAge=26&maxAge=32&searchTerm=A&orderBy=name desc&fields=name,age

alt text

You can test this with the xml media type as well (we have prepared the request in Postman for you).

22 WORKING WITH OPTIONS AND HEAD REQUESTS

In one of the previous chapters (Method Safety and Method Idempotency), we talked about different HTTP requests. Until now, we have been working with all request types except OPTIONS and HEAD. So, let’s cover them as well.‌

22.1 OPTIONS HTTP Request

The Options request can be used to request information on the communication options available for a certain URI. It allows consumers to determine the options or different requirements associated with a resource. Additionally, it allows us to check the capabilities of a server without having to retrieve an actual resource.

Basically, Options should inform us whether we can Get a resource or execute any other action (POST, PUT, or DELETE). All of the options should be returned in the Allow header of the response as a comma-separated list of methods.

Let’s see how we can implement the Options request in our example.

22.2 OPTIONS Implementation

We are going to implement this request in the CompaniesController — so, let’s open it and add a new action:‌

[HttpOptions]
public IActionResult GetCompaniesOptions()
{
    Response.Headers.Add("Allow", "GET, OPTIONS, POST");

    return Ok();
}

We have to decorate our action with the HttpOptions attribute. As we said, the available options should be returned in the Allow response header, and that is exactly what we are doing here. The URI for this action is /api/companies, so we state which actions can be executed for that certain URI. Finally, the Options request should return the 200 OK status code. We have to understand that the response, if it is empty, must include the content-length field with the value of zero. We don’t have to add it by ourselves because ASP.NET Core takes care of that for us.
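So, an OPTIONS response for this URI boils down to headers only, along these lines:

```
HTTP/1.1 200 OK
Allow: GET, OPTIONS, POST
Content-Length: 0
```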

Let’s try this:

https://localhost:5001/api/companies

alt text

As you can see, we are getting a 200 OK response. Let’s inspect the Headers tab:

alt text

Everything works as expected.

Let’s move on.

22.3 Head HTTP Request

HEAD is identical to GET, but without a response body. This type of request can be used to obtain information about the validity, accessibility, and recent modifications of a resource.

22.4 HEAD Implementation

Let’s open the EmployeesController, because that’s where we are going to implement this type of request. As we said, the Head request must return the same response as the Get request — just without the response body. That means it should include the paging information in the response as well.‌

Now, you may think that we have to write a completely new action and also repeat all the code inside, but that is not the case. All we have to do is add the HttpHead attribute below HttpGet:

[HttpGet] [HttpHead] public async Task<IActionResult> GetEmployeesForCompany(Guid companyId, [FromQuery] EmployeeParameters employeeParameters)

We can test this now:

https://localhost:5001/api/companies/C9D4C053-49B6-410C-BC78-2D54A9991870/employees?pageNumber=2&pageSize=2

alt text

As you can see, we receive a 200 OK status code with an empty body. Let’s check the Headers part:

alt text

You can see the X-Pagination link included in the Headers part of the response. Additionally, all the parts of the X-Pagination link are populated — which means that our code was successfully executed, but the response body hasn’t been included.

Excellent.

We now have support for the Http OPTIONS and HEAD requests.

23 ROOT DOCUMENT

In this section, we are going to create a starting point for the consumers of our API. This starting point is also known as the Root Document. The Root Document is the place where consumers can learn how to interact with the rest of the API.‌

23.1 Root Document Implementation
This document should be created at the API root, so let’s start by creating a new controller:

[Route("api")] [ApiController] public class RootController : ControllerBase { }

We are going to generate links towards the API actions. Therefore, we have to inject LinkGenerator:

[Route("api")]
[ApiController]
public class RootController : ControllerBase
{
    private readonly LinkGenerator _linkGenerator;

    public RootController(LinkGenerator linkGenerator) => _linkGenerator = linkGenerator;
}

In this controller, we only need a single action, GetRoot, which will be executed with the GET request on the /api URI.

There are several links that we are going to create in this action. The link to the document itself and links to actions available on the URIs at the root level (actions from the Companies controller). We are not creating links to employees, because they are children of the company — and in our API if we want to fetch employees, we have to fetch the company first.

If we inspect our CompaniesController, we can see that GetCompanies and CreateCompany are the only actions on the root URI level (api/companies). Therefore, we are going to create links only to them.

Before we start with the GetRoot action, let’s add a name for the CreateCompany and GetCompanies actions in the CompaniesController:

[HttpGet(Name = "GetCompanies")]
public async Task<IActionResult> GetCompanies()

[HttpPost(Name = "CreateCompany")]
[ServiceFilter(typeof(ValidationFilterAttribute))]
public async Task<IActionResult> CreateCompany([FromBody] CompanyForCreationDto company)

We are going to use the Link class to generate links:

public class Link
{
    public string Href { get; set; }
    public string Rel { get; set; }
    public string Method { get; set; }
    …
}

This class contains all the required properties to describe our actions while creating links in the GetRoot action. The Href property defines the URI to the action, the Rel property defines the identification of the action type, and the Method property defines which HTTP method should be used for that action.

Now, we can create the GetRoot action:

[HttpGet(Name = "GetRoot")]
public IActionResult GetRoot([FromHeader(Name = "Accept")] string mediaType)
{
    if (mediaType.Contains("application/vnd.codemaze.apiroot"))
    {
        var list = new List<Link>
        {
            new Link
            {
                Href = _linkGenerator.GetUriByName(HttpContext, nameof(GetRoot), new {}),
                Rel = "self",
                Method = "GET"
            },
            new Link
            {
                Href = _linkGenerator.GetUriByName(HttpContext, "GetCompanies", new {}),
                Rel = "companies",
                Method = "GET"
            },
            new Link
            {
                Href = _linkGenerator.GetUriByName(HttpContext, "CreateCompany", new {}),
                Rel = "create_company",
                Method = "POST"
            }
        };

        return Ok(list);
    }

    return NoContent();
}

In this action, we generate links only if a custom media type is provided from the Accept header. Otherwise, we return NoContent(). To generate links, we use the GetUriByName method from the LinkGenerator class.
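To visualize what a consumer actually receives, here is a hedged stand-alone sketch that builds the same Link list and serializes it. The Href values are hard-coded stand-ins for what LinkGenerator.GetUriByName would produce at runtime:

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Stand-in data; in the real GetRoot action the Href values come from LinkGenerator.
var links = new List<Link>
{
    new Link { Href = "https://localhost:5001/api",           Rel = "self",           Method = "GET" },
    new Link { Href = "https://localhost:5001/api/companies", Rel = "companies",      Method = "GET" },
    new Link { Href = "https://localhost:5001/api/companies", Rel = "create_company", Method = "POST" }
};

// Serialize the way the JSON output formatter would (modulo naming policy).
var json = JsonSerializer.Serialize(links, new JsonSerializerOptions { WriteIndented = true });
Console.WriteLine(json);

// The Link class from the text, repeated here so the sketch is self-contained.
public class Link
{
    public string Href { get; set; }
    public string Rel { get; set; }
    public string Method { get; set; }
}
```

This is how the root document lets a client discover the companies endpoints without hard-coding their URIs.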

That said, we have to register our custom media types for the json and xml formats. To do that, we are going to extend the AddCustomMediaTypes extension method:

public static void AddCustomMediaTypes(this IServiceCollection services)
{
    services.Configure<MvcOptions>(config =>
    {
        var systemTextJsonOutputFormatter = config.OutputFormatters
            .OfType<SystemTextJsonOutputFormatter>()?.FirstOrDefault();

        if (systemTextJsonOutputFormatter != null)
        {
            systemTextJsonOutputFormatter.SupportedMediaTypes
                .Add("application/vnd.codemaze.hateoas+json");
            systemTextJsonOutputFormatter.SupportedMediaTypes
                .Add("application/vnd.codemaze.apiroot+json");
        }

        var xmlOutputFormatter = config.OutputFormatters
            .OfType<XmlDataContractSerializerOutputFormatter>()?.FirstOrDefault();

        if (xmlOutputFormatter != null)
        {
            xmlOutputFormatter.SupportedMediaTypes
                .Add("application/vnd.codemaze.hateoas+xml");
            xmlOutputFormatter.SupportedMediaTypes
                .Add("application/vnd.codemaze.apiroot+xml");
        }
    });
}

We can now inspect our result:
https://localhost:5001/api

alt text

This works great.

Let’s test what is going to happen if we don’t provide the custom media type:

https://localhost:5001/api

alt text

Well, we get the 204 No Content message as expected. Of course, you can test the xml request as well:

https://localhost:5001/api

alt text

Great.

Now we can move on to the versioning chapter.

24 VERSIONING APIS

As our project grows, so does our knowledge; therefore, we have a better understanding of how to improve our system. Moreover, requirements change over time — thus, our API has to change as well.‌

When we implement some breaking changes, we want to ensure that we don’t do anything that will cause our API consumers to change their code. Those breaking changes could be:

• Renaming fields, properties, or resource URIs.

• Changes in the payload structure.

• Modifying response codes or HTTP Verbs.

• Redesigning our API endpoints.

If we have to implement some of these changes in the already working API, the best way is to apply versioning to prevent breaking our API for the existing API consumers.

There are different ways to achieve API versioning and there is no guidance that favors one way over another. So, we are going to show you different ways to version an API, and you can choose which one suits you best.

24.1 Required Package Installation and Configuration

In order to start, we have to install the Microsoft.AspNetCore.Mvc.Versioning library in the Presentation project:‌

alt text

This library is going to help us a lot in versioning our API.

After the installation, we have to add the versioning service in the service collection and configure it. So, let’s create a new extension method in the ServiceExtensions class:

public static void ConfigureVersioning(this IServiceCollection services)
{
    services.AddApiVersioning(opt =>
    {
        opt.ReportApiVersions = true;
        opt.AssumeDefaultVersionWhenUnspecified = true;
        opt.DefaultApiVersion = new ApiVersion(1, 0);
    });
}

With the AddApiVersioning method, we are adding service API versioning to the service collection. We are also using a couple of properties to initially configure versioning:

• ReportApiVersions adds the API version to the response header.
• AssumeDefaultVersionWhenUnspecified does exactly that. It specifies the default API version if the client doesn’t send one.

• DefaultApiVersion sets the default API version (1.0 in our case).

After that, we are going to use this extension in the Program class:

builder.Services.ConfigureVersioning();

API versioning is installed and configured, and we can move on.

24.2 Versioning Examples

Before we continue, let’s create another controller: CompaniesV2Controller (for example’s sake), which will represent a new version of our existing one. It is going to have just one Get action:‌

[ApiVersion("2.0")]
[Route("api/companies")]
[ApiController]
public class CompaniesV2Controller : ControllerBase
{
    private readonly IServiceManager _service;

    public CompaniesV2Controller(IServiceManager service) => _service = service;

    [HttpGet]
    public async Task<IActionResult> GetCompanies()
    {
        var companies = await _service.CompanyService
            .GetAllCompaniesAsync(trackChanges: false);

        return Ok(companies);
    }
}

By using the [ApiVersion("2.0")] attribute, we are stating that this controller represents version 2.0 of our API.

After that, let’s version our original controller as well:

[ApiVersion("1.0")]
[Route("api/companies")]
[ApiController]
public class CompaniesController : ControllerBase

If you remember, we configured versioning to use 1.0 as a default API version (opt.AssumeDefaultVersionWhenUnspecified = true;). Therefore, if a client doesn’t state the required version, our API will use this one:

https://localhost:5001/api/companies

alt text

If we inspect the Headers tab of the response, we are going to find that the controller V1 was assigned for this request:

alt text

Of course, you can place a breakpoint in GetCompanies actions in both controllers and confirm which endpoint was hit.

Now, let’s see how we can provide a version inside the request.

24.2.1 Using Query String‌

We can provide a version within the request by using a query string in the URI. Let’s test this with an example:

https://localhost:5001/api/companies?api-version=2.0

alt text

So, we get the same response body.

But, we can inspect the response headers to make sure that version 2.0 is used:

alt text

24.2.2 Using URL Versioning‌

For URL versioning to work, we have to modify the route in our controller:

[ApiVersion("2.0")]
[Route("api/{v:apiversion}/companies")]
[ApiController]
public class CompaniesV2Controller : ControllerBase

Also, let’s just slightly modify the GetCompanies action in this controller, so we could see the difference in Postman by just inspecting the response body:

[HttpGet]
public async Task<IActionResult> GetCompanies()
{
    var companies = await _service.CompanyService
        .GetAllCompaniesAsync(trackChanges: false);

    var companiesV2 = companies.Select(x => $"{x.Name} V2");

    return Ok(companiesV2);
}

We are creating a projection from our companies collection by iterating through each element, appending the V2 suffix to its Name property, and collecting the results in a new collection, companiesV2.

Now, we can test it:
https://localhost:5001/api/2.0/companies

alt text

One thing to mention: we can’t use the query string pattern to call the companies v2 controller anymore. We can still use it for version 1.0, though.

24.2.3 HTTP Header Versioning‌

If we don’t want to change the URI of the API, we can send the version in the HTTP Header. To enable this, we have to modify our configuration:

public static void ConfigureVersioning(this IServiceCollection services)
{
    services.AddApiVersioning(opt =>
    {
        opt.ReportApiVersions = true;
        opt.AssumeDefaultVersionWhenUnspecified = true;
        opt.DefaultApiVersion = new ApiVersion(1, 0);
        opt.ApiVersionReader = new HeaderApiVersionReader("api-version");
    });
}

And to revert the Route change in our controller:

[Route("api/companies")]
[ApiVersion("2.0")]

Let’s test these changes:
https://localhost:5001/api/companies

alt text

If we want to support query string versioning, we should use a new QueryStringApiVersionReader class instead:

opt.ApiVersionReader = new QueryStringApiVersionReader("api-version");
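If we want to accept the version from either the header or the query string at the same time, the library also lets us compose readers with its ApiVersionReader.Combine helper. This is a configuration sketch, not part of the book’s project code:

```csharp
services.AddApiVersioning(opt =>
{
    opt.ReportApiVersions = true;
    opt.AssumeDefaultVersionWhenUnspecified = true;
    opt.DefaultApiVersion = new ApiVersion(1, 0);

    // Accept the api-version value from either source;
    // whichever reader finds a value supplies the version.
    opt.ApiVersionReader = ApiVersionReader.Combine(
        new HeaderApiVersionReader("api-version"),
        new QueryStringApiVersionReader("api-version"));
});
```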

24.2.4 Deprecating Versions‌

If we want to deprecate a version of an API, but don’t want to remove it completely, we can use the Deprecated property for that purpose:

[ApiVersion("2.0", Deprecated = true)]

We will be able to work with that API, but we will be notified that this version is deprecated:

alt text

24.2.5 Using Conventions

If we have a lot of versions of a single controller, we can assign these versions in the configuration instead:

services.AddApiVersioning(opt =>
{
    opt.ReportApiVersions = true;
    opt.AssumeDefaultVersionWhenUnspecified = true;
    opt.DefaultApiVersion = new ApiVersion(1, 0);
    opt.ApiVersionReader = new HeaderApiVersionReader("api-version");
    opt.Conventions.Controller<CompaniesController>()
        .HasApiVersion(new ApiVersion(1, 0));
    opt.Conventions.Controller<CompaniesV2Controller>()
        .HasDeprecatedApiVersion(new ApiVersion(2, 0));
});

Now, we can remove the [ApiVersion] attribute from the controllers.

Of course, there are a lot more features that the installed library provides for us — but with the mentioned ones, we have covered quite enough to version our APIs.

25 CACHING

In this section, we are going to learn about caching resources. Caching can significantly improve the quality and performance of our app, but it is also one of the first things to inspect when a bug appears. To cover resource caching, we are going to work with HTTP Cache. Additionally, we are going to talk about cache expiration, validation, and cache-control headers.

25.1 About Caching

We want to use caching in our app because it can significantly improve performance; otherwise, there would be no point to it. The main goal of caching is to eliminate the need to send requests towards the API in many cases and to avoid sending full responses in other cases.

To reduce the number of sent requests, caching uses the expiration mechanism, which helps reduce network round trips. Furthermore, to eliminate the need to send full responses, the cache uses the validation mechanism, which reduces network bandwidth. We can now see why these two are so important when caching resources.

The cache is a separate component that accepts requests from the API’s consumer. It also accepts the response from the API and stores that response if it is cacheable. Once the response is stored, if a consumer requests the same response again, the response from the cache should be served.

But the cache behaves differently depending on what cache type is used.

25.1.1 Cache Types‌

There are three types of caches: Client Cache, Gateway Cache, and Proxy Cache.

The client cache lives on the client (browser); thus, it is a private cache. It is private because it is related to a single client. So every client consuming our API has a private cache.

The gateway cache lives on the server and is a shared cache. This cache is shared because the resources it caches are shared over different clients.

The proxy cache is also a shared cache, but it doesn’t live on the server nor the client side. It lives on the network.

With the private cache, if five clients request the same response for the first time, every response will be served from the API and not from the cache. But if they request the same response again, that response should come from the cache (if it’s not expired). This is not the case with the shared cache. The response from the first client is going to be cached, and then the other four clients will receive the cached response if they request it.

25.1.2 Response Cache Attribute‌

So, to cache some resources, we have to know whether or not they are cacheable. The response header helps us with that. The one used most often is Cache-Control, for example Cache-Control: max-age=180, which states that the response should be cached for 180 seconds. To set it, we use the ResponseCache attribute. But of course, this is just a header; if we want to actually cache something, we need a cache store. For our example, we are going to use the response caching middleware provided by ASP.NET Core.
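The Cache-Control header can be inspected with the framework’s strongly typed header classes. A small stand-alone illustration — the header string here mirrors the max-age example above, with an added public directive:

```csharp
using System;
using System.Net.Http.Headers;

// Parse a Cache-Control header like the one our API will emit.
// max-age=180 tells any cache the response stays fresh for 180 seconds.
var cacheControl = CacheControlHeaderValue.Parse("public, max-age=180");

Console.WriteLine(cacheControl.Public); // True  -> shared caches may store it
Console.WriteLine(cacheControl.MaxAge); // 00:03:00 -> fresh for 180 seconds
```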

25.2 Adding Cache Headers

Before we start, let’s open Postman and modify the settings to support caching:‌

alt text

In the General tab under Headers, we are going to turn off the Send no-cache header:

alt text

Great. We can move on.

Let’s assume we want to use the ResponseCache attribute to cache the result from the GetCompany action:

alt text

It is obvious that we can work with different properties in the ResponseCache attribute — but for now, we are going to use Duration only:

[HttpGet("{id}", Name = "CompanyById")]
[ResponseCache(Duration = 60)]
public async Task<IActionResult> GetCompany(Guid id)

And that is it. We can inspect our result now:
https://localhost:5001/api/companies/3d490a70-94ce-4d15-9494-5248280c2ce3

alt text

You can see that the Cache-Control header was created with a public cache and a duration of 60 seconds. But as we said, this is just a header; we need a cache-store to cache the response. So, let’s add one.

25.3 Adding Cache-Store

The first thing we are going to do is add an extension method in the‌ ServiceExtensions class:

public static void ConfigureResponseCaching(this IServiceCollection services) =>
    services.AddResponseCaching();

We register response caching in the IOC container, and now we have to call this method in the Program class:

builder.Services.ConfigureResponseCaching();

Additionally, we have to add caching to the application middleware right below UseCors() because Microsoft recommends having UseCors before UseResponseCaching, and as we learned in section 1.8, order is very important for middleware execution:

app.UseCors("CorsPolicy");
app.UseResponseCaching();

Now, we can start our application and send the same GetCompany request. It will generate the Cache-Control header. After that, before 60 seconds pass, we are going to send the same request and inspect the headers:

https://localhost:5001/api/companies/3d490a70-94ce-4d15-9494-5248280c2ce3

alt text

You can see the additional Age header that indicates the number of seconds the object has been stored in the cache. Basically, it means that we received our second response from the cache-store.

Another way to confirm that is to wait 60 seconds to pass. After that, you can send the request and inspect the console. You will see the SQL query generated. But if you send a second request, you will find no new logs for the SQL query. That’s because we are receiving our response from the cache.

Additionally, with every subsequent request within 60 seconds, the Age property will increment. After the expiration period passes, the response will be sent from the API, cached again, and the Age header will not be generated. You will also see new logs in the console.

Furthermore, we can use cache profiles to apply the same rules to different resources. If you look at the picture that shows all the properties we can use with ResponseCacheAttribute, you can see that there are a lot of properties. Configuring all of them on top of the action or controller could lead to less readable code. Therefore, we can use CacheProfiles to extract that configuration.

To do that, we are going to modify the AddControllers method:

builder.Services.AddControllers(config =>
{
    config.RespectBrowserAcceptHeader = true;
    config.ReturnHttpNotAcceptable = true;
    config.InputFormatters.Insert(0, GetJsonPatchInputFormatter());
    config.CacheProfiles.Add("120SecondsDuration", new CacheProfile { Duration = 120 });
})...

We only set up Duration, but you can add additional properties as well. Now, let’s implement this profile on top of the Companies controller:

[Route("api/companies")]
[ApiController]
[ResponseCache(CacheProfileName = "120SecondsDuration")]

We have to mention that this cache rule will apply to all the actions inside the controller except the ones that already have the ResponseCache attribute applied.

That said, once we send the request to GetCompany, we will still have the maximum age of 60. But once we send the request to GetCompanies:

https://localhost:5001/api/companies

alt text

There you go. Now, let’s talk some more about the Expiration and Validation models.

25.4 Expiration Model

The expiration model allows the server to recognize whether or not the response has expired. As long as the response is fresh, it will be served from the cache. To achieve that, the Cache-Control header is used. We have seen this in the previous example.‌

Let’s look at the diagram to see how caching works:

alt text

So, the client sends a request to get companies. There is no cached version of that response; therefore, the request is forwarded to the API. The API returns the response with the Cache-Control header with a 10-minute expiration period; it is stored in the cache and forwarded to the client.

If after two minutes, the same response has been requested:

alt text

We can see that the cached response was served with an additional Age header with a value of 120 seconds or two minutes. If this is a private cache, that is where it stops. That’s because the private cache is stored in the browser and another client will hit the API for the same response. But if this is a shared cache and another client requests the same response after an additional two minutes:

alt text

The response is served from the cache with an additional two minutes added to the Age header.

We saw how the Expiration model works, now let’s inspect the Validation model.

25.5 Validation Model

The validation model is used to validate the freshness of the response. So it checks if the response is cached and still usable. Let’s assume we have a shared cached GetCompany response for 30 minutes. If someone updates that company after five minutes, without validation the client would receive the wrong response for another 25 minutes — not the updated one.‌

To prevent that, we use validators. The HTTP standard advises using Last-Modified and ETag validators in combination if possible.

Let’s see how validation works:

alt text

So again, the client sends a request, it is not cached, and so it is forwarded to the API. Our API returns the response that contains the Etag and Last-Modified headers. That response is cached and forwarded to the client.

After two minutes, the client sends the same request:

alt text

So, the same request is sent, but we don’t know if the response is still valid. Therefore, the cache forwards the request to the API with two additional headers: If-None-Match, set to the ETag value, and If-Modified-Since, set to the Last-Modified value. If the request checks out against these validators, our API doesn’t have to recreate the same response; it just sends a 304 Not Modified status, and the regular response is then served from the cache. Of course, if it doesn’t check out, a new response must be generated.

That brings us to the conclusion that for the shared cache if the response hasn’t been modified, that response has to be generated only once. Let’s see all of these in an example.
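The conditional request the cache forwards can be sketched with HttpClient types. The ETag value and timestamp below are illustrative; a real cache would use the values it stored from the original response:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;

// Sketch of the conditional request a cache forwards to the API.
var request = new HttpRequestMessage(
    HttpMethod.Get,
    "https://localhost:5001/api/companies/3d490a70-94ce-4d15-9494-5248280c2ce3");

// If-None-Match carries the stored ETag (the value here is made up).
request.Headers.IfNoneMatch.Add(new EntityTagHeaderValue("\"33a64df551425fcc\""));

// If-Modified-Since carries the stored Last-Modified timestamp.
request.Headers.IfModifiedSince = DateTimeOffset.UtcNow.AddMinutes(-2);

Console.WriteLine(request.Headers.IfNoneMatch);

// If both validators still match on the server, the API answers
// 304 Not Modified with no body, and the cache serves its stored copy.
```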

25.6 Supporting Validation

To support validation, we are going to use the Marvin.Cache.Headers library. This library supports HTTP cache headers like Cache-Control, Expires, Etag, and Last-Modified and also implements validation and expiration models.‌

So, let’s install the Marvin.Cache.Headers library in the Presentation project, which will enable the reference for the main project as well. We are going to need it in both projects.

Now, let’s modify the ServiceExtensions class:

public static void ConfigureHttpCacheHeaders(this IServiceCollection services) =>
    services.AddHttpCacheHeaders();

We are going to add additional configuration later.

Then, let’s modify the Program class:

builder.Services.ConfigureResponseCaching(); 
builder.Services.ConfigureHttpCacheHeaders();

And finally, let’s add HttpCacheHeaders to the request pipeline:

app.UseResponseCaching(); 
app.UseHttpCacheHeaders();

To test this, we have to remove or comment out ResponseCache attributes in the CompaniesController. The installed library will provide that for us. Now, let’s send the GetCompany request:

https://localhost:5001/api/companies/3d490a70-94ce-4d15-9494-5248280c2ce3

alt text

We can see that we have all the required headers generated. The default expiration is set to 60 seconds and if we send this request one more time, we are going to get an additional Age header.

25.6.1 Configuration‌

We can globally configure our expiration and validation headers. To do that, let’s modify the ConfigureHttpCacheHeaders method:

public static void ConfigureHttpCacheHeaders(this IServiceCollection services) =>
    services.AddHttpCacheHeaders(
        (expirationOpt) =>
        {
            expirationOpt.MaxAge = 65;
            expirationOpt.CacheLocation = CacheLocation.Private;
        },
        (validationOpt) =>
        {
            validationOpt.MustRevalidate = true;
        });

After that, we are going to send the same request for a single company:

https://localhost:5001/api/companies/3d490a70-94ce-4d15-9494-5248280c2ce3

alt text

You can see that the changes are implemented. Now, this is a private cache with an age of 65 seconds. Because it is a private cache, our API won’t cache it. You can check the console again and see the SQL logs for each request you send.

Other than global configuration, we can apply it on the resource level (on action or controller). The overriding rules are the same. Configuration on the action level will override the configuration on the controller or global level. Also, the configuration on the controller level will override the global level configuration.

To apply a resource level configuration, we have to use the HttpCacheExpiration and HttpCacheValidation attributes:

[HttpGet("{id}", Name = "CompanyById")]
[HttpCacheExpiration(CacheLocation = CacheLocation.Public, MaxAge = 60)]
[HttpCacheValidation(MustRevalidate = false)]
public async Task<IActionResult> GetCompany(Guid id)

Once we send the GetCompanies request, we are going to see global values:

alt text

But if we send the GetCompany request:

alt text

You can see that it is public and you can send the same request again to see the Age header for the cached response.

25.7 Using ETag and Validation

First, we have to mention that the ResponseCaching library doesn’t correctly implement the validation model. It also has problems with the Authorization header. We are going to show you some alternatives later, but for now, we can simulate how validation with ETag should work.

So, let’s restart our app to have a fresh application, and send a GetCompany request one more time. In the response headers, we are going to get our ETag. Let’s copy the ETag’s value and send another GetCompany request with it:
https://localhost:5001/api/companies/3d490a70-94ce-4d15-9494-5248280c2ce3

alt text

We send the If-None-Match tag with the value of our Etag. And we can see as a result we get 304 Not Modified.

But this is not a valid situation. As we said, the client should send a plain request, and it is up to the cache to add the If-None-Match header. In our example, sent from Postman, we simulated that. Then it is up to the server to return a 304 message to the cache, and the cache should then return the stored response.

But anyhow, we have managed to show you how validation works. If we update that company:

https://localhost:5001/api/companies/3d490a70-94ce-4d15-9494-5248280c2ce3

alt text

And then send the same request with the same If-None-Match value:

https://localhost:5001/api/companies/3d490a70-94ce-4d15-9494-5248280c2ce3

alt text

You can see that we get 200 OK and if we inspect Headers, we will find that ETag is different because the resource changed:

alt text

So, we saw how validation works and also concluded that the ResponseCaching library is not that good for validation — it is much better for just expiration.

But then, what are the alternatives? There are a lot of alternatives, such as:

• Varnish - https://varnish-cache.org/

• Apache Traffic Server - https://trafficserver.apache.org/

• Squid - http://www.squid-cache.org/

They implement caching correctly. And if you want to have expiration and validation, you should combine them with the Marvin library and you are good to go. But those servers are not that trivial to implement.

There is another option: CDN (Content Delivery Network). CDN uses HTTP caching and is used by various sites on the internet. The good thing with CDN is we don’t need to set up a cache server by ourselves, but unfortunately, we have to pay for it. The previous cache servers we presented are free to use. So, it’s up to you to decide what suits you best.

26 RATE LIMITING AND THROTTLING

Rate limiting allows us to protect our API against too many requests that could deteriorate its performance; the API rejects requests that exceed the limit. Throttling, by contrast, queues requests that exceed the limit for possible later processing; the API eventually rejects a request if processing cannot occur after a certain number of attempts.

For example, we can configure our API to create a limitation of 100 requests/hour per client. Or additionally, we can limit a client to the maximum of 1,000 requests/day per IP and 100 requests/hour. We can even limit the number of requests for a specific resource in our API; for example, 50 requests to api/companies.

To provide information about rate limiting, we use response headers. They are separated into headers for allowed requests, which all start with X-Rate-Limit, and headers for disallowed requests.

The allowed-request headers contain the following information:

• X-Rate-Limit-Limit – rate limit period.

• X-Rate-Limit-Remaining – number of remaining requests.

• X-Rate-Limit-Reset – date/time information about resetting the request limit.

For disallowed requests, we return the 429 status code, which stands for Too Many Requests. This response may include the Retry-After header and should explain the details in the response body.
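A throttled client would typically read Retry-After and back off. A self-contained sketch — the response is constructed locally to mirror the headers our API sends, and the 300-second value is illustrative:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;

// Locally constructed stand-in for a 429 response from the API.
var response = new HttpResponseMessage(HttpStatusCode.TooManyRequests);
response.Headers.RetryAfter = new RetryConditionHeaderValue(TimeSpan.FromSeconds(300));

// A client reads the delta and waits before retrying.
var retryAfter = response.Headers.RetryAfter?.Delta;
Console.WriteLine((int)response.StatusCode); // 429
Console.WriteLine(retryAfter);               // 00:05:00 -> wait five minutes
```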

26.1 Implementing Rate Limiting

To start, we have to install the AspNetCoreRateLimit library in the main project:‌

alt text

Then, we have to add it to the service collection. This library uses a memory cache to store its counters and rules. Therefore, we have to add the MemoryCache to the service collection as well.

That said, let’s add the MemoryCache:

builder.Services.AddMemoryCache();

After that, we are going to create another extension method in the ServiceExtensions class:

public static void ConfigureRateLimitingOptions(this IServiceCollection services)
{
    var rateLimitRules = new List<RateLimitRule>
    {
        new RateLimitRule
        {
            Endpoint = "*",
            Limit = 3,
            Period = "5m"
        }
    };

    services.Configure<IpRateLimitOptions>(opt => { opt.GeneralRules = rateLimitRules; });

    services.AddSingleton<IRateLimitCounterStore, MemoryCacheRateLimitCounterStore>();
    services.AddSingleton<IIpPolicyStore, MemoryCacheIpPolicyStore>();
    services.AddSingleton<IRateLimitConfiguration, RateLimitConfiguration>();
    services.AddSingleton<IProcessingStrategy, AsyncKeyLockProcessingStrategy>();
}

First, we create the rate limit rules — for now just one, stating that three requests are allowed in a five-minute period for any endpoint in our API. Then, we configure IpRateLimitOptions to add the created rule. Finally, we have to register the rate limit stores, configuration, and processing strategy as singletons. They serve the purpose of storing rate limit counters and policies as well as adding configuration.

Now, we have to modify the Program class again:

builder.Services.ConfigureRateLimitingOptions(); 
builder.Services.AddHttpContextAccessor();
builder.Services.AddMemoryCache();

Finally, we have to add it to the request pipeline:

app.UseIpRateLimiting();
app.UseCors("CorsPolicy");

And that is it. We can test this now:
https://localhost:5001/api/companies

alt text

So, we can see that we have two requests remaining and the time to reset the rule. If we send an additional three requests in the five-minute period of time, we are going to get a different response:

https://localhost:5001/api/companies

alt text

The status code is 429 Too Many Requests and we have the Retry-After header.

We can inspect the body as well:

https://localhost:5001/api/companies

alt text

So, our rate limiting works.

There are a lot of options that can be configured with Rate Limiting and you can read more about them on the AspNetCoreRateLimit GitHub page.

27 JWT, IDENTITY, AND REFRESH TOKEN

User authentication is an important part of any application. It refers to the process of confirming the identity of an application’s users. Implementing it properly could be a hard job if you are not familiar with the process.‌

Also, it could take a lot of time that could be spent on different features of an application.

So, in this section, we are going to learn about authentication and authorization in ASP.NET Core by using Identity and JWT (Json Web Token). We are going to explain step by step how to integrate Identity in the existing project and then how to implement JWT for the authentication and authorization actions.

ASP.NET Core provides us with both functionalities, making implementation even easier.

Finally, we are going to learn more about the refresh token flow and implement it in our Web API project.

So, let’s start with Identity integration.

27.1 Implementing Identity in ASP.NET Core Project

ASP.NET Core Identity is the membership system for web applications that includes membership, login, and user data. It provides a rich set of services that help us with creating users, hashing their passwords, creating a database model, and authentication overall.
That said, let’s start with the integration process.

The first thing we have to do is to install the Microsoft.AspNetCore.Identity.EntityFrameworkCore library in the Entities project:

alt text

After the installation, we are going to create a new User class in the Entities/Models folder:

public class User : IdentityUser
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

Our class inherits from the IdentityUser class that has been provided by the ASP.NET Core Identity. It contains different properties and we can extend it with our own as well.

After that, we have to modify the RepositoryContext class:

public class RepositoryContext : IdentityDbContext<User>
{
    public RepositoryContext(DbContextOptions options)
        : base(options)
    {
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);
        modelBuilder.ApplyConfiguration(new CompanyConfiguration());
        modelBuilder.ApplyConfiguration(new EmployeeConfiguration());
    }

    public DbSet<Company> Companies { get; set; }
    public DbSet<Employee> Employees { get; set; }
}

So, our class now inherits from the IdentityDbContext class and not DbContext because we want to integrate our context with Identity. For this, we have to include the Identity.EntityFrameworkCore namespace:

using Microsoft.AspNetCore.Identity.EntityFrameworkCore;

We don’t have to install the library in the Repository project since we already did that in the Entities project, and Repository has the reference to Entities.

Additionally, we call the OnModelCreating method from the base class. This is required for migration to work properly.

Now, we have to move on to the configuration part.

To do that, let’s create a new extension method in the ServiceExtensions class:

public static void ConfigureIdentity(this IServiceCollection services)
{
    var builder = services.AddIdentity<User, IdentityRole>(o =>
    {
        o.Password.RequireDigit = true;
        o.Password.RequireLowercase = false;
        o.Password.RequireUppercase = false;
        o.Password.RequireNonAlphanumeric = false;
        o.Password.RequiredLength = 10;
        o.User.RequireUniqueEmail = true;
    })
    .AddEntityFrameworkStores<RepositoryContext>()
    .AddDefaultTokenProviders();
}

With the AddIdentity method, we are adding and configuring Identity for the specific type; in this case, the User and the IdentityRole type. We use different configuration parameters that are pretty self-explanatory on their own. Identity provides us with even more features to configure, but these are sufficient for our example.

Then, we add EntityFrameworkStores implementation with the default token providers.
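Taken together, the options above mean that a password must be at least ten characters long and contain a digit, while the case and non-alphanumeric requirements are relaxed. As a rough sketch of what those rules amount to (this is not Identity's actual validator, just a hypothetical helper for illustration):

```csharp
using System.Linq;

public static class PasswordPolicySketch
{
    // Hypothetical check mirroring the options configured above:
    // RequiredLength = 10, RequireDigit = true, everything else relaxed.
    public static bool IsValid(string password) =>
        password != null
        && password.Length >= 10
        && password.Any(char.IsDigit);
}
```

Identity's real validation runs inside UserManager.CreateAsync and reports failures as descriptive IdentityError objects rather than a bare bool.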

Now, let’s modify the Program class:

builder.Services.AddAuthentication(); 
builder.Services.ConfigureIdentity();

And, let’s add the authentication middleware to the application’s request pipeline:

app.UseAuthentication();
app.UseAuthorization();

Note that UseAuthentication must come before UseAuthorization; otherwise, the authorization middleware runs before the user has been authenticated and requests to protected endpoints are rejected.

That’s it. We have prepared everything we need.

27.2 Creating Tables and Inserting Roles

Creating tables is quite an easy process. All we have to do is to create and apply migration. So, let’s create a migration:‌

PM> Add-Migration CreatingIdentityTables

And then apply it:

PM> Update-Database

If we check our database now, we are going to see additional tables:

[Image: the additional AspNet* Identity tables in the database]

For our project, the AspNetRoles, AspNetUserRoles, and AspNetUsers tables will be quite enough. If you open the AspNetUsers table, you will see additional FirstName and LastName columns.

Now, let’s insert several roles in the AspNetRoles table, again by using migrations. The first thing we are going to do is to create the RoleConfiguration class in the Repository/Configuration folder:

public class RoleConfiguration : IEntityTypeConfiguration<IdentityRole>
{
    public void Configure(EntityTypeBuilder<IdentityRole> builder)
    {
        builder.HasData(
            new IdentityRole
            {
                Name = "Manager",
                NormalizedName = "MANAGER"
            },
            new IdentityRole
            {
                Name = "Administrator",
                NormalizedName = "ADMINISTRATOR"
            }
        );
    }
}

For this to work, we need the following namespaces included:

using Microsoft.AspNetCore.Identity; 
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

And let’s modify the OnModelCreating method in the RepositoryContext class:

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    base.OnModelCreating(modelBuilder);
    modelBuilder.ApplyConfiguration(new CompanyConfiguration());
    modelBuilder.ApplyConfiguration(new EmployeeConfiguration());
    modelBuilder.ApplyConfiguration(new RoleConfiguration());
}

Finally, let’s create and apply migration:

PM> Add-Migration AddedRolesToDb
PM> Update-Database

If you check the AspNetRoles table, you will find two new roles created.

27.3 User Creation

To create/register a new user, we have to create a new controller:‌

[Route("api/authentication")]
[ApiController]
public class AuthenticationController : ControllerBase
{
    private readonly IServiceManager _service;

    public AuthenticationController(IServiceManager service) => _service = service;
}

So, nothing new here. We have the basic setup for our controller with IServiceManager injected.

The next thing we have to do is to create a UserForRegistrationDto record in the Shared/DataTransferObjects folder:

public record UserForRegistrationDto
{
    public string? FirstName { get; init; }
    public string? LastName { get; init; }

    [Required(ErrorMessage = "Username is required")]
    public string? UserName { get; init; }

    [Required(ErrorMessage = "Password is required")]
    public string? Password { get; init; }

    public string? Email { get; init; }
    public string? PhoneNumber { get; init; }
    public ICollection<string>? Roles { get; init; }
}

Then, let’s create a mapping rule in the MappingProfile class:

CreateMap<UserForRegistrationDto, User>();

Since we want to extract all the registration/authentication logic to the service layer, we are going to create a new IAuthenticationService interface inside the Service.Contracts project:

public interface IAuthenticationService { Task<IdentityResult> RegisterUser(UserForRegistrationDto userForRegistration); }

This method will execute the registration logic and return the identity result to the caller.

Now that we have the interface, we need to create an implementation service class inside the Service project:

internal sealed class AuthenticationService : IAuthenticationService
{
    private readonly ILoggerManager _logger;
    private readonly IMapper _mapper;
    private readonly UserManager<User> _userManager;
    private readonly IConfiguration _configuration;

    public AuthenticationService(ILoggerManager logger, IMapper mapper,
        UserManager<User> userManager, IConfiguration configuration)
    {
        _logger = logger;
        _mapper = mapper;
        _userManager = userManager;
        _configuration = configuration;
    }
}

This code is pretty familiar from the previous service classes except for the UserManager class. This class is used to provide the APIs for managing users in a persistence store. It is not concerned with how user information is stored. For this, it relies on a UserStore (which in our case uses Entity Framework Core).

Of course, we have to add some additional namespaces:

using AutoMapper;
using Contracts;
using Entities.Models;
using Microsoft.AspNetCore.Identity;
using Microsoft.Extensions.Configuration;
using Service.Contracts;

Great. Now, we can implement the RegisterUser method:

public async Task<IdentityResult> RegisterUser(UserForRegistrationDto userForRegistration)
{
    var user = _mapper.Map<User>(userForRegistration);

    var result = await _userManager.CreateAsync(user, userForRegistration.Password);
    if (result.Succeeded)
        await _userManager.AddToRolesAsync(user, userForRegistration.Roles);

    return result;
}

So we map the DTO object to the User object and call the CreateAsync method to create that specific user in the database. The CreateAsync method will save the user to the database if the action succeeds or it will return error messages as a result.

After that, if a user is created, we add that user to the named roles — the ones sent from the client side — and return the result.

If you want, before calling AddToRoleAsync or AddToRolesAsync, you can check if roles exist in the database. But for that, you have to inject RoleManager and use the RoleExistsAsync method.

We want to provide this service to the caller through ServiceManager and for that, we have to modify the IServiceManager interface first:

public interface IServiceManager
{
    ICompanyService CompanyService { get; }
    IEmployeeService EmployeeService { get; }
    IAuthenticationService AuthenticationService { get; }
}

And then the ServiceManager class:

public sealed class ServiceManager : IServiceManager
{
    private readonly Lazy<ICompanyService> _companyService;
    private readonly Lazy<IEmployeeService> _employeeService;
    private readonly Lazy<IAuthenticationService> _authenticationService;

    public ServiceManager(IRepositoryManager repositoryManager, ILoggerManager logger,
        IMapper mapper, IEmployeeLinks employeeLinks,
        UserManager<User> userManager, IConfiguration configuration)
    {
        _companyService = new Lazy<ICompanyService>(() =>
            new CompanyService(repositoryManager, logger, mapper));
        _employeeService = new Lazy<IEmployeeService>(() =>
            new EmployeeService(repositoryManager, logger, mapper, employeeLinks));
        _authenticationService = new Lazy<IAuthenticationService>(() =>
            new AuthenticationService(logger, mapper, userManager, configuration));
    }

    public ICompanyService CompanyService => _companyService.Value;
    public IEmployeeService EmployeeService => _employeeService.Value;
    public IAuthenticationService AuthenticationService => _authenticationService.Value;
}

Finally, it is time to create the RegisterUser action:

[HttpPost]
[ServiceFilter(typeof(ValidationFilterAttribute))]
public async Task<IActionResult> RegisterUser([FromBody] UserForRegistrationDto userForRegistration)
{
    var result = await _service.AuthenticationService.RegisterUser(userForRegistration);
    if (!result.Succeeded)
    {
        foreach (var error in result.Errors)
        {
            ModelState.TryAddModelError(error.Code, error.Description);
        }

        return BadRequest(ModelState);
    }

    return StatusCode(201);
}

We apply our existing action filter for model validation on top of the action. Then, we call the RegisterUser method and accept the result. If the registration fails, we iterate through the errors, add each one to the ModelState, and return a BadRequest response. Otherwise, we return the 201 Created status code.

Before we continue with testing, we should increase the rate limit from 3 to 30 (in the ConfigureRateLimitingOptions method of the ServiceExtensions class) so that it doesn't get in our way while we're testing the different features of our application.

Now we can start testing. Let's send a valid request first:
https://localhost:5001/api/authentication

[Image: 201 Created response for a valid registration request]

And we get 201, which means that the user has been created and added to the role. We can send additional invalid requests to test our Action and Identity features.

If the model is invalid:

https://localhost:5001/api/authentication

[Image: error response for an invalid model]

If the password is invalid:
https://localhost:5001/api/authentication

[Image: error response for an invalid password]

Finally, if we want to create a user with the same user name and email:
https://localhost:5001/api/authentication

[Image: error response for a duplicate user name and email]

Excellent. Everything is working as planned. We can move on to the JWT implementation.

27.4 Big Picture

Before we get into the implementation of authentication and authorization, let’s have a quick look at the big picture. There is an application that has a login form. A user enters their username and password and presses the login button. After pressing the login button, a client (e.g., web browser) sends the user’s data to the server’s API endpoint:‌

[Image: the client sends the user's credentials to the server's API endpoint]

When the server validates the user's credentials and confirms that the user is valid, it sends an encoded JWT to the client. A JSON Web Token is an encoded string whose payload is a JSON object that can contain some attributes of the logged-in user: a username, a user subject, user roles, or some other useful information.

27.5 About JWT

JSON web tokens enable a secure way to transmit data between two parties in the form of a JSON object. It’s an open standard and it’s a popular mechanism for web authentication. In our case, we are going to use JSON web tokens to securely transfer a user’s data between the client and the server.‌

JSON web tokens consist of three basic parts: the header, the payload, and the signature.

One real example of a JSON web token:

[Image: an encoded JWT with the header, payload, and signature parts highlighted in different colors]

Each of the three parts is shown in a different color. The first part of a JWT is the header, a JSON object encoded in the Base64URL format. The header is a standard part of a JWT and we don't have to worry about it. It contains information like the type of the token and the name of the signing algorithm:

{ "alg": "HS256", "typ": "JWT" }

After the header, we have the payload, which is also a JSON object encoded in the Base64URL format. The payload contains some attributes about the logged-in user. For example, it can contain the user id, the user subject, and information about whether the user is an admin.

JSON web tokens are not encrypted and can be decoded with any base64 decoder, so please never include sensitive information in the Payload:

{ "sub": "1234567890", "name": "John Doe", "iat": 1516239022 }
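Because the payload is only Base64URL-encoded, anyone can read it without knowing the secret key. A minimal sketch that decodes a JWT segment using nothing but the standard library (a JWT library would normally do this for us):

```csharp
using System;
using System.Text;

public static class JwtPayloadReader
{
    // Decodes a single Base64URL-encoded JWT segment (header or payload)
    // back into its JSON text. No signature check is performed here.
    public static string DecodeSegment(string segment)
    {
        // Base64URL uses '-' and '_' instead of '+' and '/', and strips padding.
        var base64 = segment.Replace('-', '+').Replace('_', '/');
        switch (base64.Length % 4)
        {
            case 2: base64 += "=="; break;
            case 3: base64 += "="; break;
        }
        return Encoding.UTF8.GetString(Convert.FromBase64String(base64));
    }
}
```

Running DecodeSegment on the middle segment of any real token reveals its claims in plain JSON, which is exactly why sensitive information must never go into the payload.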

Finally, we have the signature part. The server uses the signature to verify that the token still contains exactly the information it originally issued. It is a digital signature generated from the combined header and payload, based on a secret key that only the server knows:

[Image: the signature is computed from the encoded header and payload using the server's secret key]

So, if malicious users try to modify the values in the payload, they have to recreate the signature; for that purpose, they need the secret key only known to the server. On the server side, we can easily verify if the values are original or not by comparing the original signature with a new signature computed from the values coming from the client.

So, we can easily verify the integrity of our data just by comparing the digital signatures. This is the reason why we use JWT.
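The comparison the server performs can be sketched as follows: recompute the HMAC-SHA256 of `header.payload` with the secret key and compare it to the signature presented by the client. This is a simplified illustration, not the full JwtSecurityTokenHandler validation logic:

```csharp
using System.Security.Cryptography;
using System.Text;

public static class SignatureCheckSketch
{
    // Recomputes HMACSHA256(encodedHeader + "." + encodedPayload, secretKey)
    // and compares it to the signature that arrived with the token.
    public static bool IsSignatureValid(string encodedHeader, string encodedPayload,
        byte[] presentedSignature, byte[] secretKey)
    {
        using var hmac = new HMACSHA256(secretKey);
        var computed = hmac.ComputeHash(
            Encoding.UTF8.GetBytes(encodedHeader + "." + encodedPayload));

        // Constant-time comparison avoids leaking information through timing.
        return CryptographicOperations.FixedTimeEquals(computed, presentedSignature);
    }
}
```

If a single character of the payload changes, the recomputed HMAC no longer matches and the token is rejected.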

27.6 JWT Configuration

Let’s start by modifying the appsettings.json file:‌

{
    "Logging": {
        "LogLevel": {
            "Default": "Information",
            "Microsoft.AspNetCore": "Warning"
        }
    },
    "ConnectionStrings": {
        "sqlConnection": "server=.; database=CompanyEmployee; Integrated Security=true"
    },
    "JwtSettings": {
        "validIssuer": "CodeMazeAPI",
        "validAudience": "https://localhost:5001"
    },
    "AllowedHosts": "*"
}

We just store the issuer and audience information in the appsettings.json file. We are going to talk more about that in a minute. As you probably remember, we require a secret key on the server-side. So, we are going to create one and store it in the environment variable because this is much safer than storing it inside the project.

To create an environment variable, we have to open the cmd window as an administrator and type the following command:

setx SECRET "CodeMazeSecretKey" /M

This is going to create a system environment variable with the name SECRET and the value CodeMazeSecretKey. By using /M we specify that we want a system variable and not local.
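In code, the key is then read back with Environment.GetEnvironmentVariable. A small sketch (the SecretKeyProvider class and its guard for a missing variable are hypothetical additions, not part of the project):

```csharp
using System;

public static class SecretKeyProvider
{
    // Reads the SECRET environment variable created with setx above.
    // Failing fast here gives a clearer error than a null-reference
    // failure later inside the JWT configuration.
    public static string GetSecretKey() =>
        Environment.GetEnvironmentVariable("SECRET")
        ?? throw new InvalidOperationException("The SECRET environment variable is not set.");
}
```

Note that setx only affects processes started after the variable is created, which is why a restart of Visual Studio is sometimes needed before the value becomes visible.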

Great.

We can now modify the ServiceExtensions class:

public static void ConfigureJWT(this IServiceCollection services, IConfiguration configuration)
{
    var jwtSettings = configuration.GetSection("JwtSettings");
    var secretKey = Environment.GetEnvironmentVariable("SECRET");

    services.AddAuthentication(opt =>
    {
        opt.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
        opt.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
    })
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidateAudience = true,
            ValidateLifetime = true,
            ValidateIssuerSigningKey = true,
            ValidIssuer = jwtSettings["validIssuer"],
            ValidAudience = jwtSettings["validAudience"],
            IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(secretKey))
        };
    });
}

First, we extract the JwtSettings section from the appsettings.json file and read our environment variable. (If you keep getting null for the secret key, try restarting Visual Studio or even your computer.)

Then, we register JWT authentication by calling the AddAuthentication method on the IServiceCollection interface. Next, we specify JwtBearerDefaults.AuthenticationScheme as both the default authentication scheme and the default challenge scheme. We also provide some parameters that will be used while validating the JWT. For this to work, we have to install the Microsoft.AspNetCore.Authentication.JwtBearer library.

For this to work, we require the following namespaces:

using Microsoft.AspNetCore.Authentication.JwtBearer; 
using Microsoft.AspNetCore.Identity;
using Microsoft.IdentityModel.Tokens; 
using System.Text;

Excellent. We’ve successfully configured the JWT authentication.

According to the configuration, the token is going to be valid if:

• The issuer is the actual server that created the token (ValidateIssuer=true)

• The receiver of the token is a valid recipient (ValidateAudience=true)

• The token has not expired (ValidateLifetime=true)

• The signing key is valid and is trusted by the server (ValidateIssuerSigningKey=true)

Additionally, we are providing values for the issuer, the audience, and the secret key that the server uses to generate the signature for JWT.

All we have to do is to call this method in the Program class:

builder.Services.ConfigureJWT(builder.Configuration);
builder.Services.AddAuthentication(); 
builder.Services.ConfigureIdentity();

And that is it. We can now protect our endpoints.

27.7 Protecting Endpoints

Let’s open the CompaniesController and add an additional attribute above the GetCompanies action:‌

[HttpGet(Name = "GetCompanies")]
[Authorize] 
public async Task<IActionResult> GetCompanies()

The [Authorize] attribute specifies that the action or controller that it is applied to requires authorization. For it to be available we need an additional namespace:

using Microsoft.AspNetCore.Authorization;

Now to test this, let’s send a request to get all companies:
https://localhost:5001/api/companies

[Image: 401 Unauthorized response for the request without a token]

We see the protection works. We get a 401 Unauthorized response, which is expected because an unauthorized user tried to access the protected endpoint. So, what we need is for our users to be authenticated and to have a valid token.

27.8 Implementing Authentication

Let’s begin with the UserForAuthenticationDto record:‌

public record UserForAuthenticationDto
{
    [Required(ErrorMessage = "User name is required")]
    public string? UserName { get; init; }

    [Required(ErrorMessage = "Password is required")]
    public string? Password { get; init; }
}

To continue, let’s modify the IAuthenticationService interface:

public interface IAuthenticationService
{
    Task<IdentityResult> RegisterUser(UserForRegistrationDto userForRegistration);
    Task<bool> ValidateUser(UserForAuthenticationDto userForAuth);
    Task<string> CreateToken();
}

Next, let’s add a private variable in the AuthenticationService class:

private readonly UserManager<User> _userManager;
private readonly IConfiguration _configuration;

private User? _user;

Before we continue to the interface implementation, we have to install System.IdentityModel.Tokens.Jwt library in the Service project. Then, we can implement the required methods:

public async Task<bool> ValidateUser(UserForAuthenticationDto userForAuth)
{
    _user = await _userManager.FindByNameAsync(userForAuth.UserName);

    var result = (_user != null && await _userManager.CheckPasswordAsync(_user, userForAuth.Password));
    if (!result)
        _logger.LogWarn($"{nameof(ValidateUser)}: Authentication failed. Wrong user name or password.");

    return result;
}

public async Task<string> CreateToken()
{
    var signingCredentials = GetSigningCredentials();
    var claims = await GetClaims();
    var tokenOptions = GenerateTokenOptions(signingCredentials, claims);

    return new JwtSecurityTokenHandler().WriteToken(tokenOptions);
}

private SigningCredentials GetSigningCredentials()
{
    var key = Encoding.UTF8.GetBytes(Environment.GetEnvironmentVariable("SECRET"));
    var secret = new SymmetricSecurityKey(key);

    return new SigningCredentials(secret, SecurityAlgorithms.HmacSha256);
}

private async Task<List<Claim>> GetClaims()
{
    var claims = new List<Claim>
    {
        new Claim(ClaimTypes.Name, _user.UserName)
    };

    var roles = await _userManager.GetRolesAsync(_user);
    foreach (var role in roles)
    {
        claims.Add(new Claim(ClaimTypes.Role, role));
    }

    return claims;
}

private JwtSecurityToken GenerateTokenOptions(SigningCredentials signingCredentials, List<Claim> claims)
{
    var jwtSettings = _configuration.GetSection("JwtSettings");

    var tokenOptions = new JwtSecurityToken
    (
        issuer: jwtSettings["validIssuer"],
        audience: jwtSettings["validAudience"],
        claims: claims,
        expires: DateTime.Now.AddMinutes(Convert.ToDouble(jwtSettings["expires"])),
        signingCredentials: signingCredentials
    );

    return tokenOptions;
}

For this to work, we require a few more namespaces:

using System.IdentityModel.Tokens.Jwt; 
using Microsoft.IdentityModel.Tokens; 
using System.Text;
using System.Security.Claims;

Now we can explain the code.

In the ValidateUser method, we fetch the user from the database and check whether they exist and if the password matches. The UserManager class provides the FindByNameAsync method to find the user by user name and the CheckPasswordAsync to verify the user’s password against the hashed password from the database. If the check result is false, we log a message about failed authentication. Lastly, we return the result.

The CreateToken method does exactly that — it creates a token. It does that by collecting information from the private methods and serializing token options with the WriteToken method.

We have three private methods as well. The GetSigningCredentials method returns our secret key as a byte array together with the security algorithm. The GetClaims method creates a list of claims containing the user name and all the roles the user belongs to. The last method, GenerateTokenOptions, creates an object of the JwtSecurityToken type with all of the required options. We can see the expires parameter as one of the token options. We want to extract it from the appsettings.json file as well, but we don't have it there yet. So, we have to add it:

"JwtSettings": {
    "validIssuer": "CodeMazeAPI",
    "validAudience": "https://localhost:5001",
    "expires": 5
}

Finally, we have to add a new action in the AuthenticationController:

[HttpPost("login")]
[ServiceFilter(typeof(ValidationFilterAttribute))]
public async Task<IActionResult> Authenticate([FromBody] UserForAuthenticationDto user)
{
    if (!await _service.AuthenticationService.ValidateUser(user))
        return Unauthorized();

    return Ok(new { Token = await _service.AuthenticationService.CreateToken() });
}

There is nothing special in this controller. If validation fails, we return the 401 Unauthorized response; otherwise, we return our created token:

https://localhost:5001/api/authentication/login

[Image: 200 OK response containing the generated token]

Excellent. We can see our token generated. Now, let’s send invalid credentials:
https://localhost:5001/api/authentication/login

[Image: 401 Unauthorized response for invalid credentials]

And we get a 401 Unauthorized response.

Right now, if we send a request to the GetCompanies action, we are still going to get the 401 Unauthorized response even though authentication succeeded. That's because we didn't provide our token in the request header, so our API has nothing to authorize against. To solve that, we are going to create another GET request and, in the Authorization tab, choose the Bearer Token type and paste the token from the previous request:

https://localhost:5001/api/companies

[Image: adding the Bearer token to the Authorization header in Postman]

Now, we can send the request again:

https://localhost:5001/api/companies

[Image: 200 OK response with the list of companies]

Excellent. It works like a charm.

27.9 Role-Based Authorization

Right now, even though authentication and authorization are working as expected, every single authenticated user can access the GetCompanies action. What if we don’t want that type of behavior? For example, we want to allow only managers to access it. To do that, we have to make one simple change:‌

[HttpGet(Name = "GetCompanies")] 
[Authorize(Roles = "Manager")] 
public async Task<IActionResult> GetCompanies()

And that is it. To test this, let’s create another user with the Administrator role (the second role from the database):

[Image: registering a new user with the Administrator role]

We get 201. After we send an authentication request for Jane Doe, we are going to get a new token. Let’s use that token to send the request towards the GetCompanies action:

https://localhost:5001/api/companies

[Image: 403 Forbidden response for a user without the Manager role]

We get a 403 Forbidden response because this user is not allowed to access this endpoint. If we log in with John Doe and use his token, we are going to get a successful response. Of course, we don't have to place the Authorize attribute only on top of an action; we can place it at the controller level as well. For example, placing [Authorize] on the controller allows only authenticated users to access any action in that controller, and we can additionally place [Authorize(Roles = …)] on top of any action to state that only a user with that specific role has access to it.

One more thing. Our token expires five minutes after creation. So, if we try to send another request after that period, we are going to get the 401 Unauthorized status for sure. (We may have to wait up to five additional minutes, because token validation allows a clock tolerance to account for time differences between servers; this can be overridden with the ClockSkew property of the TokenValidationParameters object.) Feel free to try.
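The default ClockSkew in TokenValidationParameters is five minutes, which is why an expired token keeps working for a short while. To make expiration exact, it can be set to zero inside the existing ConfigureJWT method (a configuration sketch; jwtSettings and secretKey are the variables already defined there):

```csharp
// Setting ClockSkew removes the default five-minute tolerance,
// so the token is rejected as soon as its "exp" claim passes.
options.TokenValidationParameters = new TokenValidationParameters
{
    ValidateIssuer = true,
    ValidateAudience = true,
    ValidateLifetime = true,
    ValidateIssuerSigningKey = true,
    ValidIssuer = jwtSettings["validIssuer"],
    ValidAudience = jwtSettings["validAudience"],
    IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(secretKey)),
    ClockSkew = TimeSpan.Zero
};
```

This is a trade-off: zero skew makes testing expiration easier, but a small tolerance is often kept in production to absorb clock drift between machines.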

28 REFRESH TOKEN

In this chapter, we are going to learn about refresh tokens and their use in modern web application development.‌

In the previous chapter, we created a flow where a user logs in and gets an access token to access protected resources; after the token expires, the user has to log in again to obtain a new valid token:

[Image: the basic authentication flow with an access token only]

This flow is great and is used by many enterprise applications.

But sometimes we have a requirement not to force our users to log in every single time the token expires. For that, we can use a refresh token.

Refresh tokens are credentials that can be used to acquire new access tokens. When an access token expires, we can use a refresh token to get a new access token from the authentication component. The lifetime of a refresh token is usually set much longer compared to the lifetime of an access token.

Let’s introduce the refresh token to our authentication workflow:

[Image: the authentication flow extended with a refresh token]

  1. First, the client authenticates with the authentication component by providing the credentials.

  2. Then, the authentication component issues the access token and the refresh token.

  3. After that, the client requests the resource endpoints for a protected resource by providing the access token.

  4. The resource endpoint validates the access token and provides a protected resource.

  5. Steps 3 & 4 keep on repeating until the access token expires.

  6. Once the access token expires, the client requests a new access token by providing the refresh token.

  7. The authentication component issues a new access token and refresh token.

  8. Steps 3 through 7 keep on repeating until the refresh token expires.

  9. Once the refresh token expires, the client needs to authenticate with the authentication server once again and the flow repeats from step 1.

28.1 Why Do We Need a Refresh Token

So, why do we need both access tokens and refresh tokens? Why don't we just set a long expiration date, like a month or a year, for the access tokens? Because if we do that and someone manages to get hold of our access token, they can use it for a long period, even if we change our password!

The idea of refresh tokens is that we can make the access token short-lived so that, even if it is compromised, the attacker gets access only for a shorter period. With refresh token-based flow, the authentication server issues a one-time use refresh token along with the access token. The app stores the refresh token safely.

Every time the app sends a request to the server it sends the access token in the Authorization header and the server can identify the app using it. Once the access token expires, the server will send a token expired response. Once the app receives the token expired response, it sends the expired access token and the refresh token to obtain a new access token and a refresh token.

If something goes wrong, the refresh token can be revoked which means that when the app tries to use it to get a new access token, that request will be rejected and the user will have to enter credentials once again and authenticate.

Thus, refresh tokens help in a smooth authentication workflow without the need for users to submit their credentials frequently, and at the same time, without compromising the security of the app.
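The server-side bookkeeping behind this flow boils down to: store a refresh token with an expiry next to the user, and honor it only while it matches and has not expired. A self-contained sketch of that check (the RefreshTokenRecordSketch class is purely illustrative; the real implementation later in this chapter stores these values on the User entity):

```csharp
using System;

public sealed class RefreshTokenRecordSketch
{
    public string? RefreshToken { get; set; }
    public DateTime RefreshTokenExpiryTime { get; set; }

    // A presented refresh token is honored only if it matches the stored
    // one and has not expired; otherwise, the user must log in again.
    public bool CanRefresh(string presentedToken, DateTime now) =>
        RefreshToken == presentedToken && RefreshTokenExpiryTime > now;
}
```

Revocation then becomes trivial: clearing the stored RefreshToken value makes every future CanRefresh check fail, forcing re-authentication.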

28.2 Refresh Token Implementation

So far, we have learned the concept of refresh tokens. Now, let's dig into the implementation part.

The first thing we have to do is to modify the User class:

public class User : IdentityUser
{
    public string? FirstName { get; set; }
    public string? LastName { get; set; }
    public string? RefreshToken { get; set; }
    public DateTime RefreshTokenExpiryTime { get; set; }
}

Here we add two additional properties, which we are going to add to the AspNetUsers table.

To do that, we have to create and execute another migration:

Add-Migration AdditionalUserFiledsForRefreshToken

If for some reason you get the message that you need to review your migration due to possible data loss, you should inspect the migration file and leave only the code that adds and removes our additional columns:

protected override void Up(MigrationBuilder migrationBuilder)
{
    migrationBuilder.AddColumn<string>(
        name: "RefreshToken",
        table: "AspNetUsers",
        type: "nvarchar(max)",
        nullable: true);

    migrationBuilder.AddColumn<DateTime>(
        name: "RefreshTokenExpiryTime",
        table: "AspNetUsers",
        type: "datetime2",
        nullable: false,
        defaultValue: new DateTime(1, 1, 1, 0, 0, 0, 0, DateTimeKind.Unspecified));
}

protected override void Down(MigrationBuilder migrationBuilder)
{
    migrationBuilder.DropColumn(
        name: "RefreshToken",
        table: "AspNetUsers");

    migrationBuilder.DropColumn(
        name: "RefreshTokenExpiryTime",
        table: "AspNetUsers");
}

Also, you should open the RepositoryContextModelSnapshot file, find the AspNetRoles part and revert the Ids of both roles to the previous values:

b.ToTable("AspNetRoles", (string)null);

b.HasData(
    new
    {
        Id = "4ac8240a-8498-4869-bc86-60e5dc982d27",
        ConcurrencyStamp = "ec511bd4-4853-426a-a2fc-751886560c9a",
        Name = "Manager",
        NormalizedName = "MANAGER"
    },
    new
    {
        Id = "562419f5-eed1-473b-bcc1-9f2dbab182b4",
        ConcurrencyStamp = "937e9988-9f49-4bab-a545-b422dde85016",
        Name = "Administrator",
        NormalizedName = "ADMINISTRATOR"
    });

After that is done, we can execute our migration with the Update-Database command. This will add two additional columns to the AspNetUsers table.

To continue, let’s create a new record in the Shared/DataTransferObjects folder:

public record TokenDto(string AccessToken, string RefreshToken);

Next, we are going to modify the IAuthenticationService interface:

public interface IAuthenticationService
{
    Task<IdentityResult> RegisterUser(UserForRegistrationDto userForRegistration);
    Task<bool> ValidateUser(UserForAuthenticationDto userForAuth);
    Task<TokenDto> CreateToken(bool populateExp);
}

Then, we have to implement two new methods in the AuthenticationService class:

private string GenerateRefreshToken()
{
    var randomNumber = new byte[32];
    using (var rng = RandomNumberGenerator.Create())
    {
        rng.GetBytes(randomNumber);
        return Convert.ToBase64String(randomNumber);
    }
}

private ClaimsPrincipal GetPrincipalFromExpiredToken(string token)
{
    var jwtSettings = _configuration.GetSection("JwtSettings");

    var tokenValidationParameters = new TokenValidationParameters
    {
        ValidateAudience = true,
        ValidateIssuer = true,
        ValidateIssuerSigningKey = true,
        IssuerSigningKey = new SymmetricSecurityKey(
            Encoding.UTF8.GetBytes(Environment.GetEnvironmentVariable("SECRET"))),
        ValidateLifetime = true,
        ValidIssuer = jwtSettings["validIssuer"],
        ValidAudience = jwtSettings["validAudience"]
    };

    var tokenHandler = new JwtSecurityTokenHandler();
    SecurityToken securityToken;
    var principal = tokenHandler.ValidateToken(token, tokenValidationParameters, out securityToken);

    var jwtSecurityToken = securityToken as JwtSecurityToken;
    if (jwtSecurityToken == null ||
        !jwtSecurityToken.Header.Alg.Equals(SecurityAlgorithms.HmacSha256,
            StringComparison.InvariantCultureIgnoreCase))
    {
        throw new SecurityTokenException("Invalid token");
    }

    return principal;
}

GenerateRefreshToken contains the logic to generate the refresh token. We use the RandomNumberGenerator class to generate a cryptographic random number for this purpose.

GetPrincipalFromExpiredToken is used to get the user principal from the expired access token. We make use of the ValidateToken method from the JwtSecurityTokenHandler class for this purpose. This method validates the token and returns the ClaimsPrincipal object.

After that, to generate a refresh token and the expiry date for the logged-in user, and to return both the access token and refresh token to the caller, we have to modify the CreateToken method in the same class:

public async Task<TokenDto> CreateToken(bool populateExp)
{
    var signingCredentials = GetSigningCredentials();
    var claims = await GetClaims();
    var tokenOptions = GenerateTokenOptions(signingCredentials, claims);

    var refreshToken = GenerateRefreshToken();
    _user.RefreshToken = refreshToken;

    if (populateExp)
        _user.RefreshTokenExpiryTime = DateTime.Now.AddDays(7);

    await _userManager.UpdateAsync(_user);

    var accessToken = new JwtSecurityTokenHandler().WriteToken(tokenOptions);

    return new TokenDto(accessToken, refreshToken);
}

Finally, we have to modify the Authenticate action:

[HttpPost("login")]
[ServiceFilter(typeof(ValidationFilterAttribute))]
public async Task<IActionResult> Authenticate([FromBody] UserForAuthenticationDto user)
{
    if (!await _service.AuthenticationService.ValidateUser(user))
        return Unauthorized();

    var tokenDto = await _service.AuthenticationService.CreateToken(populateExp: true);

    return Ok(tokenDto);
}

That’s it regarding the action modification.

Now, we can test this by sending the POST request from Postman:
https://localhost:5001/api/authentication/login


We can see the successful authentication and both our tokens. Additionally, if we inspect the database, we are going to find populated RefreshToken and Expiry columns for JDoe:


It is a good practice to have a separate endpoint for the refresh token‌ action, and that’s exactly what we are going to do now.

Let’s start by creating a new TokenController in the Presentation project:

[Route("api/token")]
[ApiController]
public class TokenController : ControllerBase
{
    private readonly IServiceManager _service;

    public TokenController(IServiceManager service) => _service = service;
}

Before we continue with the controller modification, we are going to modify the IAuthenticationService interface:

public interface IAuthenticationService
{
    Task<IdentityResult> RegisterUser(UserForRegistrationDto userForRegistration);
    Task<bool> ValidateUser(UserForAuthenticationDto userForAuth);
    Task<TokenDto> CreateToken(bool populateExp);
    Task<TokenDto> RefreshToken(TokenDto tokenDto);
}

And to implement this method:

public async Task<TokenDto> RefreshToken(TokenDto tokenDto)
{
    var principal = GetPrincipalFromExpiredToken(tokenDto.AccessToken);

    var user = await _userManager.FindByNameAsync(principal.Identity.Name);
    if (user == null || user.RefreshToken != tokenDto.RefreshToken ||
        user.RefreshTokenExpiryTime <= DateTime.Now)
        throw new RefreshTokenBadRequest();

    _user = user;

    return await CreateToken(populateExp: false);
}

We first extract the principal from the expired token and use the Identity.Name property, which holds the user's username, to fetch that user from the database. If the user doesn't exist, the refresh tokens are not equal, or the refresh token has expired, we stop the flow and return a BadRequest response to the user. Otherwise, we populate the _user variable and call the CreateToken method to generate new access and refresh tokens. This time, we don't want to update the expiry time of the refresh token, so we send false as a parameter.

Since we don’t have the RefreshTokenBadRequest class, let’s create it in the Entities\Exceptions folder:

public sealed class RefreshTokenBadRequest : BadRequestException
{
    public RefreshTokenBadRequest()
        : base("Invalid client request. The tokenDto has some invalid values.")
    {
    }
}

Then, we have to add the required using directive in the AuthenticationService class to resolve the compiler error.

Finally, let’s add one more action in the TokenController:

[HttpPost("refresh")]
[ServiceFilter(typeof(ValidationFilterAttribute))]
public async Task<IActionResult> Refresh([FromBody] TokenDto tokenDto)
{
    var tokenDtoToReturn = await _service.AuthenticationService.RefreshToken(tokenDto);

    return Ok(tokenDtoToReturn);
}

That’s it.

Our refresh token logic is prepared and ready for testing.

Let’s first send the POST authentication request:
https://localhost:5001/api/authentication/login


As before, we have both tokens in the response body.

Now, let’s send the POST refresh request with these tokens as the request body:
https://localhost:5001/api/token/refresh


And we can see new tokens in the response body. Additionally, if we inspect the database, we will find the same refresh token value:


Usually, in your client application, you would inspect the exp claim of the access token and if it is about to expire, your client app would send the request to the api/token endpoint and get a new set of valid tokens.
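Such a client-side check might look like the sketch below. This is an assumption about the consuming application, not code from our API project; the helper names and the hand-crafted token are made up for illustration. Note that the payload is decoded without any signature validation, which is fine here because the client only needs to read the exp claim, not to trust it:

```csharp
using System;
using System.Text;
using System.Text.Json;

// Reads the exp (expiration, Unix seconds) claim from a JWT payload.
static long GetExpClaim(string jwt)
{
    var payload = jwt.Split('.')[1];
    // Convert Base64Url to regular Base64: restore '+', '/', and the padding.
    payload = payload.Replace('-', '+').Replace('_', '/');
    switch (payload.Length % 4)
    {
        case 2: payload += "=="; break;
        case 3: payload += "="; break;
    }
    var json = Encoding.UTF8.GetString(Convert.FromBase64String(payload));
    using var doc = JsonDocument.Parse(json);
    return doc.RootElement.GetProperty("exp").GetInt64();
}

// True when the token expires within the given safety window,
// i.e., when the client should call the api/token endpoint.
static bool ShouldRefresh(string jwt, TimeSpan safetyWindow)
{
    var expiresAt = DateTimeOffset.FromUnixTimeSeconds(GetExpClaim(jwt));
    return expiresAt - DateTimeOffset.UtcNow < safetyWindow;
}

// A hand-crafted, long-expired token (exp = 0, i.e., 1970-01-01).
var header = Convert.ToBase64String(Encoding.UTF8.GetBytes("{\"alg\":\"HS256\"}"));
var body = Convert.ToBase64String(Encoding.UTF8.GetBytes("{\"exp\":0}"));
var expiredToken = $"{header}.{body}.fake-signature";

Console.WriteLine(ShouldRefresh(expiredToken, TimeSpan.FromMinutes(1))); // True
```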

29 BINDING CONFIGURATION AND OPTIONS PATTERN

In the previous chapter, we had to use our appsettings file to store some important values for our JWT configuration and read those values from it:‌

"JwtSettings": {
    "validIssuer": "CodeMazeAPI",
    "validAudience": "https://localhost:5001",
    "expires": 5
},

To access these values, we’ve used the GetSection method from the IConfiguration interface:

var jwtSettings = configuration.GetSection("JwtSettings");

The GetSection method gets a sub-section from the appsettings file based on the provided key.

Once we extracted the sub-section, we’ve accessed the specific values by using the jwtSettings variable of type IConfigurationSection, with the key provided inside the square brackets:

ValidIssuer = jwtSettings["validIssuer"],

This works great but it does have its flaws.

Having to type sections and keys to fetch values is repetitive and error-prone. A mistyped section or key doesn't throw; the configuration system simply returns null, so such a bug can go unnoticed for a long time, especially when someone else introduces it.
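This silent failure is easy to demonstrate with a tiny mimic of the configuration indexer. This is a standalone sketch, not the real API; the real IConfiguration flattens sections into "Section:key" paths in much the same way, and its indexer likewise returns null rather than throwing:

```csharp
using System;
using System.Collections.Generic;

// Configuration values behave like a flat dictionary of "Section:key" paths.
var settings = new Dictionary<string, string>
{
    ["JwtSettings:validIssuer"] = "CodeMazeAPI",
    ["JwtSettings:validAudience"] = "https://localhost:5001",
};

// Mimics the IConfiguration indexer: an unknown key yields null, not an exception.
string Lookup(string key) => settings.TryGetValue(key, out var value) ? value : null;

Console.WriteLine(Lookup("JwtSettings:validIssuer"));          // CodeMazeAPI
Console.WriteLine(Lookup("JwtSettings:vallidIssuer") == null); // True: the typo fails silently
```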

To overcome this problem, we can bind the configuration data to strongly typed objects. To do that, we can use the Bind method.

29.1 Binding Configuration

To start with the binding process, we are going to create a new ConfigurationModels folder inside the Entities project, and a new JwtConfiguration class inside that folder:‌

public class JwtConfiguration
{
    public string Section { get; set; } = "JwtSettings";
    public string? ValidIssuer { get; set; }
    public string? ValidAudience { get; set; }
    public string? Expires { get; set; }
}

Then in the ServiceExtensions class, we are going to modify the ConfigureJWT method:

public static void ConfigureJWT(this IServiceCollection services, IConfiguration configuration)
{
    var jwtConfiguration = new JwtConfiguration();
    configuration.Bind(jwtConfiguration.Section, jwtConfiguration);

    var secretKey = Environment.GetEnvironmentVariable("SECRET");

    services.AddAuthentication(opt =>
    {
        opt.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
        opt.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
    })
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidateAudience = true,
            ValidateLifetime = true,
            ValidateIssuerSigningKey = true,
            ValidIssuer = jwtConfiguration.ValidIssuer,
            ValidAudience = jwtConfiguration.ValidAudience,
            IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(secretKey))
        };
    });
}

We create a new instance of the JwtConfiguration class and use the Bind method that accepts the section name and the instance object as parameters, to bind to the JwtSettings section directly and map configuration values to respective properties inside the JwtConfiguration class. Then, we just use those properties instead of string keys inside square brackets, to access required values.
There are two things to note here though. The first is that the names of the configuration data keys and class properties must match. The other is that if you extend the configuration, you need to extend the class as well, which can be a bit cumbersome, but it beats getting values by typing strings.

Now, we can continue with the AuthenticationService class modification since we extract configuration values in two methods from this class:

...
private readonly JwtConfiguration _jwtConfiguration;
private User? _user;

public AuthenticationService(ILoggerManager logger, IMapper mapper,
    UserManager<User> userManager, IConfiguration configuration)
{
    _logger = logger;
    _mapper = mapper;
    _userManager = userManager;
    _configuration = configuration;
    _jwtConfiguration = new JwtConfiguration();
    _configuration.Bind(_jwtConfiguration.Section, _jwtConfiguration);
}

So, we add a readonly field, create an instance, and execute the binding inside the constructor.

And since we’re using the Bind() method we need to install the Microsoft.Extensions.Configuration.Binder NuGet package.

After that, we can modify the GetPrincipalFromExpiredToken method by removing the GetSection part and modifying the TokenValidationParameters object creation:

private ClaimsPrincipal GetPrincipalFromExpiredToken(string token)
{
    var tokenValidationParameters = new TokenValidationParameters
    {
        ValidateAudience = true,
        ValidateIssuer = true,
        ValidateIssuerSigningKey = true,
        IssuerSigningKey = new SymmetricSecurityKey(
            Encoding.UTF8.GetBytes(Environment.GetEnvironmentVariable("SECRET"))),
        ValidateLifetime = true,
        ValidIssuer = _jwtConfiguration.ValidIssuer,
        ValidAudience = _jwtConfiguration.ValidAudience
    };
    ...
    return principal;
}

And let’s do a similar thing for the GenerateTokenOptions method:

private JwtSecurityToken GenerateTokenOptions(SigningCredentials signingCredentials, List<Claim> claims)
{
    var tokenOptions = new JwtSecurityToken
    (
        issuer: _jwtConfiguration.ValidIssuer,
        audience: _jwtConfiguration.ValidAudience,
        claims: claims,
        expires: DateTime.Now.AddMinutes(Convert.ToDouble(_jwtConfiguration.Expires)),
        signingCredentials: signingCredentials
    );

    return tokenOptions;
}

Excellent.

At this point, we can start our application and use both requests from Postman’s collection - 28-Refresh Token - to test our configuration.

We should get the same responses as we did in the previous chapter, which proves that our configuration works as intended, but now with cleaner, less error-prone code.

29.2 Options Pattern

In the previous section, we’ve seen how we can bind configuration data to strongly typed objects. The options pattern gives us similar possibilities, but it offers a more structured approach and more features like validation, live reloading, and easier testing.‌

Once we configure the class containing our configuration, we can inject it via dependency injection with IOptions, injecting only the part of the configuration that we actually need.

If we need to reload the configuration without stopping the application, we can use the IOptionsSnapshot interface or the IOptionsMonitor interface depending on the situation. We’ll see when these interfaces should be used and why.

The options pattern also provides a good validation mechanism that uses the widely used DataAnnotations attributes to check whether the configuration abides by the logical rules of our application.

Testing options is also easy, thanks to the helper methods and options classes that are simple to mock.

29.2.1 Using IOptions‌

We have already written a lot of code in the previous section that can be used with the IOptions interface, but we still have some more actions to do.

The first thing we are going to do is to register and configure the JwtConfiguration class in the ServiceExtensions class:

public static void AddJwtConfiguration(this IServiceCollection services, IConfiguration configuration) =>
    services.Configure<JwtConfiguration>(configuration.GetSection("JwtSettings"));

And call this method in the Program class:

builder.Services.ConfigureJWT(builder.Configuration);
builder.Services.AddJwtConfiguration(builder.Configuration);

Since we can use IOptions with DI, we are going to modify the ServiceManager class to support that:

public ServiceManager(IRepositoryManager repositoryManager, ILoggerManager logger,
    IMapper mapper, IEmployeeLinks employeeLinks, UserManager<User> userManager,
    IOptions<JwtConfiguration> configuration)

We just replace the IConfiguration type with the IOptions type in the constructor.

For this, we need two additional namespaces:

using Entities.ConfigurationModels; 
using Microsoft.Extensions.Options;

Then, we can modify the AuthenticationService’s constructor:

private readonly ILoggerManager _logger;
private readonly IMapper _mapper;
private readonly UserManager<User> _userManager;
private readonly IOptions<JwtConfiguration> _configuration;
private readonly JwtConfiguration _jwtConfiguration;
private User? _user;

public AuthenticationService(ILoggerManager logger, IMapper mapper,
    UserManager<User> userManager, IOptions<JwtConfiguration> configuration)
{
    _logger = logger;
    _mapper = mapper;
    _userManager = userManager;
    _configuration = configuration;
    _jwtConfiguration = _configuration.Value;
}

And that’s it.

We inject IOptions inside the constructor and use the Value property to extract the JwtConfiguration object with all the populated properties. Nothing else has to change in this class.

If we start the application again and send the same requests, we will still get valid results meaning that we’ve successfully implemented IOptions in our project.

One more thing. We didn’t modify anything inside the ServiceExtensions/ConfigureJWT method. That’s because this configuration happens during the service registration and not after services are built. This means that we can’t resolve our required service here.

Well, to be precise, we can use the BuildServiceProvider method to build a service provider containing all the services from the provided IServiceCollection, and thus being able to access the required service. But if you do that, you will create one more list of singleton services, which can be quite expensive depending on the size of your application. So, you should be careful with this method.

That said, using binding to access configuration values is perfectly safe and cheap at this stage of the application's lifetime.

29.2.2 IOptionsSnapshot and IOptionsMonitor‌

The previous code looks great, but if we want to change the value of Expires from 5 to 10, for example, we need to restart the application to do it. You can imagine how useful it would be to have a published application where all you need to do is modify a value in the configuration file, without restarting the whole app.

Well, there is a way to do it by using IOptionsSnapshot or IOptionsMonitor.

All we would have to do is replace the IOptions type with IOptionsSnapshot or IOptionsMonitor inside the ServiceManager and AuthenticationService classes. Also, if we use IOptionsMonitor, we can't use the Value property; we have to use CurrentValue instead.

So the main difference between these two interfaces is that the IOptionsSnapshot service is registered as a scoped service and thus can’t be injected inside the singleton service. On the other hand, IOptionsMonitor is registered as a singleton service and can be injected into any service lifetime.

To make the comparison even clearer, we have prepared the following list for you:

IOptions:

• Is the original Options interface and it’s better than binding the whole Configuration

• Does not support configuration reloading

• Is registered as a singleton service and can be injected anywhere

• Binds the configuration values only once at the registration, and returns the same values every time

• Does not support named options

IOptionsSnapshot:

• Registered as a scoped service

• Supports configuration reloading

• Cannot be injected into singleton services

• Values reload per request

• Supports named options

IOptionsMonitor:

• Registered as a singleton service

• Supports configuration reloading

• Can be injected into any service lifetime

• Values are cached and reloaded immediately

• Supports named options

Having said that, we can see that if we don't need live reloading or named options, we can simply use IOptions. If we do, we can use either IOptionsSnapshot or IOptionsMonitor, but only IOptionsMonitor can be injected into other singleton services.

We have mentioned Named Options a couple of times so let’s explain what that is.

Let's assume, just for example's sake, that we have a configuration like this one:

"JwtSettings": {
    "validIssuer": "CodeMazeAPI",
    "validAudience": "https://localhost:5001",
    "expires": 5
},
"JwtAPI2Settings": {
    "validIssuer": "CodeMazeAPI2",
    "validAudience": "https://localhost:5002",
    "expires": 10
},

Instead of creating a new JwtConfiguration2 class that has the same properties as our existing JwtConfiguration class, we can add another configuration:

services.Configure<JwtConfiguration>("JwtSettings", configuration.GetSection("JwtSettings"));
services.Configure<JwtConfiguration>("JwtAPI2Settings", configuration.GetSection("JwtAPI2Settings"));

Now both sections are mapped to the same configuration class, which makes sense. We don’t want to create multiple classes with the same properties and just name them differently. This is a much better way of doing it.

Calling the specific option is now done using the Get method with a section name as a parameter instead of the Value or CurrentValue properties:

_jwtConfiguration = _configuration.Get("JwtSettings");

That’s it. All the rest is the same.

30 DOCUMENTING API WITH SWAGGER

Developers who consume our API might be trying to solve important business problems with it. Hence, it is very important for them to understand how to use our API effectively. This is where API documentation comes into the picture.‌

API documentation is the process of giving instructions on how to effectively use and integrate an API. Hence, it can be thought of as a concise reference manual containing all the information required to work with the API, with details about functions, classes, return types, arguments, and more, supported by tutorials and examples.

So, having the proper documentation for our API enables consumers to integrate our APIs as quickly as possible and move forward with their development. Furthermore, this also helps them understand the value and usage of our API, improves the chances for our API’s adoption, and makes our APIs easier to maintain and support.

30.1 About Swagger

Swagger is a language-agnostic specification for describing REST APIs. Swagger is also referred to as OpenAPI. It allows us to understand the capabilities of a service without looking at the actual implementation code.‌

Swagger minimizes the amount of work needed while integrating an API. Similarly, it also helps API developers document their APIs quickly and accurately.

Swagger Specification is an important part of the Swagger flow. By default, a document named swagger.json is generated by the Swagger tool which is based on our API. It describes the capabilities of our API and how to access it via HTTP.

30.2 Swagger Integration Into Our Project

We can use the Swashbuckle package to easily integrate Swagger into our‌ .NET Core Web API project. It will generate the Swagger specification for the project as well. Additionally, the Swagger UI is also contained within Swashbuckle.

There are three main components in the Swashbuckle package:

• Swashbuckle.AspNetCore.Swagger: This contains the Swagger object model and the middleware to expose SwaggerDocument objects as JSON.
• Swashbuckle.AspNetCore.SwaggerGen: A Swagger generator that builds SwaggerDocument objects directly from our routes, controllers, and models.
• Swashbuckle.AspNetCore.SwaggerUI: An embedded version of the Swagger UI tool. It interprets Swagger JSON to build a rich, customizable experience for describing web API functionality.

So, the first thing we are going to do is to install the required library in the main project. Let’s open the Package Manager Console window and type the following command:

PM> Install-Package Swashbuckle.AspNetCore

After a couple of seconds, the package will be installed. Now, we have to configure the Swagger Middleware. To do that, we are going to add a new method in the ServiceExtensions class:

public static void ConfigureSwagger(this IServiceCollection services)
{
    services.AddSwaggerGen(s =>
    {
        s.SwaggerDoc("v1", new OpenApiInfo { Title = "Code Maze API", Version = "v1" });
        s.SwaggerDoc("v2", new OpenApiInfo { Title = "Code Maze API", Version = "v2" });
    });
}

We are creating two versions of SwaggerDoc because if you remember, we have two versions for the Companies controller and we want to separate them in our documentation.

Also, we need an additional namespace:

using Microsoft.OpenApi.Models;

The next step is to call this method in the Program class:

builder.Services.ConfigureSwagger();

And in the middleware part of the class, we are going to add it to the application’s execution pipeline together with the UI feature:

app.UseSwagger();
app.UseSwaggerUI(s =>
{
    s.SwaggerEndpoint("/swagger/v1/swagger.json", "Code Maze API v1");
    s.SwaggerEndpoint("/swagger/v2/swagger.json", "Code Maze API v2");
});

Finally, let’s slightly modify the Companies and CompaniesV2 controllers:

[Route("api/companies")]
[ApiController]
[ApiExplorerSettings(GroupName = "v1")]
public class CompaniesController : ControllerBase

[Route("api/companies")]
[ApiController]
[ApiExplorerSettings(GroupName = "v2")]
public class CompaniesV2Controller : ControllerBase

With this change, we state that the CompaniesController belongs to group v1 and the CompaniesV2Controller belongs to group v2. All the other controllers will be included in both groups because they are not versioned, which is exactly what we want.

And that is all. We have prepared the basic configuration.

Now, we can start our app, open the browser, and navigate to https://localhost:5001/swagger/v1/swagger.json. Once the page is up, you are going to see a JSON document containing all the controllers and actions except the v2 companies controller. Of course, if you change v1 to v2 in the URL, you are going to see all the controllers, including v2 companies but without v1 companies.

Additionally, let’s navigate to https://localhost:5001/swagger/index.html:


Also if we expand the Schemas part, we are going to find the DTOs that we used in our project.

If we click on a specific controller to expand its details, we are going to see all the actions inside:


Once we click on an action method, we can see detailed information like parameters, response, and example values. There is also an option to try out each of those action methods by clicking the Try it out button.

So, let’s try it with the /api/companies action:


Once we click the Execute button, we are going to see that we get our response:


And this is an expected response. We are not authorized. To enable authorization, we have to add some modifications.

30.3 Adding Authorization Support

To add authorization support, we need to modify the ConfigureSwagger‌ method:

public static void ConfigureSwagger(this IServiceCollection services)
{
    services.AddSwaggerGen(s =>
    {
        s.SwaggerDoc("v1", new OpenApiInfo { Title = "Code Maze API", Version = "v1" });
        s.SwaggerDoc("v2", new OpenApiInfo { Title = "Code Maze API", Version = "v2" });

        s.AddSecurityDefinition("Bearer", new OpenApiSecurityScheme
        {
            In = ParameterLocation.Header,
            Description = "Place to add JWT with Bearer",
            Name = "Authorization",
            Type = SecuritySchemeType.ApiKey,
            Scheme = "Bearer"
        });

        s.AddSecurityRequirement(new OpenApiSecurityRequirement()
        {
            {
                new OpenApiSecurityScheme
                {
                    Reference = new OpenApiReference
                    {
                        Type = ReferenceType.SecurityScheme,
                        Id = "Bearer"
                    },
                    Name = "Bearer",
                },
                new List<string>()
            }
        });
    });
}

With this modification, we are adding the security definition in our swagger configuration. Now, we can start our app again and navigate to the index.html page.

The first thing we are going to notice is the Authorize options for requests:


We are going to use that in a moment. But let’s get our token first. For that, let’s open the api/authentication/login action, click try it out, add credentials, and copy the received token:


Once we have copied the token, we are going to click on the authorization button for the /api/companies request, paste it with the Bearer in front of it, and click Authorize:


After authorization, we are going to click on the Close button and try our request:


And we get our response. Excellent job.

30.4 Extending Swagger Configuration

Swagger provides options for extending the documentation and customizing the UI. Let’s explore some of those.‌

First, let’s see how we can specify the API info and description. The configuration action passed to the AddSwaggerGen() method adds information such as Contact, License, and Description. Let’s provide some values for those:

s.SwaggerDoc("v1", new OpenApiInfo
{
    Title = "Code Maze API",
    Version = "v1",
    Description = "CompanyEmployees API by CodeMaze",
    TermsOfService = new Uri("https://example.com/terms"),
    Contact = new OpenApiContact
    {
        Name = "John Doe",
        Email = "John.Doe@gmail.com",
        Url = new Uri("https://twitter.com/johndoe"),
    },
    License = new OpenApiLicense
    {
        Name = "CompanyEmployees API LICX",
        Url = new Uri("https://example.com/license"),
    }
});
...

We have implemented this just for the first version, but you get the point. Now, let’s run the application once again and explore the Swagger UI:


To enable XML comments, we need to suppress warning 1591, which the compiler will now raise for any method, class, or field that doesn't have triple-slash comments. We need to do this in the Presentation project.

Additionally, we have to add the documentation path for the same project, since our controllers are in the Presentation project:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|AnyCPU'">
    <DocumentationFile>CompanyEmployees.Presentation.xml</DocumentationFile>
    <OutputPath></OutputPath>
    <NoWarn>1701;1702;1591</NoWarn>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|AnyCPU'">
    <NoWarn>1701;1702;1591</NoWarn>
  </PropertyGroup>
</Project>

Now, let’s modify our configuration:

s.SwaggerDoc("v2", new OpenApiInfo { Title = "Code Maze API", Version = "v2" });

var xmlFile = $"{typeof(Presentation.AssemblyReference).Assembly.GetName().Name}.xml";
var xmlPath = Path.Combine(AppContext.BaseDirectory, xmlFile);
s.IncludeXmlComments(xmlPath);

Next, adding triple-slash comments to the action method enhances the Swagger UI by adding a description to the section header:

/// <summary>
/// Gets the list of all companies
/// </summary>
/// <returns>The companies list</returns>
[HttpGet(Name = "GetCompanies")]
[Authorize(Roles = "Manager")]
public async Task<IActionResult> GetCompanies()

And this is the result:


The developers who consume our APIs are usually more interested in what it returns — specifically the response types and error codes. Hence, it is very important to describe our response types. These are denoted using XML comments and data annotations.

Let’s enhance the response types a little bit:

/// <summary>
/// Creates a newly created company
/// </summary>
/// <param name="company"></param>
/// <returns>A newly created company</returns>
/// <response code="201">Returns the newly created item</response>
/// <response code="400">If the item is null</response>
/// <response code="422">If the model is invalid</response>
[HttpPost(Name = "CreateCompany")]
[ProducesResponseType(201)]
[ProducesResponseType(400)]
[ProducesResponseType(422)]

Here, we are using both XML comments and data annotation attributes. Now, we can see the result:


And, if we inspect the response part, we will find our mentioned responses:


Excellent.

We can continue to the deployment part.

31 DEPLOYMENT TO IIS

Before we start the deployment process, we would like to point out one important thing. We should always try to deploy an application on at least a local machine to somehow simulate the production environment as soon as we start with development. That way, we can observe how the application behaves in a production environment from the beginning of the development process.‌

That leads us to the conclusion that the deployment process should not be the last step of the application’s lifecycle. We should deploy our application to the staging environment as soon as we start building it.

That said, let’s start with the deployment process.

31.1 Creating Publish Files

Let’s create a folder on the local machine with the name Publish. Inside that folder, we want to place all of our files for deployment. After the folder creation, let’s right-click on the main project in the Solution Explorer window and click publish option:‌


In the “Pick a publish target” window, we are going to choose the Folder option and click Next:


And point to the location of the Publish folder we just created and click Finish:

The publish window can look different depending on the Visual Studio version.

After that, we have to click the Publish button:


Visual Studio is going to do its job and publish the required files in the specified folder.

31.2 Windows Server Hosting Bundle

Before any further action, let's install the .NET Core Windows Server Hosting Bundle on our system. This bundle installs the .NET Core Runtime, the .NET Core Library, and the ASP.NET Core Module. The module creates a reverse proxy between IIS and the Kestrel server, which is crucial for the deployment process.

If you have a problem with missing SDK after installing the Hosting Bundle, follow this solution suggested by Microsoft:

Installing the .NET Core Hosting Bundle modifies the PATH when it installs the .NET Core runtime to point to the 32-bit (x86) version of .NET Core (C:\Program Files (x86)\dotnet). This can result in missing SDKs when the 32-bit (x86) .NET Core dotnet command is used (No .NET Core SDKs were detected). To resolve this problem, move C:\Program Files\dotnet\ to a position before C:\Program Files (x86)\dotnet\ on the PATH environment variable.

After the installation, we are going to locate the Windows hosts file on C:\Windows\System32\drivers\etc and add the following record at the end of the file:

127.0.0.1 www.companyemployees.codemaze

After that, we are going to save the file.

31.3 Installing IIS

If you don't have IIS installed on your machine, you need to install it by opening Control Panel and then Programs and Features:


After the IIS installation finishes, let's open the Run window (Windows key + R) and type inetmgr to open the IIS manager:


Now, we can create a new website:


In the next window, we need to add a name to our site and a path to the published files:


And click the OK button.

After this step, we are going to have our site inside the “sites” folder in the IIS Manager. Additionally, we need to set up some basic settings for our application pool:


After we click on the Basic Settings link, let’s configure our application pool:


ASP.NET Core runs in a separate process and manages the runtime. It doesn't rely on loading the desktop CLR (.NET CLR). The Core Common Language Runtime for .NET Core is booted to host the app in the worker process. Setting the .NET CLR version to No Managed Code is optional but recommended.

Our website and the application pool should be started automatically.

31.4 Configuring Environment File

In the section where we configured JWT, we had to use a secret key that we placed in the environment file. Now, we have to provide IIS with both the name and the value of that key.

The first step is to click on our site in IIS and open Configuration Editor:

alt text

Then, in the section box, we are going to choose system.webServer/aspNetCore:

alt text

From the “From” combo box, we are going to choose ApplicationHost.config:

alt text

After that, we are going to select environment variables:

alt text

Click Add and type the name and the value of our variable:

alt text

As soon as we click the close button, we should click apply in the next window, restart our application in IIS, and we are good to go.
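Behind the scenes, this adds an environmentVariables entry to the aspNetCore configuration section. A minimal sketch of what that entry looks like (the variable name SECRET and its value are placeholders for your actual key and value; the remaining aspNetCore attributes are elided):

```xml
<aspNetCore ...>
  <environmentVariables>
    <environmentVariable name="SECRET" value="CodeMazeSecretKey" />
  </environmentVariables>
</aspNetCore>
```

Since we selected ApplicationHost.config in the "From" combo box, this is also where you can look to verify that the variable was saved correctly.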

31.5 Testing Deployed Application

Let’s open Postman and send a request for the Root document:‌
http://www.companyemployees.codemaze/api

alt text

We can see that our API is working as expected. If it’s not, and you have a problem related to web.config in IIS, try reinstalling the Server Hosting Bundle package.

If you get an error message that the Presentation.xml file is missing, you can copy it from the project and paste it into the Publish folder. Also, in the Properties window for that file, you can set it to always copy during the publish.

Now, let’s continue.

We still have one more thing to do. We have to add a login to the SQL Server for IIS APPPOOL\CodeMaze Web Api and grant permissions to the database. So, let’s open the SQL Server Management Studio and add a new login:

alt text

In the next window, we are going to add our user:

alt text

After that, we are going to expand the Logins folder, right-click on our user, and choose Properties. There, under User Mappings, we have to select the CompanyEmployee database and grant the db_datawriter and db_datareader roles.
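If you prefer scripting this setup instead of clicking through SSMS, here is a hedged T-SQL sketch of the same steps, assuming the database name CompanyEmployee and the app pool name CodeMaze Web Api used in this chapter:

```sql
-- Create a Windows login for the IIS application pool identity
CREATE LOGIN [IIS APPPOOL\CodeMaze Web Api] FROM WINDOWS;
GO

USE CompanyEmployee;
GO

-- Map the login to a database user and grant the read/write roles
CREATE USER [IIS APPPOOL\CodeMaze Web Api] FOR LOGIN [IIS APPPOOL\CodeMaze Web Api];
ALTER ROLE db_datareader ADD MEMBER [IIS APPPOOL\CodeMaze Web Api];
ALTER ROLE db_datawriter ADD MEMBER [IIS APPPOOL\CodeMaze Web Api];
GO
```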

Now, we can try to send the Authentication request:
http://www.companyemployees.codemaze/api/authentication/login

alt text

Excellent; we have our token. Now, we can send the request to the GetCompanies action with the generated token:

http://www.companyemployees.codemaze/api/companies

alt text

And there we go. Our API is published and working as expected.

32 BONUS 1 - RESPONSE PERFORMANCE IMPROVEMENTS

As mentioned in section 6.1.1, we will show you an alternative way of handling error responses. To repeat, with custom exceptions, we have great control over returning error responses to the client thanks to the global error handler, which is pretty fast if used correctly. Also, the code is clean and straightforward since we don’t have to care about return types and additional validation in the service methods.

Even though some libraries, for example OneOf, enable us to write custom responses, we still like to create our own abstraction logic, which is tested by us and fast. Additionally, we want to show you the whole creation process for such a flow.

For this example, we will use an existing project from part 6 and modify it to implement our API Response flow.

32.1 Adding Response Classes to the Project

Let’s start with the API response model classes.‌

The first thing we are going to do is create a new Responses folder in the Entities project. Inside that folder, we are going to add our first class:

public abstract class ApiBaseResponse
{
    public bool Success { get; set; }

    protected ApiBaseResponse(bool success) => Success = success;
}

This is an abstract class, which will be the main return type for all of our methods where we have to return a successful result or an error result. It also contains a single Success property stating whether the action was successful or not.

For a successful result, we are going to create only one class in the same folder:

public sealed class ApiOkResponse<TResult> : ApiBaseResponse
{
    public TResult Result { get; set; }

    public ApiOkResponse(TResult result) : base(true)
    {
        Result = result;
    }
}

We are going to use this class as a return type for a successful result. It inherits from ApiBaseResponse and sets the Success property to true through the constructor. It also contains a single Result property of type TResult, where we store the concrete result; since different methods can have different result types, this property is generic.

That’s all regarding the successful responses. Let’s move on to the error classes.

For the error responses, we will follow the same structure as we have for the exception classes. So, we will have abstract base classes for NotFound, BadRequest, or any other error responses, and then concrete implementations of these classes, like CompanyNotFoundResponse or CompanyBadRequestResponse.

That said, let’s use the same folder to create an abstract error class:

public abstract class ApiNotFoundResponse : ApiBaseResponse
{
    public string Message { get; set; }

    public ApiNotFoundResponse(string message) : base(false)
    {
        Message = message;
    }
}

This class also inherits from ApiBaseResponse, sets the Success property to false, and has a single Message property for the error message.

In the same manner, we can create the ApiBadRequestResponse class:

public abstract class ApiBadRequestResponse : ApiBaseResponse
{
    public string Message { get; set; }

    public ApiBadRequestResponse(string message) : base(false)
    {
        Message = message;
    }
}

This is the same implementation as the previous one. The important thing to notice is that both of these classes are abstract.

To continue, let’s create a concrete error response:

public sealed class CompanyNotFoundResponse : ApiNotFoundResponse
{
    public CompanyNotFoundResponse(Guid id)
        : base($"Company with id: {id} is not found in db.")
    {
    }
}

The class inherits from the ApiNotFoundResponse abstract class, which in turn inherits from the ApiBaseResponse class. It accepts an id parameter and creates a message that it sends to the base class.

We are not going to create the CompanyBadRequestResponse class because we are not going to need it in our example. But the principle is the same.
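For completeness, here is a sketch of what such a class could look like. The reason parameter and the message text are illustrative, and the base classes are repeated from earlier in this section so the snippet stands on its own:

```csharp
using System;

// Repeated from earlier in this section so the sketch compiles on its own.
public abstract class ApiBaseResponse
{
    public bool Success { get; set; }

    protected ApiBaseResponse(bool success) => Success = success;
}

public abstract class ApiBadRequestResponse : ApiBaseResponse
{
    public string Message { get; set; }

    public ApiBadRequestResponse(string message) : base(false) => Message = message;
}

// A hypothetical concrete bad-request response, following the same
// principle as CompanyNotFoundResponse.
public sealed class CompanyBadRequestResponse : ApiBadRequestResponse
{
    public CompanyBadRequestResponse(string reason)
        : base($"Bad request for company: {reason}")
    {
    }
}
```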

32.2 Service Layer Modification

Now that we have the response model classes, we can start with the service layer modification.‌

Let’s start with the ICompanyService interface:

public interface ICompanyService
{
    ApiBaseResponse GetAllCompanies(bool trackChanges);
    ApiBaseResponse GetCompany(Guid companyId, bool trackChanges);
}

We don’t return concrete types from our methods anymore. Instead of the IEnumerable<CompanyDto> or CompanyDto return types, we return the ApiBaseResponse type. This enables us to return either the success result or any of the error response results.

After the interface modification, we can modify the CompanyService class:

public ApiBaseResponse GetAllCompanies(bool trackChanges)
{
    var companies = _repository.Company.GetAllCompanies(trackChanges);

    var companiesDto = _mapper.Map<IEnumerable<CompanyDto>>(companies);

    return new ApiOkResponse<IEnumerable<CompanyDto>>(companiesDto);
}

public ApiBaseResponse GetCompany(Guid id, bool trackChanges)
{
    var company = _repository.Company.GetCompany(id, trackChanges);
    if (company is null)
        return new CompanyNotFoundResponse(id);

    var companyDto = _mapper.Map<CompanyDto>(company);

    return new ApiOkResponse<CompanyDto>(companyDto);
}

Both method signatures and return types are modified to use ApiBaseResponse. Additionally, in the GetCompany method, we are not using an exception class to return an error result but the CompanyNotFoundResponse class. With the ApiBaseResponse abstraction, we are free to return multiple types from our methods as long as they inherit from the ApiBaseResponse abstract class. Here you could also log some messages with _logger.

One more thing to notice here.

In the GetAllCompanies method, we don’t have an error response, just a successful one. That means we didn’t have to implement our API response flow there and could have left the method unchanged (in both the interface and this class). If you prefer that kind of implementation, it is perfectly fine. We just like consistency in our projects, so we’ve changed both methods.

32.3 Controller Modification

Before we start changing the actions in the CompaniesController, we have to create a way to handle error responses and return them to the client – similar to what we have with the global error handler middleware.‌

We are not going to create any additional middleware but another controller base class inside the Presentation/Controllers folder:

public class ApiControllerBase : ControllerBase
{
    public IActionResult ProcessError(ApiBaseResponse baseResponse)
    {
        return baseResponse switch
        {
            ApiNotFoundResponse => NotFound(new ErrorDetails
            {
                Message = ((ApiNotFoundResponse)baseResponse).Message,
                StatusCode = StatusCodes.Status404NotFound
            }),
            ApiBadRequestResponse => BadRequest(new ErrorDetails
            {
                Message = ((ApiBadRequestResponse)baseResponse).Message,
                StatusCode = StatusCodes.Status400BadRequest
            }),
            _ => throw new NotImplementedException()
        };
    }
}

This class inherits from the ControllerBase class and implements a single ProcessError method accepting an ApiBaseResponse parameter. Inside the method, we inspect the type of the parameter and, based on that type, return an appropriate message to the client. We did a similar thing in the exception middleware class.

If you add additional error response classes to the Response folder, you only have to add them here to process the response for the client.

Additionally, this is where we can see the advantage of our abstraction approach.

Now, we can modify our CompaniesController:

[Route("api/companies")]
[ApiController]
public class CompaniesController : ApiControllerBase
{
    private readonly IServiceManager _service;

    public CompaniesController(IServiceManager service) => _service = service;

    [HttpGet]
    public IActionResult GetCompanies()
    {
        var baseResult = _service.CompanyService.GetAllCompanies(trackChanges: false);

        var companies = ((ApiOkResponse<IEnumerable<CompanyDto>>)baseResult).Result;

        return Ok(companies);
    }

    [HttpGet("{id:guid}")]
    public IActionResult GetCompany(Guid id)
    {
        var baseResult = _service.CompanyService.GetCompany(id, trackChanges: false);
        if (!baseResult.Success)
            return ProcessError(baseResult);

        var company = ((ApiOkResponse<CompanyDto>)baseResult).Result;

        return Ok(company);
    }
}

Now our controller inherits from ApiControllerBase, which in turn inherits from the ControllerBase class. In the GetCompanies action, we take the result from the service layer, cast the baseResult variable to the concrete ApiOkResponse<IEnumerable<CompanyDto>> type, and use the Result property to extract the required IEnumerable<CompanyDto> result.

We do a similar thing for the GetCompany action. Of course, here we check if our result is successful and if it’s not, we return the result of the ProcessError method.

And that’s it.

We could leave the solution as is, but these casts inside our actions bother us – they can be moved somewhere else, making them reusable and our actions cleaner. So, let’s do that.

In the same project, we are going to create a new Extensions folder and a new ApiBaseResponseExtensions class:

public static class ApiBaseResponseExtensions
{
    public static TResultType GetResult<TResultType>(this ApiBaseResponse apiBaseResponse) =>
        ((ApiOkResponse<TResultType>)apiBaseResponse).Result;
}

The GetResult method will extend the ApiBaseResponse type and return the result of the required type.

Now, we can modify actions inside the controller:

[HttpGet]
public IActionResult GetCompanies()
{
    var baseResult = _service.CompanyService.GetAllCompanies(trackChanges: false);

    var companies = baseResult.GetResult<IEnumerable<CompanyDto>>();

    return Ok(companies);
}

[HttpGet("{id:guid}")]
public IActionResult GetCompany(Guid id)
{
    var baseResult = _service.CompanyService.GetCompany(id, trackChanges: false);
    if (!baseResult.Success)
        return ProcessError(baseResult);

    var company = baseResult.GetResult<CompanyDto>();

    return Ok(company);
}

This is much cleaner and easier to read and understand.

32.4 Testing the API Response Flow

Now we can start our application, open Postman, and send some requests.‌

Let’s try to get all the companies:
https://localhost:5001/api/companies

alt text

Then, we can try to get a single company:

https://localhost:5001/api/companies/3d490a70-94ce-4d15-9494-5248280c2ce3

alt text

And finally, let’s try to get a company that does not exist:

https://localhost:5001/api/companies/3d490a70-94ce-4d15-9494-5248280c2ce2

alt text

And we have our response with a proper status code and response body. Excellent.

We have a solution that is easy to implement, fast, and extendable.

Our suggestion is to go with custom exceptions since they are easier to implement and fast as well. But if your application has a flow where it must return error responses at a much higher rate, which could impact performance, the API Response flow is the way to go.

33 BONUS 2 - INTRODUCTION TO CQRS AND MEDIATR WITH ASP.NET CORE WEB API

In this chapter, we will provide an introduction to the CQRS pattern and how the .NET library MediatR helps us build software with this architecture.‌

In the Source Code folder, you will find the folder for this chapter with two folders inside – start and end. In the start folder, you will find a prepared project for this section. We are going to use it to explain the implementation of CQRS and MediatR. We have used the existing project from one of the previous chapters and removed the things we don’t need or want to replace - like the service layer.

In the end folder, you will find a finished project for this chapter.

33.1 About CQRS and Mediator Pattern

The MediatR library was built to facilitate two primary software architecture patterns: CQRS and the Mediator pattern. The two are similar, so let’s spend a moment understanding the principles behind each pattern.
33.1.1 CQRS

CQRS stands for “Command Query Responsibility Segregation”. As the acronym suggests, it’s all about splitting the responsibility of commands (saves) and queries (reads) into different models.

If we think about the commonly used CRUD pattern (Create-Read-Update-Delete), we usually have the user interface interacting with a datastore responsible for all four operations. CQRS would instead have us split these operations into two models, one for the queries (aka “R”) and another for the commands (aka “CUD”).

The following image illustrates how this works:

alt text

The Application simply separates the query and command models.

The CQRS pattern makes no formal requirements of how this separation occurs. It could be as simple as a separate class in the same application (as we’ll see shortly with MediatR), all the way up to separate physical applications on different servers. That decision would be based on factors such as scaling requirements and infrastructure, so we won’t go into that decision path here.

The key point is that to create a CQRS system, we just need to split the reads from the writes.
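As a minimal illustration of that split (all type names here are hypothetical, not part of this book’s project), the read side and the write side can be expressed as two separate interfaces, even if a single class happens to implement both:

```csharp
using System;
using System.Collections.Generic;

// Illustrative read model for the query side.
public record CompanyReadModel(Guid Id, string Name);

// The "R" side: only reads state.
public interface ICompanyQueries
{
    IEnumerable<CompanyReadModel> GetAll();
}

// The "CUD" side: only changes state.
public interface ICompanyCommands
{
    Guid Create(string name);
}

// One in-memory store implements both interfaces to keep the sketch
// runnable; in a full CQRS setup each side could target a different store.
public class InMemoryCompanyStore : ICompanyQueries, ICompanyCommands
{
    private readonly List<CompanyReadModel> _companies = new();

    public IEnumerable<CompanyReadModel> GetAll() => _companies;

    public Guid Create(string name)
    {
        var id = Guid.NewGuid();
        _companies.Add(new CompanyReadModel(id, name));
        return id;
    }
}
```

Consumers that only read depend on ICompanyQueries, consumers that write depend on ICompanyCommands, and each side can now evolve independently.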

What problem is this trying to solve?

Well, a common reason is that when we design a system, we start with data storage. We perform database normalization, add primary and foreign keys to enforce referential integrity, add indexes, and generally ensure the “write system” is optimized. This is a common setup for a relational database such as SQL Server or MySQL. Other times, we think about the read use cases first, then try to fit those into a database, worrying less about duplication or other relational DB concerns (often “document databases” are used for these patterns).

Neither approach is wrong. But the issue is that it’s a constant balancing act between reads and writes, and eventually one side will “win out”. All further development means both sides need to be analyzed, and often one is compromised.

CQRS allows us to “break free” from these considerations and give each system the equal design and consideration it deserves without worrying about the impact of the other system. This has tremendous benefits on both performance and agility, especially if separate teams are working on these systems.

33.1.2 Advantages and Disadvantages of CQRS‌

The benefits of CQRS are:
• Single Responsibility – Commands and Queries have only one job. It is either to change the state of the application or retrieve it. Therefore, they are very easy to reason about and understand.

• Decoupling – The Command or Query is completely decoupled from its handler, giving you a lot of flexibility on the handler side to implement it the best way you see fit.

• Scalability – The CQRS pattern is very flexible in terms of how you can organize your data storage, giving you options for great scalability. You can use one database for both Commands and Queries. You can use separate Read/Write databases, for improved performance, with messaging or replication between the databases for synchronization.

• Testability – It is very easy to test Command or Query handlers since they will be very simple by design, and perform only a single job.

Of course, it can’t all be good. Here are some of the disadvantages of CQRS:

• Complexity – CQRS is an advanced design pattern, and it will take you time to fully understand it. It introduces a lot of complexity that will create friction and potential problems in your project. Be sure to consider everything, before deciding to use it in your project.

• Learning Curve – Although it seems like a straightforward design pattern, there is still a learning curve with CQRS. Most developers are used to the procedural (imperative) style of writing code, and CQRS is a big shift away from that.

• Hard to Debug – Since Commands and Queries are decoupled from their handler, there isn’t a natural imperative flow of the application. This makes it harder to debug than traditional applications.

33.1.3 Mediator Pattern‌

The Mediator pattern simply defines an object that encapsulates how objects interact with each other. Instead of having two or more objects take a direct dependency on each other, they interact with a “mediator”, who is in charge of sending those interactions to the other party:

alt text

In this image, SomeService sends a message to the Mediator, and the Mediator then invokes multiple services to handle the message. There is no direct dependency between any of the blue components.

The reason the Mediator pattern is useful is the same reason patterns like Inversion of Control are useful. It enables “loose coupling”, as the dependency graph is minimized and therefore code is simpler and easier to test. In other words, the fewer considerations a component has, the easier it is to develop and evolve.

We saw in the previous image how the services have no direct dependency, and the producer of the messages doesn’t know who or how many things are going to handle it. This is very similar to how a message broker works in the “publish/subscribe” pattern. If we wanted to add another handler we could, and the producer wouldn’t have to be modified.
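To make the idea concrete, here is a minimal, hypothetical mediator — not MediatR itself — where the producer only knows the mediator, and handlers register themselves with it:

```csharp
using System;
using System.Collections.Generic;

// A toy mediator: producers hand a message to the mediator, which forwards
// it to every registered handler. The producer never references handlers.
public class SimpleMediator
{
    private readonly List<Action<string>> _handlers = new();

    public void Register(Action<string> handler) => _handlers.Add(handler);

    public void Send(string message)
    {
        // The mediator is the only component that knows about the handlers.
        foreach (var handler in _handlers)
            handler(message);
    }
}
```

Adding another handler is just one more Register call; the producer’s Send call never changes, which is exactly the loose coupling the pattern is after.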

Now that we’ve been over some theory, let’s talk about how MediatR makes all these things possible.

33.2 How MediatR facilitates CQRS and Mediator Patterns

You can think of MediatR as an “in-process” Mediator implementation, that helps us build CQRS systems. All communication between the user interface and the data store happens via MediatR.‌

The term “in process” is an important limitation here. Since it’s a .NET library that manages interactions within classes on the same process, it’s not an appropriate library to use if we want to separate the commands and queries across two systems. A better approach would be to use a message broker such as Kafka or Azure Service Bus.

However, for this chapter, we are going to stick with a simple single-process CQRS system, so MediatR fits the bill perfectly.

33.3 Adding Application Project and Initial Configuration

Let’s start by opening the starter project from the start folder. You will see that we don’t have the Service or the Service.Contracts projects. Well, we don’t need them. We are going to use CQRS with MediatR to replace that part of our solution.

But we do need an additional project for our business logic, so let’s create a new class library (.NET Core) and name it Application.

Additionally, we are going to add a new class named AssemblyReference. We will use it for the same purpose as we used the class with the same name in the Presentation project:

public static class AssemblyReference { }

Now let’s install a couple of packages.

The first package we are going to install is the MediatR in the Application project:

PM> install-package MediatR

Then in the main project, we are going to install another package that wires up MediatR with the ASP.NET dependency injection container:

PM> install-package MediatR.Extensions.Microsoft.DependencyInjection

After the installations, we are going to configure MediatR in the Program class:

builder.Services.AddMediatR(typeof(Application.AssemblyReference).Assembly);

For this, we have to reference the Application project, and add a using directive:

using MediatR;

The AddMediatR method will scan the project assembly that contains the handlers that we are going to use to handle our business logic. Since we are going to place those handlers in the Application project, we are using the Application’s assembly as a parameter.

Before we continue, we have to reference the Application project from the Presentation project.

Now MediatR is configured, and we can use it inside our controller.

In the Controllers folder of the Presentation project, we are going to find a single controller class. It contains only base code, and we are going to modify it by injecting a sender through the constructor:

[Route("api/companies")]
[ApiController]
public class CompaniesController : ControllerBase
{
    private readonly ISender _sender;

    public CompaniesController(ISender sender) => _sender = sender;
}

Here we inject the ISender interface from the MediatR namespace. We are going to use this interface to send requests to our handlers.

We have to mention one thing about using ISender instead of the IMediator interface. From MediatR version 9.0, the IMediator interface is split into two interfaces:

public interface ISender
{
    Task<TResponse> Send<TResponse>(IRequest<TResponse> request,
        CancellationToken cancellationToken = default);
    Task<object?> Send(object request, CancellationToken cancellationToken = default);
}

public interface IPublisher
{
    Task Publish(object notification, CancellationToken cancellationToken = default);
    Task Publish<TNotification>(TNotification notification,
        CancellationToken cancellationToken = default)
        where TNotification : INotification;
}

public interface IMediator : ISender, IPublisher
{
}

So, by looking at the code, it is clear that you can continue using the IMediator interface to send requests and publish notifications. But it is recommended to split those responsibilities by using the ISender and IPublisher interfaces.

With that said, we can continue with the Application’s logic implementation.

33.4 Requests with MediatR

MediatR Requests are simple request-response style messages where a single request is synchronously handled by a single handler (synchronous from the request point of view, not C# internal async/await). Good use cases here would be returning something from a database or updating a database.‌

There are two types of requests in MediatR. One that returns a value, and one that doesn’t. Often this corresponds to reads/queries (returning a value) and writes/commands (usually doesn’t return a value).

So, before we start sending requests, we are going to create several folders in the Application project to separate queries, commands, and handlers:

alt text

Since we are going to work only with the company entity, we are going to place our queries, commands, and handlers directly into these folders.

But in larger projects with multiple entities, we can create additional folders for each entity inside each of these folders for better organization.

Also, as we already know, we are not going to send our entities as a result to the client but DTOs, so we have to reference the Shared project.

That said, let’s start with our first query. Let’s create it in the Queries folder:

public sealed record GetCompaniesQuery(bool TrackChanges) : IRequest<IEnumerable<CompanyDto>>;

Here, we create the GetCompaniesQuery record, which implements IRequest<IEnumerable<CompanyDto>>. This simply means our request will return a list of companies.

Here we need two additional namespaces:

using MediatR;
using Shared.DataTransferObjects;

Once we send the request from our controller’s action, we are going to see the usage of this query.

After the query, we need a handler. This handler, in simple words, is our replacement for the service layer method we had in our previous project. There, all the service classes used the repository to access the database – we will do the same here. For that, we have to reference the Contracts project so we can access the IRepositoryManager interface.

After adding the reference, we can create a new GetCompaniesHandler class in the Handlers folder:

internal sealed class GetCompaniesHandler : IRequestHandler<GetCompaniesQuery, IEnumerable<CompanyDto>>
{
    private readonly IRepositoryManager _repository;

    public GetCompaniesHandler(IRepositoryManager repository) => _repository = repository;

    public Task<IEnumerable<CompanyDto>> Handle(GetCompaniesQuery request, CancellationToken cancellationToken)
    {
        throw new NotImplementedException();
    }
}

Our handler implements IRequestHandler<GetCompaniesQuery, IEnumerable<CompanyDto>>. This means this class will handle GetCompaniesQuery, in this case returning the list of companies.

We also inject the repository through the constructor and add a default implementation of the Handle method, required by the IRequestHandler interface.

These are the required namespaces:

using Application.Queries; 
using Contracts;
using MediatR;
using Shared.DataTransferObjects;

Of course, we are not going to leave this method to throw an exception. But before we add business logic, we have to install AutoMapper in the Application project:

PM> Install-Package AutoMapper.Extensions.Microsoft.DependencyInjection

Register the package in the Program class:

builder.Services.AddAutoMapper(typeof(Program));
builder.Services.AddMediatR(typeof(Application.AssemblyReference).Assembly);

And create the MappingProfile class, also in the main project, with a single mapping rule:

public class MappingProfile : Profile
{
    public MappingProfile()
    {
        CreateMap<Company, CompanyDto>()
            .ForMember(c => c.FullAddress,
                opt => opt.MapFrom(x => string.Join(' ', x.Address, x.Country)));
    }
}

All of this is familiar since we’ve already used AutoMapper in our project.

Now, we can modify the handler class:

internal sealed class GetCompaniesHandler : IRequestHandler<GetCompaniesQuery, IEnumerable<CompanyDto>>
{
    private readonly IRepositoryManager _repository;
    private readonly IMapper _mapper;

    public GetCompaniesHandler(IRepositoryManager repository, IMapper mapper)
    {
        _repository = repository;
        _mapper = mapper;
    }

    public async Task<IEnumerable<CompanyDto>> Handle(GetCompaniesQuery request, CancellationToken cancellationToken)
    {
        var companies = await _repository.Company.GetAllCompaniesAsync(request.TrackChanges);

        var companiesDto = _mapper.Map<IEnumerable<CompanyDto>>(companies);

        return companiesDto;
    }
}

This logic is also familiar since we had almost the same one in our GetAllCompaniesAsync service method. One difference is that we are passing the track changes parameter through the request object.

Now, we can modify CompaniesController:

[HttpGet]
public async Task<IActionResult> GetCompanies()
{
    var companies = await _sender.Send(new GetCompaniesQuery(TrackChanges: false));

    return Ok(companies);
}

We use the Send method to send a request to our handler and pass the GetCompaniesQuery as a parameter. Nothing more than that. We also need an additional namespace:

using Application.Queries;

Our controller is as clean as it was with the service layer. But this time, we don’t have a single service class handling all the methods; instead, each handler takes care of only one thing.

Now, we can test this:
https://localhost:5001/api/companies

alt text

Everything works great.

With this in mind, we can continue and implement the logic for fetching a single company.

So, let’s start with the query in the Queries folder:

public sealed record GetCompanyQuery(Guid Id, bool TrackChanges) : IRequest<CompanyDto>;

Then, let’s implement a new handler:

internal sealed class GetCompanyHandler : IRequestHandler<GetCompanyQuery, CompanyDto>
{
    private readonly IRepositoryManager _repository;
    private readonly IMapper _mapper;

    public GetCompanyHandler(IRepositoryManager repository, IMapper mapper)
    {
        _repository = repository;
        _mapper = mapper;
    }

    public async Task<CompanyDto> Handle(GetCompanyQuery request, CancellationToken cancellationToken)
    {
        var company = await _repository.Company.GetCompanyAsync(request.Id, request.TrackChanges);
        if (company is null)
            throw new CompanyNotFoundException(request.Id);

        var companyDto = _mapper.Map<CompanyDto>(company);

        return companyDto;
    }
}

So again, our handler implements the IRequestHandler interface, with the query type as the first generic parameter and the result type as the second. Then, we inject the required services and implement the Handle method in a familiar way.

We need these namespaces here:

using Application.Queries; 
using AutoMapper;
using Contracts;
using Entities.Exceptions; 
using MediatR;
using Shared.DataTransferObjects;

Lastly, we have to add another action in CompaniesController:

[HttpGet("{id:guid}", Name = "CompanyById")]
public async Task<IActionResult> GetCompany(Guid id)
{
    var company = await _sender.Send(new GetCompanyQuery(id, TrackChanges: false));

    return Ok(company);
}

Awesome, let’s test it:

https://localhost:5001/api/companies/3d490a70-94ce-4d15-9494-5248280c2ce3

alt text

Excellent, we can see the company DTO in the response body. Additionally, we can try an invalid request:

https://localhost:5001/api/companies/3d490a70-94ce-4d15-9494-5248280c2ce2

alt text

And, we can see this works as well.

33.5 Commands with MediatR

As with the queries, we are going to start by creating a command record inside the Commands folder:

public sealed record CreateCompanyCommand(CompanyForCreationDto Company) : IRequest<CompanyDto>;

Our command has a single parameter sent from the client, and it implements IRequest<CompanyDto>. Our request has to return a CompanyDto because we need it, in our action, to create a valid route in the return statement.

After the command, we are going to create another handler:

internal sealed class CreateCompanyHandler : IRequestHandler<CreateCompanyCommand, CompanyDto>
{
    private readonly IRepositoryManager _repository;
    private readonly IMapper _mapper;

    public CreateCompanyHandler(IRepositoryManager repository, IMapper mapper)
    {
        _repository = repository;
        _mapper = mapper;
    }

    public async Task<CompanyDto> Handle(CreateCompanyCommand request, CancellationToken cancellationToken)
    {
        var companyEntity = _mapper.Map<Company>(request.Company);

        _repository.Company.CreateCompany(companyEntity);
        await _repository.SaveAsync();

        var companyToReturn = _mapper.Map<CompanyDto>(companyEntity);

        return companyToReturn;
    }
}

So, we inject our services and implement the Handle method as we did with the service method. We map from the creation DTO to the entity, save it to the database, and map it to the company DTO object.

Then, we add a new mapping rule to the MappingProfile class:

CreateMap<CompanyForCreationDto, Company>();

Now, we can add a new action in a controller:

[HttpPost]
public async Task<IActionResult> CreateCompany([FromBody] CompanyForCreationDto companyForCreationDto)
{
    if (companyForCreationDto is null)
        return BadRequest("CompanyForCreationDto object is null");

    var company = await _sender.Send(new CreateCompanyCommand(companyForCreationDto));

    return CreatedAtRoute("CompanyById", new { id = company.Id }, company);
}

That’s all it takes. Now we can test this:

https://localhost:5001/api/companies

alt text

A new company is created, and if we inspect the Headers tab, we are going to find the link to fetch this new company:

alt text

There is one important thing we have to understand here. We are communicating with a datastore via simple message constructs without having any idea how the communication is implemented. The commands and queries could be pointing to different data stores. They don't know how their requests will be handled, and they don't care.

33.5.1 Update Command‌

Following the same principle from the previous example, we can implement the update request.

Let’s start with the command:

public sealed record UpdateCompanyCommand(Guid Id, CompanyForUpdateDto Company, bool TrackChanges) : IRequest;

This time our command implements IRequest without a generic parameter. That's because we are not going to return any value from this request.

Let’s continue with the handler implementation:

internal sealed class UpdateCompanyHandler : IRequestHandler&lt;UpdateCompanyCommand, Unit&gt;
{
    private readonly IRepositoryManager _repository;
    private readonly IMapper _mapper;

    public UpdateCompanyHandler(IRepositoryManager repository, IMapper mapper)
    {
        _repository = repository;
        _mapper = mapper;
    }

    public async Task&lt;Unit&gt; Handle(UpdateCompanyCommand request, CancellationToken cancellationToken)
    {
        var companyEntity = await _repository.Company.GetCompanyAsync(request.Id, request.TrackChanges);
        if (companyEntity is null)
            throw new CompanyNotFoundException(request.Id);

        _mapper.Map(request.Company, companyEntity);
        await _repository.SaveAsync();

        return Unit.Value;
    }
}

This handler implements IRequestHandler&lt;UpdateCompanyCommand, Unit&gt;. This is new for us because, for the first time, our command doesn't return a value. But IRequestHandler always accepts two type parameters (TRequest and TResponse), so we provide the Unit structure for the TResponse parameter since it represents the void type.

Then the Handle implementation is familiar to us, except for the return part. We have to return something from the Handle method, so we return Unit.Value.
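As a side note, newer MediatR versions (12 and later, newer than the one used in this project) let a void-returning handler implement the non-generic IRequestHandler&lt;TRequest&gt; and return a plain Task, hiding the Unit plumbing entirely. A sketch of what the same handler could look like there, under that version assumption:

```csharp
// Sketch only: assumes MediatR 12+, where IRequestHandler<TRequest>
// exposes "Task Handle(TRequest, CancellationToken)" and no Unit is needed.
internal sealed class UpdateCompanyHandler : IRequestHandler<UpdateCompanyCommand>
{
    private readonly IRepositoryManager _repository;
    private readonly IMapper _mapper;

    public UpdateCompanyHandler(IRepositoryManager repository, IMapper mapper)
    {
        _repository = repository;
        _mapper = mapper;
    }

    public async Task Handle(UpdateCompanyCommand request, CancellationToken cancellationToken)
    {
        var companyEntity = await _repository.Company.GetCompanyAsync(request.Id, request.TrackChanges);
        if (companyEntity is null)
            throw new CompanyNotFoundException(request.Id);

        _mapper.Map(request.Company, companyEntity);
        await _repository.SaveAsync();
        // No return statement needed: the method simply completes.
    }
}
```

If you are on the MediatR version used in this book, stick with the Unit-based implementation shown above.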

Before we modify the controller, we have to add another mapping rule:

CreateMap<CompanyForUpdateDto, Company>();

Lastly, let’s add a new action in the controller:

[HttpPut("{id:guid}")]
public async Task&lt;IActionResult&gt; UpdateCompany(Guid id, CompanyForUpdateDto companyForUpdateDto)
{
    if (companyForUpdateDto is null)
        return BadRequest("CompanyForUpdateDto object is null");

    await _sender.Send(new UpdateCompanyCommand(id, companyForUpdateDto, TrackChanges: true));

    return NoContent();
}

At this point, we can send a PUT request from Postman:

https://localhost:5001/api/companies/7aea16e2-74b9-4fd9-c22a-08d9961aa2d5

alt text

There is the 204 status code.

If you fetch this company, you will find its name updated.

33.5.2 Delete Command‌

After all of this implementation, this one should be pretty straightforward.

Let’s start with the command:

public record DeleteCompanyCommand(Guid Id, bool TrackChanges) : IRequest;

Then, let’s continue with a handler:

internal sealed class DeleteCompanyHandler : IRequestHandler&lt;DeleteCompanyCommand, Unit&gt;
{
    private readonly IRepositoryManager _repository;

    public DeleteCompanyHandler(IRepositoryManager repository) => _repository = repository;

    public async Task&lt;Unit&gt; Handle(DeleteCompanyCommand request, CancellationToken cancellationToken)
    {
        var company = await _repository.Company.GetCompanyAsync(request.Id, request.TrackChanges);
        if (company is null)
            throw new CompanyNotFoundException(request.Id);

        _repository.Company.DeleteCompany(company);
        await _repository.SaveAsync();

        return Unit.Value;
    }
}

Finally, let’s add one more action inside the controller:

[HttpDelete("{id:guid}")]
public async Task&lt;IActionResult&gt; DeleteCompany(Guid id)
{
    await _sender.Send(new DeleteCompanyCommand(id, TrackChanges: false));

    return NoContent();
}

That’s it. Pretty easy. We can test this now:
https://localhost:5001/api/companies/7aea16e2-74b9-4fd9-c22a-08d9961aa2d5

alt text

It works great.

Now that we know how to work with requests using MediatR, let’s see how to use notifications.

33.6 MediatR Notifications

So far, we’ve only seen a single request being handled by a single handler. However, what if we want a single request to be handled by multiple handlers?‌

That’s where notifications come in. In these situations, we usually have multiple independent operations that need to occur after some event. Examples might be:

• Sending an email

• Invalidating a cache

• ...

To demonstrate this, we will update the delete company flow we created previously to publish a notification and have it handled by two handlers.

Sending an email is out of the scope of this book (you can learn more about that in our Bonus 6 Security book). But to demonstrate the behavior of notifications, we will use our logger service and log a message as if the email was sent.

So, the flow will be: once we delete a company, we want to inform our administrators with an email message that the delete action has occurred.

That said, let’s start by creating a new Notifications folder inside the Application project and add a new notification in that folder:

public sealed record CompanyDeletedNotification(Guid Id, bool TrackChanges) : INotification;

The notification has to implement the INotification interface. This is the equivalent of the IRequest interface we saw earlier, but for notifications.

As we can conclude, notifications don’t return a value. They work on the fire-and-forget principle, like publishers.

Next, we are going to create a new EmailHandler class:

internal sealed class EmailHandler : INotificationHandler&lt;CompanyDeletedNotification&gt;
{
    private readonly ILoggerManager _logger;

    public EmailHandler(ILoggerManager logger) => _logger = logger;

    public async Task Handle(CompanyDeletedNotification notification, CancellationToken cancellationToken)
    {
        _logger.LogWarn($"Delete action for the company with id: {notification.Id} has occurred.");

        await Task.CompletedTask;
    }
}

Here, we just simulate sending our email message in an async manner. Without too many complications, we use our logger service to process the message.
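Because notifications are dispatched to every registered handler, we could attach further independent reactions without touching the delete flow at all. As a purely hypothetical sketch (the ICompanyCacheService abstraction is invented for illustration and is not part of this project), a cache-invalidation handler from our earlier bullet list could sit right next to the email one:

```csharp
// Hypothetical sketch: ICompanyCacheService is an invented abstraction,
// not part of the book's project. It only illustrates that any number of
// INotificationHandler implementations can react to the same notification.
internal sealed class CacheInvalidationHandler : INotificationHandler<CompanyDeletedNotification>
{
    private readonly ICompanyCacheService _cache;

    public CacheInvalidationHandler(ICompanyCacheService cache) => _cache = cache;

    public Task Handle(CompanyDeletedNotification notification, CancellationToken cancellationToken)
    {
        // Evict the deleted company so stale data is never served.
        _cache.Remove(notification.Id);

        return Task.CompletedTask;
    }
}
```

MediatR would discover this handler the same way it discovers EmailHandler, and a single Publish call would invoke both.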

Let’s continue by modifying the DeleteCompanyHandler class:

internal sealed class DeleteCompanyHandler : INotificationHandler<CompanyDeletedNotification> { private readonly IRepositoryManager _repository; public DeleteCompanyHandler(IRepositoryManager repository) => _repository = repository; public async Task Handle(CompanyDeletedNotification notification, CancellationToken cancellationToken) { var company = await _repository.Company.GetCompanyAsync(notification.Id, notification.TrackChanges); if (company is null) throw new CompanyNotFoundException(notification.Id); _repository.Company.DeleteCompany(company); await _repository.SaveAsync(); } }

This time, our handler implements the INotificationHandler interface, and it doesn’t return any value: we’ve modified the method signature and removed the return statement.

Finally, we have to modify the controller’s constructor:

private readonly ISender _sender;
private readonly IPublisher _publisher;

public CompaniesController(ISender sender, IPublisher publisher)
{
    _sender = sender;
    _publisher = publisher;
}

We inject another interface, which we are going to use to publish notifications.

And, we have to modify the DeleteCompany action:

[HttpDelete("{id:guid}")]
public async Task&lt;IActionResult&gt; DeleteCompany(Guid id)
{
    await _publisher.Publish(new CompanyDeletedNotification(id, TrackChanges: false));

    return NoContent();
}

To test this, let’s create a new company first:

alt text

Now, if we send the Delete request, we are going to receive the 204 NoContent response:

https://localhost:5001/api/companies/e06089af-baeb-44ef-1fdf-08d99630e212

alt text

And also, if we inspect the logs, we will find a new logged message stating that the delete action has occurred:

alt text

33.7 MediatR Behaviors

Often when we build applications, we have many cross-cutting concerns. These include authorization, validation, and logging.‌

Instead of repeating this logic throughout our handlers, we can make use of Behaviors. Behaviors are very similar to ASP.NET Core middleware in that they accept a request, perform some action, then (optionally) pass along the request.

In this section, we are going to use behaviors to perform validation on the DTOs that come from the client.

As we have already learned in chapter 13, we can perform the validation by using data annotation attributes and the ModelState dictionary. Then we can extract the validation logic into action filters to keep our actions clean. We could apply all of that to our current solution as well.

But, some developers have a preference for using fluent validation over data annotation attributes. In that case, behaviors are the perfect place to execute that validation logic.

So, let’s go step by step: first we add fluent validation to our project, and then we use a behavior to extract validation errors, if any, and return them to the client.

33.7.1 Adding Fluent Validation‌

The FluentValidation library allows us to easily define very rich custom validation for our classes. Since we are implementing CQRS, it makes the most sense to define validation for our Commands. We should not bother ourselves with defining validators for Queries, since they don’t contain any behavior. We use Queries only for fetching data from the application.

So, let’s start by installing the FluentValidation package in the Application project:

PM> install-package FluentValidation.AspNetCore

The FluentValidation.AspNetCore package installs both FluentValidation and FluentValidation.DependencyInjectionExtensions packages.

After the installation, we are going to register all the validators inside the service collection by modifying the Program class:

builder.Services.AddValidatorsFromAssembly(typeof(Application.AssemblyReference).Assembly);
builder.Services.AddMediatR(typeof(Application.AssemblyReference).Assembly);
builder.Services.AddAutoMapper(typeof(Program));

Then, let’s create a new Validators folder inside the Application project and add a new class inside:

public sealed class CreateCompanyCommandValidator : AbstractValidator&lt;CreateCompanyCommand&gt;
{
    public CreateCompanyCommandValidator()
    {
        RuleFor(c =&gt; c.Company.Name).NotEmpty().MaximumLength(60);
        RuleFor(c =&gt; c.Company.Address).NotEmpty().MaximumLength(60);
    }
}

The following using directives are necessary for this class:

using Application.Commands; 
using FluentValidation;

We create the CreateCompanyCommandValidator class that inherits from the AbstractValidator class, specifying the CreateCompanyCommand type. This lets FluentValidation know that the validation is for the CreateCompanyCommand record. Since this record contains a parameter of type CompanyForCreationDto, which is the object that comes from the client and therefore has to be validated, we specify the rules for the properties of that DTO.

The NotEmpty method specifies that the property can’t be null or empty, and the MaximumLength method specifies the maximum string length of the property.
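To see these rules in action without running the whole pipeline, a validator can also be exercised directly. A small sketch (the sample values are invented, and a positional CompanyForCreationDto record is assumed; adjust the construction to match your DTO's actual shape):

```csharp
// Sketch: invoking the validator by hand to inspect its output.
// CompanyForCreationDto is assumed to be a positional record here.
var validator = new CreateCompanyCommandValidator();

var command = new CreateCompanyCommand(
    new CompanyForCreationDto("", "Some Street 1", null));

var result = validator.Validate(command);

// With an empty Name, result.IsValid is false and result.Errors
// contains the failure produced by the NotEmpty rule.
foreach (var error in result.Errors)
    Console.WriteLine($"{error.PropertyName}: {error.ErrorMessage}");
```

This kind of direct invocation is also handy in unit tests for the validator itself.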

33.7.2 Creating Decorators with MediatR PipelineBehavior

The CQRS pattern uses Commands and Queries to convey information, and receive a response. In essence, it represents a request-response pipeline. This gives us the ability to easily introduce additional behavior around each request that is going through the pipeline, without actually modifying the original request.‌

You may be familiar with this technique under the name Decorator pattern. Another example of using the Decorator pattern is the ASP.NET Core Middleware concept, which we talked about in section 1.8.

MediatR has a similar concept to middleware, and it is called IPipelineBehavior:

public interface IPipelineBehavior&lt;in TRequest, TResponse&gt; where TRequest : notnull
{
    Task&lt;TResponse&gt; Handle(TRequest request, CancellationToken cancellationToken,
        RequestHandlerDelegate&lt;TResponse&gt; next);
}

The pipeline behavior is a wrapper around a request instance and gives us a lot of flexibility with the implementation. Pipeline behaviors are a good fit for cross-cutting concerns in your application. Good examples of cross-cutting concerns are logging, caching, and of course, validation!
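To make the middleware analogy concrete before we dive into validation, here is a minimal sketch of a logging behavior. The class and its log messages are our own illustration rather than part of the project, and the ILoggerManager service from earlier chapters is assumed:

```csharp
// Sketch of a cross-cutting logging behavior. Like middleware, it runs
// code before and after the inner handler via the "next" delegate.
public sealed class LoggingBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    where TRequest : notnull
{
    private readonly ILoggerManager _logger;

    public LoggingBehavior(ILoggerManager logger) => _logger = logger;

    public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken,
        RequestHandlerDelegate<TResponse> next)
    {
        _logger.LogInfo($"Handling {typeof(TRequest).Name}");

        // Invoke the next behavior in the pipeline, or the handler itself.
        var response = await next();

        _logger.LogInfo($"Handled {typeof(TRequest).Name}");

        return response;
    }
}
```

Registered like any other behavior, it would wrap every request flowing through MediatR without the handlers knowing anything about it.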

Before we use this interface, let’s create a new exception class in the Entities/Exceptions folder:

public sealed class ValidationAppException : Exception
{
    public IReadOnlyDictionary&lt;string, string[]&gt; Errors { get; }

    public ValidationAppException(IReadOnlyDictionary&lt;string, string[]&gt; errors)
        : base("One or more validation errors occurred") =&gt; Errors = errors;
}

Next, to implement the IPipelineBehavior interface, we are going to create another folder named Behaviors in the Application project, and add a single class inside it:

public sealed class ValidationBehavior&lt;TRequest, TResponse&gt; : IPipelineBehavior&lt;TRequest, TResponse&gt;
    where TRequest : IRequest&lt;TResponse&gt;
{
    private readonly IEnumerable&lt;IValidator&lt;TRequest&gt;&gt; _validators;

    public ValidationBehavior(IEnumerable&lt;IValidator&lt;TRequest&gt;&gt; validators) =&gt; _validators = validators;

    public async Task&lt;TResponse&gt; Handle(TRequest request, CancellationToken cancellationToken,
        RequestHandlerDelegate&lt;TResponse&gt; next)
    {
        if (!_validators.Any())
            return await next();

        var context = new ValidationContext&lt;TRequest&gt;(request);

        var errorsDictionary = _validators
            .Select(x =&gt; x.Validate(context))
            .SelectMany(x =&gt; x.Errors)
            .Where(x =&gt; x != null)
            .GroupBy(
                x =&gt; x.PropertyName.Substring(x.PropertyName.IndexOf('.') + 1),
                x =&gt; x.ErrorMessage,
                (propertyName, errorMessages) =&gt; new
                {
                    Key = propertyName,
                    Values = errorMessages.Distinct().ToArray()
                })
            .ToDictionary(x =&gt; x.Key, x =&gt; x.Values);

        if (errorsDictionary.Any())
            throw new ValidationAppException(errorsDictionary);

        return await next();
    }
}

This class has to implement the IPipelineBehavior interface, so we implement its Handle method. We also inject a collection of IValidator implementations in the constructor. The FluentValidation library will scan our project for all AbstractValidator implementations for a given type and then provide us with the instances at runtime. This is how we apply the actual validators that we implemented in our project.

Then, if there are no validation errors, we just call the next delegate to allow the execution of the next component in the pipeline.

But if there are any errors, we extract them from the _validators collection and group them inside the dictionary. If there are entries in our dictionary, we throw the ValidationAppException and pass the dictionary with errors. This exception will be caught inside our global error handler, which we will modify in a minute.

But before we do that, we have to register this behavior in the Program class:

builder.Services.AddMediatR(typeof(Application.AssemblyReference).Assembly);
builder.Services.AddAutoMapper(typeof(Program));
builder.Services.AddTransient(typeof(IPipelineBehavior&lt;,&gt;), typeof(ValidationBehavior&lt;,&gt;));
builder.Services.AddValidatorsFromAssembly(typeof(Application.AssemblyReference).Assembly);

After that, we can modify the ExceptionMiddlewareExtensions class:

public static class ExceptionMiddlewareExtensions
{
    public static void ConfigureExceptionHandler(this WebApplication app, ILoggerManager logger)
    {
        app.UseExceptionHandler(appError =&gt;
        {
            appError.Run(async context =&gt;
            {
                context.Response.ContentType = "application/json";
                var contextFeature = context.Features.Get&lt;IExceptionHandlerFeature&gt;();
                if (contextFeature != null)
                {
                    context.Response.StatusCode = contextFeature.Error switch
                    {
                        NotFoundException =&gt; StatusCodes.Status404NotFound,
                        BadRequestException =&gt; StatusCodes.Status400BadRequest,
                        ValidationAppException =&gt; StatusCodes.Status422UnprocessableEntity,
                        _ =&gt; StatusCodes.Status500InternalServerError
                    };

                    logger.LogError($"Something went wrong: {contextFeature.Error}");

                    if (contextFeature.Error is ValidationAppException exception)
                    {
                        await context.Response
                            .WriteAsync(JsonSerializer.Serialize(new { exception.Errors }));
                    }
                    else
                    {
                        await context.Response.WriteAsync(new ErrorDetails()
                        {
                            StatusCode = context.Response.StatusCode,
                            Message = contextFeature.Error.Message,
                        }.ToString());
                    }
                }
            });
        });
    }
}

So we modify the switch statement to check for the ValidationAppException type and to assign the proper 422 status code.

Then, we use the declaration pattern to test the type of the variable and assign it to a new variable named exception. If the type is ValidationAppException we just write our response to the client providing our errors dictionary as a parameter. Otherwise, we do the same thing we did up until now.
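The declaration pattern used here is plain C# pattern matching; stripped of the middleware context, it boils down to something like this small sketch (the sample dictionary content is invented):

```csharp
// Minimal illustration of the C# declaration pattern: the "is" check
// tests the runtime type and, on success, binds it to a new variable.
Exception error = new ValidationAppException(
    new Dictionary<string, string[]> { ["Name"] = new[] { "Name is required" } });

if (error is ValidationAppException exception)
{
    // "exception" is strongly typed here, so Errors is accessible
    // without any explicit cast.
    Console.WriteLine(exception.Errors.Count);
}
```

The bound variable is scoped to the branch where the test succeeded, which is exactly what the exception handler above relies on.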

Now, we can test this by sending an invalid request:
https://localhost:5001/api/companies

alt text

Excellent, this works great.

Additionally, if the Address property has too many characters, we will see a different message:

alt text

Great.

33.7.3 Validating null Object‌

Now, if we send a request with an empty request body, we are going to get the result produced from our action:
https://localhost:5001/api/companies

alt text

We can see the 400 status code and the error message. It is perfectly fine since we want a Bad Request response if the object sent from the client is null. But if, for any reason, you want to remove that validation from the action and handle it with fluent validation rules, you can do that by modifying the CreateCompanyCommandValidator class and overriding the Validate method:

public sealed class CreateCompanyCommandValidator : AbstractValidator&lt;CreateCompanyCommand&gt;
{
    public CreateCompanyCommandValidator()
    {
        RuleFor(c =&gt; c.Company.Name).NotEmpty().MaximumLength(60);
        RuleFor(c =&gt; c.Company.Address).NotEmpty().MaximumLength(60);
    }

    public override ValidationResult Validate(ValidationContext&lt;CreateCompanyCommand&gt; context)
    {
        return context.InstanceToValidate.Company is null
            ? new ValidationResult(new[]
              {
                  new ValidationFailure("CompanyForCreationDto", "CompanyForCreationDto object is null")
              })
            : base.Validate(context);
    }
}

Now, you can remove the validation check inside the action and send a null body request:

alt text

Pay attention that now the status code is 422 and not 400, because this validation is now part of the fluent validation.

If this solution fits your project, feel free to use it. Our recommendation is to use 422 only for the validation errors, and 400 if the request body is null.

--EOF--