
1. Environment setup

Before starting, prepare the host environment as follows:

- Ubuntu 18.04
- CUDA 11.3
- PyTorch 1.11.0
- torchvision 0.12.0

Run the following commands on the server.

Create a yolov8 virtual environment:

conda create -n yolov8 python=3.8

Activate the environment:

conda activate yolov8

Install PyTorch 1.11.0 (torch 1.11.0+cu113, torchvision 0.12.0+cu113):

# CUDA 11.3
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
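
As a quick sanity check (a minimal sketch, not part of the original steps), confirm that the CUDA build of PyTorch is active:

import torch

print(torch.__version__)          # expect 1.11.0+cu113
print(torch.cuda.is_available())  # expect True with a working CUDA 11.3 driver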

Download the YOLOv8 code:

Create a yolov8 folder to hold the code: mkdir yolov8
Enter the folder: cd yolov8
Clone the repository: git clone https://github.com/ultralytics/ultralytics.git

Other dependencies:

pip install ultralytics

2. Preparing the VisDrone dataset

Downloading the dataset

Download from the official GitHub link.
Under Task 1: Object Detection in Images, download the four VisDrone-DET dataset archives.

After downloading the zip files, transfer them to the remote server with WinSCP.
On the server, enter the folder containing the zip files and extract them with unzip,
e.g.: unzip VisDrone2019-DET-val.zip

Processing the dataset

The required format is the same as for YOLOv5, so the YOLOv5 data-processing approach applies. The main task is generating the labels; create a visdrone2yolov.py file under the yolov8 folder:

import os
from pathlib import Path

from PIL import Image
from tqdm import tqdm


def visdrone2yolo(dir):
    def convert_box(size, box):
        # Convert a VisDrone box (left, top, width, height) to a YOLO xywh box
        dw = 1. / size[0]
        dh = 1. / size[1]
        return (box[0] + box[2] / 2) * dw, (box[1] + box[3] / 2) * dh, box[2] * dw, box[3] * dh

    (dir / 'labels').mkdir(parents=True, exist_ok=True)  # make labels directory
    pbar = tqdm((dir / 'annotations').glob('*.txt'), desc=f'Converting {dir}')
    for f in pbar:
        img_size = Image.open((dir / 'images' / f.name).with_suffix('.jpg')).size
        lines = []
        with open(f, 'r') as file:  # read annotation.txt
            for row in [x.split(',') for x in file.read().strip().splitlines()]:
                if row[4] == '0':  # VisDrone 'ignored regions' class 0
                    continue
                cls = int(row[5]) - 1  # class index minus 1
                box = convert_box(img_size, tuple(map(int, row[:4])))
                lines.append(f"{cls} {' '.join(f'{x:.6f}' for x in box)}\n")
            with open(str(f).replace(os.sep + 'annotations' + os.sep, os.sep + 'labels' + os.sep), 'w') as fl:
                fl.writelines(lines)  # write label.txt


dir = Path('/home/yolov5/datasets/VisDrone2019')  # the VisDrone2019 folder under the datasets directory

# Convert
for d in 'VisDrone2019-DET-train', 'VisDrone2019-DET-val', 'VisDrone2019-DET-test-dev':
    visdrone2yolo(dir / d)  # convert VisDrone annotations to YOLO labels

After the script runs, a new labels folder is created inside each of the 'VisDrone2019-DET-train', 'VisDrone2019-DET-val' and 'VisDrone2019-DET-test-dev' folders, holding the VisDrone annotations converted to YOLOv8 format.
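
To make the conversion concrete, here is a minimal standalone sketch with a made-up annotation row (the numbers are hypothetical, not from the dataset):

# Hypothetical VisDrone annotation row for a 1360x765 image:
# bbox_left, bbox_top, bbox_width, bbox_height, score, category
row = [684, 8, 273, 116, 1, 1]

w_img, h_img = 1360, 765
cls = row[5] - 1                        # VisDrone category 1 -> YOLO class 0
x = (row[0] + row[2] / 2) / w_img       # normalized box centre x
y = (row[1] + row[3] / 2) / h_img       # normalized box centre y
w = row[2] / w_img                      # normalized width
h = row[3] / h_img                      # normalized height
print(f"{cls} {x:.6f} {y:.6f} {w:.6f} {h:.6f}")  # one line of the YOLO label file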

Editing the dataset configuration file

Open the VisDrone.yaml file under ultralytics-main\ultralytics\datasets\ with Notepad or Notepad++ and change its path parameter to the directory containing the VisDrone2019 folder.
(screenshot: VisDrone.yaml with the updated path)

3. Training / validation / export

Training

Open a terminal (or an IDE such as PyCharm), activate the virtual environment, enter the yolov8 folder, and run the following command to start training:

yolo task=detect mode=train model=yolov8s.pt data=datasets/VisDrone.yaml batch=16 epochs=100 imgsz=640 workers=0 device=0
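
If you prefer Python over the CLI, the same run can be launched with the ultralytics API (a minimal sketch mirroring the command above):

from ultralytics import YOLO

model = YOLO('yolov8s.pt')  # start from the pretrained small model
model.train(data='datasets/VisDrone.yaml', batch=16, epochs=100, imgsz=640, workers=0, device=0)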

Validation

1. Validating on the val split
Activate the yolov8 environment: conda activate yolov8
Enter the yolov8 folder: cd pyCode/yolov8/ultralytics/ultralytics/
Then run the following command to evaluate on the validation data:

yolo task=detect mode=val model=runs/detect/train4/weights/best.pt data=datasets/VisDrone.yaml device=0

The validation results are as follows.
(screenshot: val results)

2. Validating on the test split
In datasets/VisDrone.yaml, change the val path to VisDrone2019-DET-test-dev/images:

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
# path: ../datasets/VisDrone  # dataset root dir
path: /home/xxx/yolov5/datasets/VisDrone  # dataset root dir
train: VisDrone2019-DET-train/images  # train images (relative to 'path') 6471 images
val: VisDrone2019-DET-test-dev/images  # val images (relative to 'path'); was VisDrone2019-DET-val/images (548 images)
test: VisDrone2019-DET-test-dev/images  # test images (optional) 1610 images

(screenshot: the edited VisDrone.yaml)

Then run the following command to evaluate on the VisDrone2019-DET-test-dev split:

yolo task=detect mode=val model=runs/detect/train4/weights/best.pt data=datasets/VisDrone.yaml device=0

The results are as follows.
(screenshot: test-dev results)

Export

Run the following command to export the model:

yolo task=detect mode=export model=runs/detect/train4/weights/best.pt
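
The export can also be driven from Python (a minimal sketch; here exporting to ONNX, whereas the CLI command above uses the default format):

from ultralytics import YOLO

model = YOLO('runs/detect/train4/weights/best.pt')
model.export(format='onnx')  # writes the exported model next to the weights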


This case study uses YOLOv8 together with Python Qt to build a GUI program for real-time AI object detection.

Note

If you use a model trained with a different YOLO version (e.g. YOLOv5), adjust the corresponding import and detection code.


Example code

The example below implements YOLO detection on a live camera video stream.


from PySide6 import QtWidgets, QtCore, QtGui
import cv2, os, time
from threading import Thread

# Silence the debug output YOLO would otherwise print for every frame
os.environ['YOLO_VERBOSE'] = 'False'
from ultralytics import YOLO

class MWindow(QtWidgets.QMainWindow):

    def __init__(self):
        super().__init__()

        # Build the UI
        self.setupUI()

        self.camBtn.clicked.connect(self.startCamera)
        self.stopBtn.clicked.connect(self.stop)

        # Timer that controls the display frame rate
        self.timer_camera = QtCore.QTimer()
        # On each timeout, call self.show_camera
        self.timer_camera.timeout.connect(self.show_camera)

        # Load the YOLO nano model; the first load is slow (about 20 seconds)
        self.model = YOLO('yolov8n.pt')

        # Queue of frames awaiting analysis; holds at most one frame
        self.frameToAnalyze = []

        # Start the frame-analysis worker thread
        Thread(target=self.frameAnalyzeThreadFunc, daemon=True).start()

    def setupUI(self):
        self.resize(1200, 800)

        self.setWindowTitle('白月黑羽 YOLO-Qt 演示')

        # central widget
        centralWidget = QtWidgets.QWidget(self)
        self.setCentralWidget(centralWidget)

        # main layout inside the central widget
        mainLayout = QtWidgets.QVBoxLayout(centralWidget)

        # Upper half of the window: video display
        topLayout = QtWidgets.QHBoxLayout()
        self.label_ori_video = QtWidgets.QLabel(self)
        self.label_treated = QtWidgets.QLabel(self)
        self.label_ori_video.setMinimumSize(520, 400)
        self.label_treated.setMinimumSize(520, 400)
        self.label_ori_video.setStyleSheet('border:1px solid #D7E2F9;')
        self.label_treated.setStyleSheet('border:1px solid #D7E2F9;')

        topLayout.addWidget(self.label_ori_video)
        topLayout.addWidget(self.label_treated)

        mainLayout.addLayout(topLayout)

        # Lower half of the window: log box and buttons
        groupBox = QtWidgets.QGroupBox(self)

        bottomLayout = QtWidgets.QHBoxLayout(groupBox)
        self.textLog = QtWidgets.QTextBrowser()
        bottomLayout.addWidget(self.textLog)

        mainLayout.addWidget(groupBox)

        btnLayout = QtWidgets.QVBoxLayout()
        self.videoBtn = QtWidgets.QPushButton('🎞️视频文件')
        self.camBtn = QtWidgets.QPushButton('📹摄像头')
        self.stopBtn = QtWidgets.QPushButton('🛑停止')
        btnLayout.addWidget(self.videoBtn)
        btnLayout.addWidget(self.camBtn)
        btnLayout.addWidget(self.stopBtn)
        bottomLayout.addLayout(btnLayout)

    def startCamera(self):
        # See https://docs.opencv.org/3.4/dd/d43/tutorial_py_video_display.html
        # On Windows, cv2.CAP_DSHOW makes opening the camera much faster;
        # on Linux/Mac use V4L, FFMPEG or GSTREAMER instead
        self.cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
        if not self.cap.isOpened():
            print("Camera 0 could not be opened")
            return

        if self.timer_camera.isActive() == False:  # start the timer if it is not running
            self.timer_camera.start(50)

    def show_camera(self):
        ret, frame = self.cap.read()  # read one frame from the stream
        if not ret:
            return

        # Resize the captured frame
        frame = cv2.resize(frame, (520, 400))
        # Convert to RGB; OpenCV images are BGR
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        qImage = QtGui.QImage(frame.data, frame.shape[1], frame.shape[0],
                              QtGui.QImage.Format_RGB888)  # wrap as a QImage
        # Show the QImage in the original-video label
        self.label_ori_video.setPixmap(QtGui.QPixmap.fromImage(qImage))

        # If no frame is currently queued for analysis, queue this one
        if not self.frameToAnalyze:
            self.frameToAnalyze.append(frame)

    def frameAnalyzeThreadFunc(self):
        while True:
            if not self.frameToAnalyze:
                time.sleep(0.01)
                continue

            frame = self.frameToAnalyze.pop(0)

            results = self.model(frame)[0]

            img = results.plot(line_width=1)

            qImage = QtGui.QImage(img.data, img.shape[1], img.shape[0],
                                  QtGui.QImage.Format_RGB888)  # wrap as a QImage

            self.label_treated.setPixmap(QtGui.QPixmap.fromImage(qImage))  # show in the result label

            time.sleep(0.5)

    def stop(self):
        self.timer_camera.stop()  # stop the timer
        self.cap.release()  # release the video stream
        self.label_ori_video.clear()  # clear the video display areas
        self.label_treated.clear()


app = QtWidgets.QApplication()
window = MWindow()
window.show()
app.exec()

I. YOLOv8 overview

1. YOLOv8 source code:

Repository: https://github.com/ultralytics/ultralytics

2. Official documentation:

CLI - Ultralytics YOLOv8 Docs

3. Pretrained models on Baidu Netdisk:

They are needed for training, and the official download site is slow.

If the models cannot be downloaded, contact QQ: 187100248.

Link: https://pan.baidu.com/s/1YfMxRPGk8LF75a4cbgYxGg  Extraction code: rd7b

II. Model training

1. Annotating the traffic-light data:

There are 23 classes:

red_light, green_light, yellow_light, off_light, part_ry_light, part_rg_light, part_yg_light, ryg_light, countdown_off_light, countdown_on_light, shade_light, zero, one, two, three, four, five, six, seven, eight, nine, brokeNumber, brokenLight

Annotation tool: the integrated Labelme/LabelImage tool (see the CSDN post "AI标注工具Labelme和LabelImage集成工具").

(screenshot: image format after annotation)

2. Training environment:

1) Ubuntu 18.04
2) CUDA 11.7 + cuDNN 8.0.6
3) OpenCV 4.5.5
4) PyTorch 1.8.1 (GPU)
5) Python 3.9

3. Data conversion:

1) Conversion between the annotation .xml files and the YOLO .txt labels is needed; the script below converts YOLO-format .txt labels (together with their images) into Pascal VOC .xml files:

import os
import shutil
import xml.etree.ElementTree as ET
from xml.etree.ElementTree import Element, SubElement
from PIL import Image
import cv2

classes = ['red_light', 'green_light', 'yellow_light', 'off_light', 'part_ry_light', 'part_rg_light',
           'part_yg_light', 'ryg_light', 'countdown_off_light', 'countdown_on_light', 'shade_light',
           'zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine',
           'brokeNumber', 'brokenLight']

class Xml_make(object):
    def __init__(self):
        super().__init__()

    def __indent(self, elem, level=0):
        i = "\n" + level * "\t"
        if len(elem):
            if not elem.text or not elem.text.strip():
                elem.text = i + "\t"
            if not elem.tail or not elem.tail.strip():
                elem.tail = i
            for elem in elem:
                self.__indent(elem, level + 1)
            if not elem.tail or not elem.tail.strip():
                elem.tail = i
        else:
            if level and (not elem.tail or not elem.tail.strip()):
                elem.tail = i

    def _imageinfo(self, list_top):
        annotation_root = ET.Element('annotation')
        annotation_root.set('verified', 'no')
        tree = ET.ElementTree(annotation_root)
        '''
        0:xml_savepath 1:folder,2:filename,3:path
        4:checked,5:width,6:height,7:depth
        '''
        folder_element = ET.Element('folder')
        folder_element.text = list_top[1]
        annotation_root.append(folder_element)

        filename_element = ET.Element('filename')
        filename_element.text = list_top[2]
        annotation_root.append(filename_element)

        path_element = ET.Element('path')
        path_element.text = list_top[3]
        annotation_root.append(path_element)

        # checked_element = ET.Element('checked')
        # checked_element.text = list_top[4]
        # annotation_root.append(checked_element)

        source_element = ET.Element('source')
        database_element = SubElement(source_element, 'database')
        database_element.text = 'Unknown'
        annotation_root.append(source_element)

        size_element = ET.Element('size')
        width_element = SubElement(size_element, 'width')
        width_element.text = str(list_top[5])
        height_element = SubElement(size_element, 'height')
        height_element.text = str(list_top[6])
        depth_element = SubElement(size_element, 'depth')
        depth_element.text = str(list_top[7])
        annotation_root.append(size_element)

        segmented_person_element = ET.Element('segmented')
        segmented_person_element.text = '0'
        annotation_root.append(segmented_person_element)

        return tree, annotation_root

    def _bndbox(self, annotation_root, list_bndbox):
        for i in range(0, len(list_bndbox), 9):
            object_element = ET.Element('object')
            name_element = SubElement(object_element, 'name')
            name_element.text = list_bndbox[i]

            # flag_element = SubElement(object_element, 'flag')
            # flag_element.text = list_bndbox[i + 1]

            pose_element = SubElement(object_element, 'pose')
            pose_element.text = list_bndbox[i + 2]

            truncated_element = SubElement(object_element, 'truncated')
            truncated_element.text = list_bndbox[i + 3]

            difficult_element = SubElement(object_element, 'difficult')
            difficult_element.text = list_bndbox[i + 4]

            bndbox_element = SubElement(object_element, 'bndbox')
            xmin_element = SubElement(bndbox_element, 'xmin')
            xmin_element.text = str(list_bndbox[i + 5])
            ymin_element = SubElement(bndbox_element, 'ymin')
            ymin_element.text = str(list_bndbox[i + 6])
            xmax_element = SubElement(bndbox_element, 'xmax')
            xmax_element.text = str(list_bndbox[i + 7])
            ymax_element = SubElement(bndbox_element, 'ymax')
            ymax_element.text = str(list_bndbox[i + 8])

            annotation_root.append(object_element)

        return annotation_root

    def txt_to_xml(self, list_top, list_bndbox):
        tree, annotation_root = self._imageinfo(list_top)
        annotation_root = self._bndbox(annotation_root, list_bndbox)
        self.__indent(annotation_root)
        tree.write(list_top[0], encoding='utf-8', xml_declaration=True)

def txt_2_xml(source_path, xml_save_dir, jpg_save_dir, txt_dir):
    COUNT = 0
    for folder_path_tuple, folder_name_list, file_name_list in os.walk(source_path):
        for file_name in file_name_list:
            file_suffix = os.path.splitext(file_name)[-1]
            if file_suffix != '.jpg':
                continue
            list_top = []
            list_bndbox = []
            path = os.path.join(folder_path_tuple, file_name)
            xml_save_path = os.path.join(xml_save_dir, file_name.replace(file_suffix, '.xml'))
            txt_path = os.path.join(txt_dir, file_name.replace(file_suffix, '.txt'))
            filename = file_name  # os.path.splitext(file_name)[0]
            checked = 'NO'
            # print(file_name)
            im = Image.open(path)
            im_w = im.size[0]
            im_h = im.size[1]
            shutil.copy(path, jpg_save_dir)
            if im_w * im_h > 34434015:
                print(file_name)
            if im_w < 100:
                print(file_name)
            width = str(im_w)
            height = str(im_h)
            depth = '3'
            flag = 'rectangle'
            pose = 'Unspecified'
            truncated = '0'
            difficult = '0'
            list_top.extend([xml_save_path, folder_path_tuple, filename, path, checked, width, height, depth])
            for line in open(txt_path, 'r'):
                line = line.strip()
                info = line.split(' ')
                name = classes[int(info[0])]
                x_cen = float(info[1]) * im_w
                y_cen = float(info[2]) * im_h
                w = float(info[3]) * im_w
                h = float(info[4]) * im_h
                xmin = int(x_cen - w / 2) - 1
                ymin = int(y_cen - h / 2) - 1
                xmax = int(x_cen + w / 2) + 3
                ymax = int(y_cen + h / 2) + 3
                if xmin < 0:
                    xmin = 0
                if ymin < 0:
                    ymin = 0
                if xmax > im_w - 1:
                    xmax = im_w - 1
                if ymax > im_h - 1:
                    ymax = im_h - 1
                if w > 5 and h > 5:
                    list_bndbox.extend([name, flag, pose, truncated, difficult,
                                        str(xmin), str(ymin), str(xmax), str(ymax)])
            if xmin < 0 or xmax > im_w - 1 or ymin < 0 or ymax > im_h - 1:
                print(xml_save_path)
            Xml_make().txt_to_xml(list_top, list_bndbox)
            COUNT += 1
            # print(COUNT, xml_save_path)

if __name__ == "__main__":
    out_xml_path = "/home/TL_TrainData/"  # output directory for the .xml files
    out_jpg_path = "/home/TL_TrainData/"  # output directory for the .jpg files
    txt_path = "/home/Data/TrafficLight/trainData"  # folder holding the YOLO .txt labels
    images_path = "/home/TrafficLight/trainData"  # folder holding the images
    txt_2_xml(images_path, out_xml_path, out_jpg_path, txt_path)

4. Building the training data:

2) Organize the training samples into images and labels folders: put the pictures under images and the .txt label files under labels.

(screenshots: the resulting images and labels folders)

5. Training:

1) First install the training package:

pip install ultralytics

2) Edit the training-data configuration file coco128_light.yaml, a customized copy of coco128.yaml:

# Ultralytics YOLO 🚀, AGPL-3.0 license
# COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017) by Ultralytics
# Example usage: yolo train data=coco128.yaml
# parent
# ├── ultralytics
# └── datasets
#     └── coco128  ← downloads here (7 MB)

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: /home/Data/TrafficLight/datasets  # dataset root dir
train: images  # train images (relative to 'path')
val: images  # val images (relative to 'path')
test:  # test images (optional)

# Parameters
nc: 23  # number of classes

# Classes
names:
  0: red_light
  1: green_light
  2: yellow_light
  3: off_light
  4: part_ry_light
  5: part_rg_light
  6: part_yg_light
  7: ryg_light
  8: countdown_off_light
  9: countdown_on_light
  10: shade_light
  11: zero
  12: one
  13: two
  14: three
  15: four
  16: five
  17: six
  18: seven
  19: eight
  20: nine
  21: brokeNumber
  22: brokenLight

# Download script/URL (optional)
# download: https://ultralytics.com/assets/coco128.zip

3) Run train_yolov8x_light.sh, whose contents are:

yolo detect train data=coco128_light.yaml model=./runs/last.pt epochs=100 imgsz=640 workers=16 batch=32

Training then starts:

(screenshot: training start)

III. Validating the model

1. Image test:

from ultralytics import YOLO

model = YOLO('best.pt')
results = model('bus.jpg')

for r in results:
    print(r.boxes)

2. Video test:

import cv2
from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO('best.pt')

# Open the video file
video_path = "test_car_person_1080P.mp4"
cap = cv2.VideoCapture(video_path)

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()
    if success:
        # Run YOLOv8 inference on the frame
        results = model(frame)

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Display the annotated frame
        cv2.imshow("YOLOv8 Inference", annotated_frame)
        cv2.waitKey(10)

IV. Exporting to ONNX

1. Training output: after training, best.pt and last.pt are generated under weights:

(screenshot: files generated by training)

2. Once training has finished, use best.pt to generate best.onnx with the following command:

yolo export model=best.pt imgsz=640 format=onnx opset=12
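
To sanity-check the exported file, a minimal sketch with onnxruntime (assuming the package is installed; for imgsz=640 a YOLOv8 detection head exports with shape (1, 4 + nc, 8400), i.e. (1, 27, 8400) for the 23 classes here):

import onnxruntime as ort

sess = ort.InferenceSession('best.onnx', providers=['CPUExecutionProvider'])
out = sess.get_outputs()[0]
print(out.name, out.shape)  # expect something like: output0 (1, 27, 8400)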

V. YOLOv8 C++ recognition with OpenCV

1. Development environment:

1) Win7/Win10
2) VS2019
3) OpenCV 4.7.0

2. main function code:

#include <iostream>
#include <vector>
#include "opencv2/opencv.hpp"
#include "inference.h"
#include <io.h>
#include <thread>

#define socklen_t int
#pragma comment (lib, "ws2_32.lib")

using namespace std;
using namespace cv;

int getFiles(std::string path, std::vector<std::string>& files, std::vector<std::string>& names)
{
    int i = 0;
    intptr_t hFile = 0;
    struct _finddata_t c_file;
    std::string imageFile = path + "*.*";

    if ((hFile = _findfirst(imageFile.c_str(), &c_file)) == -1L)
    {
        _findclose(hFile);
        return -1;
    }
    else
    {
        while (true)
        {
            std::string strname(c_file.name);
            if (std::string::npos != strname.find(".jpg") || std::string::npos != strname.find(".png") || std::string::npos != strname.find(".bmp"))
            {
                std::string fullName = path + c_file.name;
                files.push_back(fullName);
                std::string cutname = strname.substr(0, strname.rfind("."));
                names.push_back(cutname);
            }
            if (_findnext(hFile, &c_file) != 0)
            {
                _findclose(hFile);
                break;
            }
        }
    }
    return 0;
}

int main()
{
    std::string projectBasePath = "./"; // Set your ultralytics base path

    bool runOnGPU = true;

    //
    // Pass in either:
    //
    // "yolov8s.onnx" or "yolov5s.onnx"
    //
    // To run Inference with yolov8/yolov5 (ONNX)
    //

    // Note that in this example the classes are hard-coded and 'classes.txt' is a place holder.
    Inference inf(projectBasePath + "/best.onnx", cv::Size(640, 640), "classes.txt", runOnGPU);

    std::vector<std::string> files;
    std::vector<std::string> names;
    getFiles("./test/", files, names);

    //std::vector<std::string> imageNames;
    //imageNames.push_back(projectBasePath + "/test/20221104_8336.jpg");
    //imageNames.push_back(projectBasePath + "/test/20221104_8339.jpg");

    for (int i = 0; i < files.size(); ++i)
    {
        cv::Mat frame = cv::imread(files[i]);

        // Inference starts here...
        clock_t start, end;
        float time;
        start = clock();
        std::vector<Detection> output = inf.runInference(frame);
        end = clock();
        time = (float)(end - start); // CLOCKS_PER_SEC;
        printf("timeCount = %f\n", time);

        int detections = output.size();
        std::cout << "Number of detections:" << detections << std::endl;

        for (int i = 0; i < detections; ++i)
        {
            Detection detection = output[i];

            cv::Rect box = detection.box;
            cv::Scalar color = detection.color;

            // Detection box
            cv::rectangle(frame, box, color, 2);

            // Detection box text
            std::string classString = detection.className + ' ' + std::to_string(detection.confidence).substr(0, 4);
            cv::Size textSize = cv::getTextSize(classString, cv::FONT_HERSHEY_DUPLEX, 1, 2, 0);
            cv::Rect textBox(box.x, box.y - 40, textSize.width + 10, textSize.height + 20);

            cv::rectangle(frame, textBox, color, cv::FILLED);
            cv::putText(frame, classString, cv::Point(box.x + 5, box.y - 10), cv::FONT_HERSHEY_DUPLEX, 1, cv::Scalar(0, 0, 0), 2, 0);
        }
        // Inference ends here...

        // This is only for preview purposes
        float scale = 0.8;
        cv::resize(frame, frame, cv::Size(frame.cols * scale, frame.rows * scale));
        cv::imshow("Inference", frame);
        cv::waitKey(10);
    }
}

3. YOLOv8 header file inference.h:

#ifndef INFERENCE_H
#define INFERENCE_H

// Cpp native
#include <fstream>
#include <vector>
#include <string>
#include <random>

// OpenCV / DNN / Inference
#include <opencv2/imgproc.hpp>
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>

struct Detection
{
    int class_id{0};
    std::string className{};
    float confidence{0.0};
    cv::Scalar color{};
    cv::Rect box{};
};

class Inference
{
public:
    Inference(const std::string &onnxModelPath, const cv::Size &modelInputShape = {640, 640}, const std::string &classesTxtFile = "", const bool &runWithCuda = true);
    std::vector<Detection> runInference(const cv::Mat &input);

private:
    void loadClassesFromFile();
    void loadOnnxNetwork();
    cv::Mat formatToSquare(const cv::Mat &source);

    std::string modelPath{};
    std::string classesPath{};
    bool cudaEnabled{};

    std::vector<std::string> classes{ "red_light", "green_light", "yellow_light", "off_light", "part_ry_light", "part_rg_light", "part_yg_light", "ryg_light", "countdown_off_light", "countdown_on_light", "shade_light", "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "brokeNumber", "brokenLight" };

    cv::Size2f modelShape{};

    float modelConfidenceThreshold {0.25};
    float modelScoreThreshold {0.45};
    float modelNMSThreshold {0.50};

    bool letterBoxForSquare = true;

    cv::dnn::Net net;
};

#endif // INFERENCE_H

4. YOLOv8 source file inference.cpp:

#include "inference.h"

Inference::Inference(const std::string &onnxModelPath, const cv::Size &modelInputShape, const std::string &classesTxtFile, const bool &runWithCuda)
{
    modelPath = onnxModelPath;
    modelShape = modelInputShape;
    classesPath = classesTxtFile;
    cudaEnabled = runWithCuda;

    loadOnnxNetwork();
    // loadClassesFromFile(); The classes are hard-coded for this example
}

std::vector<Detection> Inference::runInference(const cv::Mat &input)
{
    cv::Mat modelInput = input;
    if (letterBoxForSquare && modelShape.width == modelShape.height)
        modelInput = formatToSquare(modelInput);

    cv::Mat blob;
    cv::dnn::blobFromImage(modelInput, blob, 1.0/255.0, modelShape, cv::Scalar(), true, false);
    net.setInput(blob);

    std::vector<cv::Mat> outputs;
    net.forward(outputs, net.getUnconnectedOutLayersNames());

    int rows = outputs[0].size[1];
    int dimensions = outputs[0].size[2];

    bool yolov8 = false;
    // yolov5 has an output of shape (batchSize, 25200, 85) (Num classes + box[x,y,w,h] + confidence[c])
    // yolov8 has an output of shape (batchSize, 84, 8400) (Num classes + box[x,y,w,h])
    if (dimensions > rows) // Check if the shape[2] is more than shape[1] (yolov8)
    {
        yolov8 = true;
        rows = outputs[0].size[2];
        dimensions = outputs[0].size[1];

        outputs[0] = outputs[0].reshape(1, dimensions);
        cv::transpose(outputs[0], outputs[0]);
    }

    float *data = (float *)outputs[0].data;

    float x_factor = modelInput.cols / modelShape.width;
    float y_factor = modelInput.rows / modelShape.height;

    std::vector<int> class_ids;
    std::vector<float> confidences;
    std::vector<cv::Rect> boxes;

    for (int i = 0; i < rows; ++i)
    {
        if (yolov8)
        {
            float *classes_scores = data + 4;

            cv::Mat scores(1, classes.size(), CV_32FC1, classes_scores);
            cv::Point class_id;
            double maxClassScore;

            minMaxLoc(scores, 0, &maxClassScore, 0, &class_id);

            if (maxClassScore > modelScoreThreshold)
            {
                confidences.push_back(maxClassScore);
                class_ids.push_back(class_id.x);

                float x = data[0];
                float y = data[1];
                float w = data[2];
                float h = data[3];

                int left = int((x - 0.5 * w) * x_factor);
                int top = int((y - 0.5 * h) * y_factor);
                int width = int(w * x_factor);
                int height = int(h * y_factor);

                boxes.push_back(cv::Rect(left, top, width, height));
            }
        }
        else // yolov5
        {
            float confidence = data[4];

            if (confidence >= modelConfidenceThreshold)
            {
                float *classes_scores = data + 5;

                cv::Mat scores(1, classes.size(), CV_32FC1, classes_scores);
                cv::Point class_id;
                double max_class_score;

                minMaxLoc(scores, 0, &max_class_score, 0, &class_id);

                if (max_class_score > modelScoreThreshold)
                {
                    confidences.push_back(confidence);
                    class_ids.push_back(class_id.x);

                    float x = data[0];
                    float y = data[1];
                    float w = data[2];
                    float h = data[3];

                    int left = int((x - 0.5 * w) * x_factor);
                    int top = int((y - 0.5 * h) * y_factor);
                    int width = int(w * x_factor);
                    int height = int(h * y_factor);

                    boxes.push_back(cv::Rect(left, top, width, height));
                }
            }
        }

        data += dimensions;
    }

    std::vector<int> nms_result;
    cv::dnn::NMSBoxes(boxes, confidences, modelScoreThreshold, modelNMSThreshold, nms_result);

    std::vector<Detection> detections{};
    for (unsigned long i = 0; i < nms_result.size(); ++i)
    {
        int idx = nms_result[i];

        Detection result;
        result.class_id = class_ids[idx];
        result.confidence = confidences[idx];

        std::random_device rd;
        std::mt19937 gen(rd());
        std::uniform_int_distribution<int> dis(100, 255);
        result.color = cv::Scalar(dis(gen), dis(gen), dis(gen));

        result.className = classes[result.class_id];
        result.box = boxes[idx];

        detections.push_back(result);
    }

    return detections;
}

void Inference::loadClassesFromFile()
{
    std::ifstream inputFile(classesPath);
    if (inputFile.is_open())
    {
        std::string classLine;
        while (std::getline(inputFile, classLine))
            classes.push_back(classLine);
        inputFile.close();
    }
}

void Inference::loadOnnxNetwork()
{
    net = cv::dnn::readNetFromONNX(modelPath);
    if (cudaEnabled)
    {
        std::cout << "\nRunning on CUDA" << std::endl;
        net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
        net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);
    }
    else
    {
        std::cout << "\nRunning on CPU" << std::endl;
        net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
        net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
    }
}

cv::Mat Inference::formatToSquare(const cv::Mat &source)
{
    int col = source.cols;
    int row = source.rows;
    int _max = MAX(col, row);
    cv::Mat result = cv::Mat::zeros(_max, _max, CV_8UC3);
    source.copyTo(result(cv::Rect(0, 0, col, row)));
    return result;
}

5. Results:

(screenshots: the VS2019 project running; traffic-light recognition results)

Installation

sudo apt install lm-sensors curl hddtemp  # install the sensor tools
sensors-detect                            # detect sensors
sudo apt install conky                    # install conky
conky &                                   # run conky

Sample sensor output

acpitz-virtual-0
Adapter: Virtual device
temp1:          +49.5°C  (crit = +99.0°C)

coretemp-isa-0000
Adapter: ISA adapter
Physical id 0:  +49.0°C  (high = +100.0°C, crit = +100.0°C)
Core 0:         +49.0°C  (high = +100.0°C, crit = +100.0°C)
Core 1:         +49.0°C  (high = +100.0°C, crit = +100.0°C)

(screenshot: default conky appearance)

By default, conky runs as a floating window and uses the base configuration file at /etc/conky/conky.conf.

Integrating with the desktop

cp /etc/conky/conky.conf /home/$USER/.conkyrc  # copy the default configuration file

Terminology

CA (Certificate Authority)

The authority that issues and manages digital certificates. It holds a certificate of its own and can use it to sign certificates for others.

RootCA

The root certificate held by the authority. Installing a root certificate expresses trust in that authority; all other certificates are issued under this root, so once the root is added to the trusted root store, every certificate it signs is trusted automatically.

SubCA

An intermediate certificate authority, holding a certificate issued by the root authority.

CSR (Certificate Signing Request)

The certificate request file. When applying for a digital certificate, the applicant generates a private key and, alongside it, a CSR. The applicant submits the CSR to the certificate authority, which signs it with its own key to produce the certificate file issued to the user.
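
As an illustration of what generating a key and CSR looks like outside of the openssl CLI, here is a minimal sketch with Python's cryptography package (the package choice, file name and subject values are assumptions for illustration, not part of the original walkthrough):

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Generate a private key, then a CSR binding its public half to a subject name
key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, u"example.io"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, u"Example"),
    ]))
    .sign(key, hashes.SHA256())  # signed with the key itself: proof of possession
)

with open("example.io.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))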

Common file extensions
Extension — Meaning
.crt, .cer — certificate file
.key — private key file
.csr — certificate signing request file
.pem — Base64-encoded file; may hold a certificate, a key, or both. Apache and NGINX prefer this encoding, and it is the default storage format used by openssl.
.der — binary file; may hold private keys, public keys and certificates. The default for most browsers and the usual certificate format on Windows.

Certificate chain

A certificate chain usually has three levels: root certificate, intermediate certificate, and server certificate. In the correct chain order the server certificate sits at the bottom; it contains the server's domain name, the server's public key, the signature value, and so on. Above it is the intermediate certificate, which may itself be several certificates combined, and at the top is the root. Verifying a server's identity means validating the whole chain: every level carries a signature value. The root verifies its own signature with its own CA public key and also verifies the intermediate's signature, and the intermediate's public key in turn verifies the signature of the certificate below it.
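
To watch chain validation happen, here is a minimal sketch using Python's ssl module (it assumes the root-ca.crt generated below, and that example.io resolves to a test server, e.g. via an /etc/hosts entry):

import socket
import ssl

# Trust only our own root; the server must present a chain
# (server certificate plus any intermediates) leading back to it
context = ssl.create_default_context(cafile="root-ca.crt")

with socket.create_connection(("example.io", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.io") as tls:
        # wrap_socket raises ssl.SSLError if the chain does not verify
        print(tls.getpeercert()["subject"])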

Generating the root certificate

Generating the root private key

# Generate the CA's private key
# A passphrase must be set (entered twice)
openssl genrsa -des3 -out root-ca.priv.key 4096

# Strip the passphrase from the key; otherwise it must be typed every time the key is used.
# The passphrase from the previous step is required once more
openssl rsa -in root-ca.priv.key -out root-ca.key

The key generated in the first step carries a passphrase. The side effect is that the passphrase must be entered during use, which is a problem in some scenarios: Apache starting automatically, for instance, or tools that offer no way to enter a passphrase at all. The 3DES encryption can be removed from the key so that no passphrase is needed; the trade-off is that if anyone obtains the unencrypted private key, the corresponding certificate must be revoked to avoid harm.

Generating the self-signed root certificate request

For -subj, see the CSR parameter reference in the appendix.

# Generate the root certificate's self-signed request
openssl req -new -key root-ca.key \
-subj "/C=CN/ST=Tianjin/L=Tianjin/O=Example/OU=DEV/CN=Example Root" \
-out root-ca.csr

Generating the self-signed root certificate

# Parameter notes
# -x509  produce an X.509 certificate; the X.509 structure is described in ASN.1 (Abstract Syntax Notation One)
# -days  certificate validity in days
openssl x509 -req -in root-ca.csr -signkey root-ca.key -out root-ca.crt -days 3650

Generating an intermediate certificate

For simple scenarios this step can be skipped and the end-entity certificates issued directly from the root.

Use root-ca to sign sub-ca's certificate signing request. An intermediate certificate is one that is allowed to issue further certificates below it; by default an end-entity certificate is produced instead, and even if such a certificate is used to issue client or server certificates, verification will ultimately fail.

Generating the intermediate private key

# Method 1 (same as for the root key)
openssl genrsa -des3 -out mid-ca.priv.key 4096
openssl rsa -in mid-ca.priv.key -out mid-ca.key

# Method 2
openssl genpkey -algorithm RSA -out mid-ca.key -pkeyopt rsa_keygen_bits:4096

Generating the intermediate certificate request

For -subj, see the CSR parameter reference in the appendix.

# Generate the intermediate CSR
openssl req -new -key mid-ca.key \
-subj "/C=CN/ST=Tianjin/L=Tianjin/O=Example/OU=DEV/CN=Example Mid" \
-out mid-ca.csr

# Or generate the key and CSR in one step
openssl req -new -newkey rsa:4096 -nodes -keyout mid-ca.key -out mid-ca.csr \
-subj "/C=CN/ST=Tianjin/L=Tianjin/O=Example/OU=DEV/CN=Example Mid"

Generating the intermediate certificate

# Issue the certificate; basicConstraints=CA:TRUE marks it as able to issue further certificates
openssl x509 -req -extfile <(printf "subjectKeyIdentifier=hash\nauthorityKeyIdentifier=keyid:always,issuer:always\nbasicConstraints=CA:TRUE") \
-days 3650 -in mid-ca.csr -CA root-ca.crt -CAkey root-ca.key \
-CAcreateserial -out mid-ca.crt

Verifying the intermediate certificate

openssl verify -CAfile root-ca.crt mid-ca.crt

Generating the end-entity certificate

Assume the server domain is example.io.

Generating the end-entity private key

# Method 1 (same as for the root key)
openssl genrsa -des3 -out example.io.priv.key 4096
openssl rsa -in example.io.priv.key -out example.io.key

# Method 2
openssl genpkey -algorithm RSA -out example.io.key -pkeyopt rsa_keygen_bits:4096

Generating the end-entity certificate request

For -subj, see the CSR parameter reference in the appendix.

# Generate the end-entity CSR
openssl req -new -key example.io.key -out example.io.csr -subj "/CN=example.io"

Generating the end-entity certificate

# Issue the certificate directly from the root (when no intermediate is used)
openssl x509 -req -days 3650 \
-extfile v3.ext \
-CA root-ca.crt -CAkey root-ca.key -CAcreateserial \
-in example.io.csr -out example.io.crt

# Or issue it from the intermediate
openssl x509 -req -extfile v3.ext -days 365 -in example.io.csr -CA mid-ca.crt -CAkey mid-ca.key -CAcreateserial -out example.io.crt

# Export a pfx
openssl pkcs12 -export -out example.io.pfx -inkey example.io.key -in example.io.crt

For v3.ext, see the X.509 extension reference in the appendix.

Verifying the end-entity certificate

openssl verify -CAfile root-ca.crt example.io.crt

# When the certificate was issued by the intermediate, supply the intermediate as well
openssl verify -CAfile root-ca.crt -untrusted mid-ca.crt example.io.crt

Viewing certificate information

# View the key; a .PEM file is displayed as Base64 plaintext
openssl rsa -noout -text -in cakey.key

# View a certificate's contents
openssl x509 -noout -text -in cacert.crt

# Converting certificate encodings
# PEM to DER
openssl x509 -in cacert.crt -outform der -out cacert.der
# DER to PEM
openssl x509 -in cert.crt -inform der -outform pem -out cacert.pem

Checking certificate validity

openssl's s_server command can simulate a server, using the certificate issued by the administrator together with the private key generated by the applicant when creating the CSR:

openssl s_server -cert example.io.crt -key example.io.key -debug -HTTP -accept 443

Then visit https://<ip address> in a browser to check whether the certificate is valid (the root certificate must first be imported into the trusted root certification authorities).

Revoking a certificate

A certificate usually only needs to be revoked before it expires when, for example, the user's private key has leaked.

Assume the certificate to be revoked is cert.pem:

# Revoke the certificate
# Optional parameters:
# -crldays   the next revocation list will be published in n days
# -crlhours  the next revocation list will be published in n hours
openssl ca -revoke cert.pem

# View the revocation list
openssl crl -in testca.crl -text -noout

Appendix

CSR parameter reference
Parameter — Meaning
/C — Country Name (2 letter code), e.g. "CN".
/ST — State or Province Name, in full.
/L — Locality Name (e.g. city), in full.
/O — Organization Name (e.g. company), in full.
/OU — Organizational Unit Name (e.g. section), in full.
/CN — Common Name (e.g. your name or your server's hostname); usually the server's hostname.
emailAddress — Email Address, the contact address for the certificate.

These values populate the certificate request file. In practice some fields may not be required, depending on your scenario and the CA's requirements. "Common Name" is usually the most important field and should match your server's domain or hostname; the other fields can be filled in as appropriate.

Example

/C=CN/ST=Tianjin/L=Tianjin/O=Example/OU=DEV/CN=example.com/emailAddress=dev@example.com

X.509 extension reference

v3.ext

authorityKeyIdentifier=keyid,issuer
subjectKeyIdentifier=hash
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
#extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1=example.io
DNS.2=*.example.io
IP.3=192.168.0.2

extendedKeyUsage specifies the certificate's purpose. Common values:

serverAuth — assures the identity of a remote computer
clientAuth — proves your identity to a remote computer
codeSigning — assures that software came from its publisher and has not been altered after release
emailProtection — protects e-mail messages
timeStamping — allows data to be signed with the current time; if unset, all application policies apply by default

SubjectAlternativeName

The DNS entries pin the domain names the site may use, here example.io and *.example.io.
The IP entry pins the site's IP address; if the certificate contents do not match reality, the browser reports an error.


Creating and deleting virtual machines

# List virtual machines
virsh list --all

# Create a virtual machine
virt-install --virt-type=kvm --name={vm name} --vcpus=4 \
--memory=1024 --location={iso file} \
--disk path={vm disk path}.qcow2,size=30,format=qcow2 \
--network bridge=virbr0 --graphics none --extra-args='console=ttyS0' \
--force

# Delete a virtual machine
virsh undefine {vm name}

Power operations

# Start a VM
virsh start {vm name}

# Shut down a VM
virsh shutdown {vm name}

# Force power-off
virsh destroy {vm name}

# Suspend a VM
virsh suspend {vm name}

# Resume a suspended VM
virsh resume {vm name}

# Start the VM automatically when the host boots
virsh autostart {vm name}

# Disable autostart
virsh autostart --disable {vm name}

Backup and cloning

# Start a VM from its configuration file
virsh create /etc/libvirt/qemu/{vm name}.xml

# Back up a VM's configuration file
virsh dumpxml {vm name} > {save path}

# Restore a VM from the backed-up configuration file
virsh create {backed-up xml path}

# Clone a VM
virt-clone -o {vm name} -n localhost -f /virtual/KVM/{vm name}.qcow2

Snapshots

# Create a snapshot
virsh snapshot-create {vm name}

# List snapshots
virsh snapshot-list {vm name}

# Revert to a snapshot
virsh snapshot-revert {vm name} {snapshot name}

# Delete a snapshot
virsh snapshot-delete {vm name} {snapshot name}

Using Markdown link syntax

Markdown's link syntax with an explicit URL can create links within the site, using either an absolute or a relative address; the difference is whether the address starts with /:

Using an absolute address

The code:

# Format: [title](post path)
[Hexo 增加站内文章链接](/Hexo/Hexo-增加站内文章链接)

Here, Hexo-增加站内文章链接 is the post's md file name; when a post is created with hexo n, spaces are converted to hyphens (-). /Hexo is a subdirectory added under _posts to keep posts organized, so Hexo-增加站内文章链接.md lives under _posts/Hexo/.

The result:

Hexo 增加站内文章链接

Hexo handles absolute and relative addresses differently. For an absolute address such as /Hexo/Hexo-博客配置, the generated target URL does not change.

Using a relative address

The code:

[Hexo 增加站内文章链接](Hexo/Hexo-增加站内文章链接)

For the relative address Hexo/Hexo-增加站内文章链接, the generated target URL is appended to the post's own URL, producing /Hexo/Hexo/Hexo-增加站内文章链接, which is clearly not what was intended. For anchor links within a post, however, this style fits very well.

The code:

# Format: [title](#heading-to-jump-to)
[测试文章内跳转锚点](#测试文章内跳转锚点)

The result:
跳转文章内测试锚点

The generated URL jumps correctly to the anchor inside the post. Note that spaces in headings are replaced with -.

Using the post_link tag

Hexo's post URL rules are configurable: _config.yml can make URLs include dates, directories and so on. This makes raw Markdown links inconvenient: you need to know the target URL, and if the rules change or the site migrates, all the links have to be updated.

Fortunately, Hexo provides the post_link tag to solve this.

The code:

# Format: {% post_link path-under-_posts 'link text' %}
{% post_link Hexo/Hexo-博客配置 'Hexo 博客配置' %}

Here, Hexo-博客配置 is the post's md file name; when a post is created with hexo n, spaces are converted to hyphens (-). Hexo is a subdirectory added under _posts to keep posts organized, so Hexo-博客配置.md lives under _posts/Hexo.

The result:

Hexo 博客配置

Links written this way automatically adapt to the post URL rules in _config.yml.

Summary

Comparing the two approaches: prefer the post_link tag when linking to other posts on the site, and prefer Markdown syntax when linking to anchors within a post.

测试文章内跳转锚点

Example target for the in-post anchor link above.

Creating an ASP.NET Core Web API with Visual Studio 2022

In Visual Studio 2022, choose the ASP.NET Core Web API or ASP.NET Core gRPC template.

Install the dependencies, either through NuGet or with the dotnet CLI:

dotnet add package Microsoft.EntityFrameworkCore.SqlServer --version 6.0.33
dotnet add package Microsoft.EntityFrameworkCore.Tools --version 6.0.33
dotnet add package Microsoft.AspNetCore.Identity.EntityFrameworkCore --version 6.0.33
dotnet add package Microsoft.AspNetCore.Authentication.JwtBearer --version 6.0.33

Add configuration in appsettings.json:

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "Database": {
    "Driver": "SqlServer",
    "Host": "127.0.0.1",
    "Port": 6543,
    "DbName": "SAMPLE",
    "User": "postgres",
    "Password": "postgres"
  },
  "Jwt": {
    "Audience": "",
    "Issuer": "",
    "Secret": ""
  }
}

Prepare the Model classes:

using Microsoft.AspNetCore.Identity;

namespace Samples.Identity.Model;

public class Role : IdentityRole<int>
{
}

public class RoleClaim : IdentityRoleClaim<int>
{
}

public class User : IdentityUser<int>
{
}

public class UserClaim : IdentityUserClaim<int>
{
}

public class UserLogin : IdentityUserLogin<int>
{
}

public class UserRole : IdentityUserRole<int>
{
}

public class UserToken : IdentityUserToken<int>
{
}

Redefine the table and column names with the Fluent API:

using Samples.Identity.Model;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

namespace Samples.Identity.Configurations;

public class RoleClaimConfiguration : IEntityTypeConfiguration<RoleClaim>
{
    public void Configure(EntityTypeBuilder<RoleClaim> builder)
    {
        builder.ToTable("SYS_ROLE_CLAIM").HasKey(x => x.Id);
        builder.Property(x => x.Id).HasColumnName("ID").ValueGeneratedOnAdd();
        builder.Property(x => x.RoleId).HasColumnName("ROLE_ID");
        builder.Property(x => x.ClaimType).HasColumnName("CLAIM_TYPE").HasMaxLength(50);
        builder.Property(x => x.ClaimValue).HasColumnName("CLAIM_VALUE").HasMaxLength(50);
    }
}

public class RoleConfiguration : IEntityTypeConfiguration<Role>
{
    public void Configure(EntityTypeBuilder<Role> builder)
    {
        builder.ToTable("SYS_ROLE").HasKey(x => x.Id);
        builder.Property(x => x.Id).HasColumnName("ID").ValueGeneratedOnAdd();
        builder.Property(x => x.Name).HasColumnName("NAME").HasMaxLength(50);
        builder.Property(x => x.NormalizedName).HasColumnName("NORMALIZED_NAME").HasMaxLength(50);
        builder.Property(x => x.ConcurrencyStamp).HasColumnName("CONCURRENCY_STAMP").HasMaxLength(50);

        builder.HasData(new Role { Id = 1, Name = "SuperAdmin", NormalizedName = "超级管理员" });
        builder.HasData(new Role { Id = 2, Name = "Admin", NormalizedName = "管理员" });
        builder.HasData(new Role { Id = 3, Name = "Operator", NormalizedName = "操作员" });
    }
}

public class UserClaimConfiguration : IEntityTypeConfiguration<UserClaim>
{
    public void Configure(EntityTypeBuilder<UserClaim> builder)
    {
        builder.ToTable("SYS_USER_CLAIM").HasKey(x => x.Id);
        builder.Property(x => x.Id).HasColumnName("ID").ValueGeneratedOnAdd();
        builder.Property(x => x.UserId).HasColumnName("USER_ID");
        builder.Property(x => x.ClaimType).HasColumnName("CLAIM_TYPE").HasMaxLength(50);
        builder.Property(x => x.ClaimValue).HasColumnName("CLAIM_VALUE").HasMaxLength(50);
    }
}

public class UserConfiguration : IEntityTypeConfiguration<User>
{
    public void Configure(EntityTypeBuilder<User> builder)
    {
        builder.ToTable("SYS_USER").HasKey(x => x.Id);
        builder.Property(x => x.Id).HasColumnName("ID").ValueGeneratedOnAdd();
        builder.Property(x => x.UserName).HasColumnName("USERNAME").HasMaxLength(20);
        builder.Property(x => x.NormalizedUserName).HasColumnName("NORMALIZED_USERNAME").HasMaxLength(20);
        builder.Property(x => x.Email).HasColumnName("EMAIL").HasMaxLength(50);
        builder.Property(x => x.NormalizedEmail).HasColumnName("NORMALIZED_EMAIL").HasMaxLength(50);
        builder.Property(x => x.EmailConfirmed).HasColumnName("EMAIL_CONFIRMED");
        builder.Property(x => x.PasswordHash).HasColumnName("PASSWORD_HASH").HasMaxLength(256);
        builder.Property(x => x.SecurityStamp).HasColumnName("SECURITY_STAMP").HasMaxLength(256);
        builder.Property(x => x.ConcurrencyStamp).HasColumnName("CONCURRENCY_STAMP").HasMaxLength(256);
        builder.Property(x => x.PhoneNumber).HasColumnName("PHONE_NUMBER").HasMaxLength(15);
        builder.Property(x => x.PhoneNumberConfirmed).HasColumnName("PHONE_NUMBER_CONFIRMED");
        builder.Property(x => x.TwoFactorEnabled).HasColumnName("TWO_FACTOR_ENABLED");
        builder.Property(x => x.LockoutEnd).HasColumnName("LOCKOUT_END");
        builder.Property(x => x.LockoutEnabled).HasColumnName("LOCKOUT_ENABLED");
        builder.Property(x => x.AccessFailedCount).HasColumnName("ACCESS_FAILED_COUNT");

        builder.HasData(new User
        {
            Id = 1,
            UserName = "admin",
            NormalizedUserName = "ADMIN",
            PasswordHash = "AQAAAAEAACcQAAAAELR93lThWhjLUaJtEMPGJXUR88rGK9RjjZytUhr0Jfy3J7JaObJCZAcu5MhPl39erg==",
            SecurityStamp = "LA4OVIYIUDB7CB44WR4CTS6FCY4VRWSO",
        });
    }
}

public class UserLoginConfiguration : IEntityTypeConfiguration<UserLogin>
{
    public void Configure(EntityTypeBuilder<UserLogin> builder)
    {
        builder.ToTable("SYS_USER_LOGIN");
        builder.Property(x => x.LoginProvider).HasColumnName("LOGIN_PROVIDER").HasMaxLength(20);
        builder.Property(x => x.ProviderKey).HasColumnName("PROVIDER_KEY").HasMaxLength(20);
        builder.Property(x => x.ProviderDisplayName).HasColumnName("PROVIDER_DISPLAY_NAME").HasMaxLength(20);
        builder.Property(x => x.UserId).HasColumnName("USER_ID");
    }
}

public class UserRoleConfiguration : IEntityTypeConfiguration<UserRole>
{
    public void Configure(EntityTypeBuilder<UserRole> builder)
    {
        builder.ToTable("SYS_USER_ROLE");
        builder.Property(x => x.UserId).HasColumnName("USER_ID");
        builder.Property(x => x.RoleId).HasColumnName("ROLE_ID");
    }
}

public class UserTokenConfiguration : IEntityTypeConfiguration<UserToken>
{
    public void Configure(EntityTypeBuilder<UserToken> builder)
    {
        builder.ToTable("SYS_USER_TOKEN");
        builder.Property(x => x.UserId).HasColumnName("USER_ID");
        builder.Property(x => x.LoginProvider).HasColumnName("LOGIN_PROVIDER").HasMaxLength(20);
        builder.Property(x => x.Name).HasColumnName("NAME").HasMaxLength(50);
        builder.Property(x => x.Value).HasColumnName("VALUE").HasMaxLength(256);
    }
}

Create the DataContext:

using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;
using Samples.Identity.Configurations;
using Samples.Identity.Model;

namespace Samples.Identity;

public class DataContext : IdentityDbContext<User, Role, int, UserClaim, UserRole, UserLogin, RoleClaim, UserToken>
{
    public DataContext(DbContextOptions<DataContext> options)
        : base(options)
    {
    }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        base.OnModelCreating(builder);
        builder.ApplyConfiguration(new UserConfiguration());
        builder.ApplyConfiguration(new RoleConfiguration());
        builder.ApplyConfiguration(new UserClaimConfiguration());
        builder.ApplyConfiguration(new UserRoleConfiguration());
        builder.ApplyConfiguration(new UserLoginConfiguration());
        builder.ApplyConfiguration(new RoleClaimConfiguration());
        builder.ApplyConfiguration(new UserTokenConfiguration());
    }
}

Add the ViewModel:

using System.ComponentModel.DataAnnotations;

namespace Samples.Identity.ViewModels;

public class LoginViewModel
{
    [Required(ErrorMessage = "用户名不能为空")]
    public string? Username { get; set; }

    [Required(ErrorMessage = "密码不能为空")]
    public string? Password { get; set; }
}

Add the Controller:

using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Mvc;
using Microsoft.IdentityModel.Tokens;
using Samples.Identity.Model;
using Samples.Identity.ViewModels;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;

namespace Samples.Identity.Controllers;

[Route("api/[controller]")]
[ApiController]
public class AuthenticateController : ControllerBase
{
    private readonly UserManager<User> m_userManager;
    private readonly RoleManager<Role> m_roleManager;
    private readonly IConfiguration m_configuration;

    private readonly JwtOption m_jwtOptions;

    public AuthenticateController(UserManager<User> userManager,
        RoleManager<Role> roleManager,
        IConfiguration configuration)
    {
        m_userManager = userManager;
        m_roleManager = roleManager;
        m_configuration = configuration;

        m_jwtOptions = m_configuration.GetSection("Jwt").Get<JwtOption>();
    }

    [HttpPost]
    [Route("login")]
    public async Task<IActionResult> Login([FromBody] LoginViewModel model)
    {
        var user = await m_userManager.FindByNameAsync(model.Username);
        if (user != null && await m_userManager.CheckPasswordAsync(user, model.Password))
        {
            var roles = await m_userManager.GetRolesAsync(user);

            var claims = new List<Claim>
            {
                new Claim(ClaimTypes.Name, user.UserName),
                new Claim(JwtRegisteredClaimNames.Jti, Guid.NewGuid().ToString())
            };

            foreach (var role in roles)
            {
                claims.Add(new Claim(ClaimTypes.Role, role));
            }

            var token = GetToken(claims);

            return Ok(new
            {
                token = new JwtSecurityTokenHandler().WriteToken(token),
                expiration = token.ValidTo
            });
        }

        return Unauthorized();
    }

    private JwtSecurityToken GetToken(List<Claim> authClaims)
    {
        var authSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(m_jwtOptions.Secret));
        var token = new JwtSecurityToken(
            issuer: m_jwtOptions.Issuer,
            audience: m_jwtOptions.Audience,
            expires: DateTime.Now.AddHours(3),
            claims: authClaims,
            signingCredentials: new SigningCredentials(authSigningKey, SecurityAlgorithms.HmacSha256)
        );

        return token;
    }
}

Add the following in Program.cs:

using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.Identity;
using Microsoft.EntityFrameworkCore;
using Microsoft.IdentityModel.Tokens;
using Samples.Identity;
using Samples.Identity.Model;
using System.Text;

var builder = WebApplication.CreateBuilder(args);
ConfigurationManager configuration = builder.Configuration;

// Add services to the container.

// For Entity Framework
builder.Services.AddDbContext<DataContext>(options => options.UseSqlServer(configuration.GetConnectionString("ConnStr")));

// For Identity
builder.Services.AddIdentity<User, Role>(
    //options => {
    //    options.Password.RequireDigit = false;
    //    options.Password.RequireLowercase = false;
    //    options.Password.RequireUppercase = false;
    //    options.Password.RequireNonAlphanumeric = false;
    //    options.Password.RequiredLength = 8;
    //    options.Password.RequiredUniqueChars = 1;

    //    options.Lockout.DefaultLockoutTimeSpan = TimeSpan.FromMinutes(10);
    //    options.Lockout.MaxFailedAccessAttempts = 5;
    //    options.Lockout.AllowedForNewUsers = true;
    //}
    )
    .AddEntityFrameworkStores<DataContext>()
    .AddDefaultTokenProviders();

var jwtOptions = builder.Configuration.GetSection("Jwt").Get<JwtOption>();

// Adding Authentication
builder.Services.AddAuthentication(options =>
{
    options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
    options.DefaultScheme = JwtBearerDefaults.AuthenticationScheme;
})
// Adding Jwt Bearer
.AddJwtBearer(options =>
{
    options.SaveToken = true;
    options.RequireHttpsMetadata = false;
    options.TokenValidationParameters = new TokenValidationParameters()
    {
        ValidateIssuer = true,
        ValidateAudience = true,
        ValidAudience = jwtOptions.Audience,
        ValidIssuer = jwtOptions.Issuer,
        IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(jwtOptions.Secret))
    };
});

builder.Services.AddControllers();
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

// Authentication & Authorization
app.UseAuthentication();
app.UseAuthorization();

app.MapControllers();

app.Run();

Run the database migration (from Visual Studio's Package Manager Console):

add-migration L0
update-database