Helmet Detection Based on an Enhanced YOLO Method EI
Conference paper | 2021 , 653 , 84-92 | 2nd International Conference on Artificial Intelligence in China, ChinaAI 2020
Abstract:

Wearing a safety helmet is one of the most important requirements on a construction site and is essential to the safety of workers. Computer vision can be applied to identifying the helmets worn by workers as a form of external supervision. In this paper, helmet detection algorithms based on YOLO models are studied with a special data set in which the training set consists of simple helmet pictures while the test set contains images of complicated real construction sites. In view of the actual conditions of construction sites, several pretreatment methods for the training set are tested to enhance performance. The results show that with proper pretreatment, a YOLOv3 model trained on the simple training set can perform well in detecting helmets on complicated construction sites. © 2021, The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
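The abstract does not say which pretreatment methods were tested, so the following is only a sketch of the kind of training-set pretreatment it alludes to, assuming common photometric and geometric perturbations (brightness jitter, blur, downscaling) implemented with OpenCV; the function and parameter choices are illustrative, not the authors' pipeline.

```python
# Hypothetical sketch of training-set pretreatment for simple helmet images.
# Brightness jitter, Gaussian blur, and downscaling are assumptions, not the
# pretreatments actually reported in the paper.
import random
import cv2
import numpy as np

def pretreat(image: np.ndarray) -> np.ndarray:
    """Randomly perturb a clean helmet picture so it looks more like site footage."""
    out = image.astype(np.float32)

    # Random brightness/contrast jitter (construction-site lighting varies widely).
    alpha = random.uniform(0.6, 1.4)   # contrast
    beta = random.uniform(-30, 30)     # brightness
    out = np.clip(alpha * out + beta, 0, 255).astype(np.uint8)

    # Random Gaussian blur to mimic motion blur and low-quality cameras.
    if random.random() < 0.5:
        k = random.choice([3, 5])
        out = cv2.GaussianBlur(out, (k, k), 0)

    # Random downscale-then-upscale so helmets appear as small, soft targets.
    if random.random() < 0.5:
        h, w = out.shape[:2]
        s = random.uniform(0.3, 0.8)
        small = cv2.resize(out, (int(w * s), int(h * s)))
        out = cv2.resize(small, (w, h))

    return out

print(pretreat(np.full((64, 64, 3), 128, np.uint8)).shape)
```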

Keywords:

Artificial intelligence; Safety devices; Statistical tests

Citation:

GB/T 7714 Zheng, Weizhou , Chang, Jiayi . Helmet Detection Based on an Enhanced YOLO Method [C] . 2021 : 84-92 .
MLA Zheng, Weizhou et al. "Helmet Detection Based on an Enhanced YOLO Method" . (2021) : 84-92 .
APA Zheng, Weizhou , Chang, Jiayi . Helmet Detection Based on an Enhanced YOLO Method . (2021) : 84-92 .
Robot recognizing humans intention and interacting with humans based on a multi-task model combining ST-GCN-LSTM model and YOLO model EI SCIE Scopus
Journal article | 2021 , 430 , 174-184 | Neurocomputing
Abstract:

It is hoped that robots can interact with humans while helping us in our daily lives, and understanding humans' specific intention is the first crucial task in human-robot interaction. In this paper, we first develop a multi-task model for recognizing humans' intention, which is composed of two sub-tasks: human action recognition and hand-held object identification. For the first subtask, an effective ST-GCN-LSTM model is proposed by fusing Spatial Temporal Graph Convolutional Networks and Long Short-Term Memory networks. For the second subtask, the YOLO v3 model is adopted for hand-held object identification. Then, we build a framework for the robot to interact with humans. Finally, the proposed models and the interaction framework are verified on several datasets, and the testing results show the effectiveness of the proposed models and the framework. © 2020 Elsevier B.V.
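The abstract combines an action label from the ST-GCN-LSTM branch with a hand-held-object label from YOLO v3 to recognize an intention, but it does not publish the fusion rule. Below is a minimal sketch assuming a simple lookup from (action, object) pairs to intentions; all labels and the table itself are hypothetical.

```python
# Hypothetical fusion of the two sub-task outputs into an intention label.
# The (action, object) -> intention table is illustrative only.
from typing import Optional, Tuple

ACTION_OBJECT_TO_INTENTION = {
    ("raising_hand", "cup"): "wants_water_refill",
    ("reaching_out", "book"): "wants_object_handover",
    ("pointing", None): "wants_robot_to_look_there",
}

def infer_intention(action: str, held_object: Optional[str]) -> str:
    """Combine action recognition and hand-held object identification."""
    key: Tuple[str, Optional[str]] = (action, held_object)
    if key in ACTION_OBJECT_TO_INTENTION:
        return ACTION_OBJECT_TO_INTENTION[key]
    # Fall back to the object-agnostic rule if the exact pair is unknown.
    return ACTION_OBJECT_TO_INTENTION.get((action, None), "unknown")

print(infer_intention("reaching_out", "book"))   # -> wants_object_handover
```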

Keywords:

Palmprint recognition; Human robot interaction; Convolutional neural networks; Long short-term memory

Citation:

GB/T 7714 Liu, Chunfang , Li, Xiaoli , Li, Qing et al. Robot recognizing humans intention and interacting with humans based on a multi-task model combining ST-GCN-LSTM model and YOLO model [J]. | Neurocomputing , 2021 , 430 : 174-184 .
MLA Liu, Chunfang et al. "Robot recognizing humans intention and interacting with humans based on a multi-task model combining ST-GCN-LSTM model and YOLO model" . | Neurocomputing 430 (2021) : 174-184 .
APA Liu, Chunfang , Li, Xiaoli , Li, Qing , Xue, Yaxin , Liu, Huijun , Gao, Yize . Robot recognizing humans intention and interacting with humans based on a multi-task model combining ST-GCN-LSTM model and YOLO model . | Neurocomputing , 2021 , 430 , 174-184 .
Study on Improved YOLO_v3-based Algorithm for Identifying Open Windows on Building Facades EI Scopus
Conference paper | 2021 , 1769 (1) | 5th International Conference on Computer Science and Information Engineering, ICCSIE 2020
Abstract:

In order to ensure the security of building facades in key areas and improve the efficiency with which security personnel inspect building facades, this paper proposes an improved YOLO v3-based algorithm for detecting and recognizing open windows in building facade images, which extracts open-window features from images and makes predictions on full images with a convolutional neural network. Firstly, because no window dataset is publicly available online and many window types exist in reality, a self-constructed window dataset containing 13,573 images of open windows is used to train and test the window detection model. The K-Means clustering algorithm is then applied to the dataset to select anchor boxes more suitable for window detection, the feature extraction method is strengthened by drawing on the ShuffleNet idea, and the network structure of YOLO v3 is optimized accordingly. Finally, a block detection mechanism is introduced to effectively enhance the network's ability to detect small, dense targets. Experimental results show that the method improves the accuracy and speed of window detection and reduces the workload of security personnel in key areas who would otherwise manually check for open windows on both sides of the street. Fire detection, detection of missing and falling facade bricks, and detection of objects thrown from height are also of great importance. © 2021 Published under licence by IOP Publishing Ltd.
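A minimal sketch of the anchor-selection step described above: cluster the widths and heights of ground-truth window boxes with K-Means and use the cluster centres as anchor boxes. scikit-learn's KMeans on (w, h) pairs is assumed for brevity; YOLO-style implementations often use a 1 − IoU distance instead, and the sample box sizes below are made up.

```python
# Sketch of selecting anchor boxes for window detection via K-Means clustering.
# Real pipelines cluster the full training set; the boxes here are dummies.
import numpy as np
from sklearn.cluster import KMeans

def select_anchors(wh: np.ndarray, k: int = 9) -> np.ndarray:
    """Cluster (width, height) pairs of ground-truth boxes into k anchors."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(wh)
    anchors = km.cluster_centers_
    # Sort anchors by area so they can be assigned to detection scales.
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dummy_wh = rng.uniform(10, 200, size=(500, 2))  # fake box sizes in pixels
    print(select_anchors(dummy_wh, k=9))
```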

Keywords:

Convolutional neural networks; K-means clustering; Personnel; Statistical tests; Image enhancement; Facades; Feature extraction; Network security

Citation:

GB/T 7714 Sun, Guangmin , Lin, Pengfei , Li, Yu . Study on Improved YOLO_v3-based Algorithm for Identifying Open Windows on Building Facades [C] . 2021 .
MLA Sun, Guangmin et al. "Study on Improved YOLO_v3-based Algorithm for Identifying Open Windows on Building Facades" . (2021) .
APA Sun, Guangmin , Lin, Pengfei , Li, Yu . Study on Improved YOLO_v3-based Algorithm for Identifying Open Windows on Building Facades . (2021) .
A Lightweight Convolutional Neural Network Flame Detection Algorithm CPCI-S EI Scopus
Conference paper | 2021 , 83-86 | 11th IEEE International Conference on Electronics Information and Emergency Communication (ICEIEC)
WoS Core Collection citations: 12
Abstract:

Flame detection is a key technical link in realizing intelligent forest-fire prevention and control. However, current fire detection methods generally suffer from a low detection rate, a high false-alarm rate, and poor real-time performance. In order to achieve rapid and accurate recognition of forest fires in natural environments, this paper proposes a lightweight convolutional neural network flame detection algorithm, Yolo-Edge. MobileNetv3, with its depthwise separable convolution structure, replaces Yolov4's original CSPDarknet53 feature extraction backbone, reducing the number of network layers and the model size so that the model can adapt to the working environment of edge devices and to multi-scale prediction. Feature fusion is carried out through the feature pyramid to improve the detection accuracy of small targets. A data set of 2059 flame images in different occlusion environments is used for training and testing, and the F1 and AP values are used to evaluate the differences between models. The test results show that the proposed lightweight improved neural network model has good recognition accuracy and speed, significantly reduces the memory usage of the model, and achieves a good lightweight effect.
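As a rough illustration of why a MobileNetv3-style backbone shrinks the model, the sketch below contrasts a standard 3×3 convolution with a depthwise separable one in PyTorch. It is a generic building block, not the Yolo-Edge network itself.

```python
# Depthwise separable convolution: the core idea behind MobileNet-style
# backbones such as the one Yolo-Edge substitutes for CSPDarknet53.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.Hardswish()  # activation used in MobileNetv3

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Parameter count versus a standard 3x3 convolution of the same shape:
standard = nn.Conv2d(128, 256, 3, padding=1, bias=False)
separable = DepthwiseSeparableConv(128, 256)
print(sum(p.numel() for p in standard.parameters()))   # 294912
print(sum(p.numel() for p in separable.parameters()))  # 34432
```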

Keywords:

flame detection algorithm; convolutional neural network; forest fire

Citation:

GB/T 7714 Li, Wenzheng , Yu, Zongyang . A Lightweight Convolutional Neural Network Flame Detection Algorithm [C] . 2021 : 83-86 .
MLA Li, Wenzheng et al. "A Lightweight Convolutional Neural Network Flame Detection Algorithm" . (2021) : 83-86 .
APA Li, Wenzheng , Yu, Zongyang . A Lightweight Convolutional Neural Network Flame Detection Algorithm . (2021) : 83-86 .
改进YOLOv3的车辆实时检测与信息识别技术 (Real-time vehicle detection and information recognition based on improved YOLOv3) CSCD CQVIP
Journal article | 2020 , 56 (22) , 173-184 | 计算机工程与应用
Abstract:

Real-time vehicle detection and the extraction and recognition of related information in complex, unconstrained natural scenes has long been one of the important research topics in computer vision. A breakthrough in this field would not only improve the practical performance of autonomous driving technology, but also has important practical significance for improving automatic parking-scheduling algorithms and real-time parking-monitoring systems in parking lots. To address problems in current real-time vehicle information detection, such as incomplete vehicle detection regions, low accuracy, and the inability to accurately locate distant vehicles in the scene, a real-time vehicle detection and classification model named Vehicle-YOLO is proposed. Based on the latest YOLOv3 algorithm, the model improves the accuracy and universality of real-time vehicle information detection by changing the image input parameters, enhancing the feature extraction capability of the deep residual network, and using five feature maps of different sizes to successively extract bounding boxes of potential vehicles; its performance is verified and analyzed on the KITTI, VOC, and other data sets. The experimental results show that the Vehicle-YOLO model achieves a mean average precision of 96% on the KITTI data set at a transmission speed of about 40 f/s, maintaining a good real-time detection rate while improving accuracy. In addition, the results of the Vehicle-YOLO detection model on the VOC and other data sets also show varying degrees of accuracy improvement, so the model generalizes well to the localization and detection of common objects and performs better than traditional object detection models.

Keywords:

Convolutional neural network; Real-time vehicle detection; YOLOv3; Feature map; Object localization; Deep residual network

Citation:

GB/T 7714 顾恭 , 徐旭东 . 改进YOLOv3的车辆实时检测与信息识别技术 [J]. | 计算机工程与应用 , 2020 , 56 (22) : 173-184 .
MLA 顾恭 et al. "改进YOLOv3的车辆实时检测与信息识别技术" . | 计算机工程与应用 56 . 22 (2020) : 173-184 .
APA 顾恭 , 徐旭东 . 改进YOLOv3的车辆实时检测与信息识别技术 . | 计算机工程与应用 , 2020 , 56 (22) , 173-184 .
基于深度学习的工业自动化包装缺陷检测方法 (Deep learning-based packaging defect detection method for industrial automation) CQVIP
Journal article | 2020 , 41 (7) , 175-184 | 包装工程
Wanfang citations: 1
Abstract:

Objective: To address the problems of packaging defect detection methods in current industrial automated production that are based on manual feature extraction, namely their complexity, high demand for expert knowledge, poor generality, and difficulty of application in multi-target and complex-background scenarios, a real-time packaging defect detection method based on deep learning is studied. Methods: With relatively few samples, a defect detection method combining the deep-learning Inception-V3 image classification algorithm and the YOLO-V3 object detection algorithm is proposed, and a complete online packaging defect detection system based on computer vision is designed. Results: Experimental results show that the recognition accuracy of the method is 99.49% with a variance of 0.0000506, whereas using the Inception-V3 algorithm alone gives an accuracy of 97.70% with a variance of 0.000251. Conclusion: Compared with general packaging defect detection methods based on manual feature extraction, the complex feature extraction process is avoided. Compared with applying only an image classification algorithm, the method noticeably improves the accuracy and stability of packaging defect detection when the defect region occupies a small proportion of the package, and shows advantages in complex detection backgrounds and multi-target scenes. The defect detection system and method can easily be transferred to other similar online inspection problems.
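A minimal sketch of how a detection model and a classification model could be combined as the abstract describes: the detector proposes candidate defect regions and the classifier confirms each crop. The stub functions stand in for the real YOLO-V3 and Inception-V3 models (which the keyword list suggests are served via TensorFlow Serving), and the thresholds are illustrative, not the authors' published values.

```python
# Hypothetical two-stage pipeline: YOLO-V3 proposes defect regions, then
# Inception-V3 classifies each crop to confirm or reject it.
from typing import List, Tuple
import numpy as np

Box = Tuple[int, int, int, int]  # x1, y1, x2, y2

def yolo_v3_stub(image: np.ndarray) -> List[Tuple[Box, float]]:
    """Placeholder detector: returns (box, confidence) candidates."""
    return [((10, 10, 60, 60), 0.72)]

def inception_v3_stub(crop: np.ndarray) -> float:
    """Placeholder classifier: returns P(defect) for one crop."""
    return 0.95

def detect_defects(image: np.ndarray,
                   det_thresh: float = 0.5,
                   cls_thresh: float = 0.8) -> List[Box]:
    confirmed = []
    for (x1, y1, x2, y2), conf in yolo_v3_stub(image):
        if conf < det_thresh:
            continue                       # weak detection, skip
        crop = image[y1:y2, x1:x2]
        if inception_v3_stub(crop) >= cls_thresh:
            confirmed.append((x1, y1, x2, y2))
    return confirmed

print(detect_defects(np.zeros((480, 640, 3), dtype=np.uint8)))
```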

Keywords:

MQTT; Inception-v3; Defect detection; Transfer learning; YOLO-V3; TensorFlow Serving

Citation:

GB/T 7714 李建明 , 杨挺 , 王惠栋 . 基于深度学习的工业自动化包装缺陷检测方法 [J]. | 包装工程 , 2020 , 41 (7) : 175-184 .
MLA 李建明 et al. "基于深度学习的工业自动化包装缺陷检测方法" . | 包装工程 41 . 7 (2020) : 175-184 .
APA 李建明 , 杨挺 , 王惠栋 . 基于深度学习的工业自动化包装缺陷检测方法 . | 包装工程 , 2020 , 41 (7) , 175-184 .
Detection method of robot optimal grasp posture based on deep learning EI CSCD
Journal article | 2020 , 41 (5) , 108-117 | Chinese Journal of Scientific Instrument
Abstract:

The service robot faces unstructured scenes in grasping tasks. Because of the irregular placement and shapes of objects, it is difficult to accurately calculate the robot's grasp posture. Aiming at this problem, a robot optimal-grasp-posture detection algorithm with a dual-network architecture is proposed. Firstly, the YOLO V3 target detection model is improved, which increases the detection speed of the model and its recognition performance on small target objects. Secondly, a convolutional neural network is used to design a multi-target grasp detection network, which generates the robot's grasp regions in the image. In order to calculate the optimal grasp posture of the robot, an IOU area evaluation algorithm is established, which screens out the optimal grasp area of the target object. The experimental results show that the target detection accuracy of the improved YOLO V3 reaches 91%, the detection accuracy of multi-target grasp detection reaches 86%, and the detection accuracy of the robot's optimal grasp posture reaches above 90%. In summary, the proposed method can efficiently and accurately calculate the optimal grasp area of the target object to meet the requirements of the grasping task. © 2020, Science Press. All rights reserved.
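A minimal sketch of an IOU-style area evaluation of the kind the abstract mentions: score each candidate grasp region by its overlap with the detected object box and keep the best one. Axis-aligned rectangles are assumed for simplicity (grasp rectangles are usually oriented in practice), and the selection rule is an assumption rather than the paper's exact algorithm.

```python
# Hypothetical IOU-based screening of candidate grasp regions.
# Axis-aligned rectangles (x1, y1, x2, y2) are assumed for brevity.
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def best_grasp(object_box: Box, grasp_candidates: List[Box]) -> Optional[Box]:
    """Pick the candidate grasp region that best overlaps the detected object."""
    scored = [(iou(object_box, g), g) for g in grasp_candidates]
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[0][1] if scored and scored[0][0] > 0 else None

print(best_grasp((0, 0, 100, 50), [(10, 5, 60, 45), (200, 200, 250, 250)]))
```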

Keywords:

Convolutional neural networks; Deep learning; Machine design; Network architecture; Robots

Citation:

GB/T 7714 Li, Xiuzhi , Li, Jiahao , Zhang, Xiangyin et al. Detection method of robot optimal grasp posture based on deep learning [J]. | Chinese Journal of Scientific Instrument , 2020 , 41 (5) : 108-117 .
MLA Li, Xiuzhi et al. "Detection method of robot optimal grasp posture based on deep learning" . | Chinese Journal of Scientific Instrument 41 . 5 (2020) : 108-117 .
APA Li, Xiuzhi , Li, Jiahao , Zhang, Xiangyin , Peng, Xiaobin . Detection method of robot optimal grasp posture based on deep learning . | Chinese Journal of Scientific Instrument , 2020 , 41 (5) , 108-117 .
基于深度学习的机器人最优抓取姿态检测方法 (Detection method of robot optimal grasp posture based on deep learning) CSCD CQVIP
Journal article | 2020 , 41 (05) , 108-117 | 仪器仪表学报
CNKI citations: 9
Abstract:

Service robots face unstructured scenes in grasping tasks. Because of the irregular placement and shapes of objects, it is difficult to accurately calculate the robot's grasp posture. To address this problem, a robot optimal-grasp-posture detection algorithm with a dual-network architecture is proposed. First, the YOLO V3 object detection model is improved, raising the model's detection speed and its recognition performance on small objects. Second, a convolutional neural network is used to design a multi-target grasp detection network, which generates grasp regions for the target objects in the image. To calculate the robot's optimal grasp posture, an IOU region evaluation algorithm is established to screen out the optimal grasp region of the target object. The experimental results show that the improved YOLO V3 reaches an object detection accuracy of 91%, the multi-target grasp detection accuracy reaches 86%, and the detection accuracy of the robot's optimal grasp posture reaches above 90%. In summary, the proposed method can efficiently and accurately calculate the optimal grasp area of the target object to meet the requirements of the grasping task.

Keywords:

Object detection; Grasp detection; Deep learning; Robot optimal grasping

Citation:

GB/T 7714 李秀智 , 李家豪 , 张祥银 et al. 基于深度学习的机器人最优抓取姿态检测方法 [J]. | 仪器仪表学报 , 2020 , 41 (05) : 108-117 .
MLA 李秀智 et al. "基于深度学习的机器人最优抓取姿态检测方法" . | 仪器仪表学报 41 . 05 (2020) : 108-117 .
APA 李秀智 , 李家豪 , 张祥银 , 彭小彬 . 基于深度学习的机器人最优抓取姿态检测方法 . | 仪器仪表学报 , 2020 , 41 (05) , 108-117 .
Ocean ship detection and recognition algorithm based on aerial image EI Scopus
Conference paper | 2020 , 218-222 | 2020 Asia-Pacific Conference on Image Processing, Electronics and Computers, IPEC 2020
Abstract:

Because UAV aerial images are easily affected by lighting, sea area, and other conditions, and because there are many kinds of ships whose characteristics differ under different conditions, target recognition is difficult. In order to improve the efficiency of sea-surface supervision and make sea-surface management more intelligent, an ocean ship detection algorithm based on aerial images is proposed. In this paper, an improved Yolo algorithm is mainly used for high-efficiency ship detection in aerial video, achieving real-time performance with a detection speed of 23 fps. In order to improve the accuracy, this paper proposes a standardized mechanism over fixed-length frame intervals, in which the deep-learning Mask R-CNN algorithm performs fine detection on specific frames and reaches a detection mAP of 85%, improving the accuracy without affecting the detection speed. Together these form an efficient and accurate algorithm for detecting ships on the sea, which brings convenience to the management of the sea. © 2020 IEEE.
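A minimal sketch of the coarse/fine scheduling the abstract hints at: the fast Yolo detector runs on every frame, while every N-th frame is additionally passed to the slower, more accurate Mask R-CNN. The stub detectors and the interval value are assumptions; only the scheduling pattern is illustrated.

```python
# Hypothetical coarse/fine scheduling over a video stream: fast detector on
# every frame, slow fine detector every `interval` frames. The two stubs
# stand in for the YOLO and Mask R-CNN models mentioned in the abstract.
from typing import Iterable, List, Tuple
import numpy as np

def fast_yolo_stub(frame: np.ndarray) -> List[Tuple[int, int, int, int]]:
    return [(0, 0, 50, 30)]          # placeholder boxes

def fine_mask_rcnn_stub(frame: np.ndarray) -> List[Tuple[int, int, int, int]]:
    return [(2, 1, 48, 29)]          # placeholder refined boxes

def detect_stream(frames: Iterable[np.ndarray], interval: int = 10):
    results = []
    for i, frame in enumerate(frames):
        boxes = fast_yolo_stub(frame)        # runs on every frame
        if i % interval == 0:                # periodic fine pass
            boxes = fine_mask_rcnn_stub(frame)
        results.append(boxes)
    return results

frames = [np.zeros((360, 640, 3), dtype=np.uint8) for _ in range(25)]
print(len(detect_stream(frames)))
```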

Keywords:

Antennas; Aerial photography; Ships; Photographic equipment; Deep learning; Efficiency; Image enhancement; Surface waters

Citation:

GB/T 7714 Zhao, Dequn , Li, Xinmeng . Ocean ship detection and recognition algorithm based on aerial image [C] . 2020 : 218-222 .
MLA Zhao, Dequn et al. "Ocean ship detection and recognition algorithm based on aerial image" . (2020) : 218-222 .
APA Zhao, Dequn , Li, Xinmeng . Ocean ship detection and recognition algorithm based on aerial image . (2020) : 218-222 .
基于深度卷积神经网络的停车位检测 (Parking space detection based on a deep convolutional neural network) CQVIP
Journal article | 2019 , 42 (21) , 105-108 | 电子测量技术
Abstract:

Improving the accuracy and real-time performance of parking space detection in large parking lots is of great significance. This paper describes a network structure built under the deep-learning framework tensorflow, consisting of a base network and an auxiliary network. The base network is a Resnet network used to extract image feature information and image classification information; the auxiliary network is a multi-scale feature detection network used to extract feature maps at different scales. Finally, a non-maximum suppression algorithm filters out duplicate detection boxes to obtain the best positions of the detected parking spaces. The experimental results show that the network achieves an mAP of 81% at 32 fps; compared with SSD, YOLO, and Faster R-CNN, the mAP is improved by 2%, 4.6%, and 0.5% respectively and the fps by 2, 4, and 24 respectively, effectively improving detection accuracy and real-time performance.
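A minimal sketch of the non-maximum suppression step described above: among overlapping detection boxes, keep the highest-scoring one and discard boxes whose IoU with it exceeds a threshold. This is the standard algorithm in plain NumPy, not code from the paper, and the 0.5 threshold is illustrative.

```python
# Standard non-maximum suppression over (x1, y1, x2, y2) boxes with scores.
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5):
    """Return indices of boxes kept after suppressing overlapping detections."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]            # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the best remaining box with all other remaining boxes.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]  # drop heavily overlapping boxes
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # -> [0, 2]
```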

Keywords:

Tensorflow; Parking space detection; Non-maximum suppression algorithm; Resnet

Citation:

GB/T 7714 王马成 , 黎海涛 . 基于深度卷积神经网络的停车位检测 [J]. | 电子测量技术 , 2019 , 42 (21) : 105-108 .
MLA 王马成 et al. "基于深度卷积神经网络的停车位检测" . | 电子测量技术 42 . 21 (2019) : 105-108 .
APA 王马成 , 黎海涛 . 基于深度卷积神经网络的停车位检测 . | 电子测量技术 , 2019 , 42 (21) , 105-108 .