
Your search:

Scholar name: 施云惠

球面图像的SLIC算法 (SLIC algorithm for spherical images) CSCD
Journal article | 2021, 47(3), 216-223 | 北京工业大学学报 (Journal of Beijing University of Technology)
Abstract:

The simple linear iterative clustering (SLIC) superpixel segmentation algorithm can be applied directly to spherical images in equirectangular projection (ERP) format. However, the projection breaks the local correlation of the spherical data, so SLIC fails to generate suitable superpixels in some regions of an ERP image, which degrades the algorithm's performance. To address this problem, the ERP spherical image is first resampled to generate samples that are approximately uniformly distributed on the sphere; the resampled data are then reorganized into a new two-dimensional representation of the spherical image that preserves the local correlation of the spherical data; finally, based on this representation, the geometric relations of the spherical data are incorporated into the SLIC algorithm, yielding a SLIC algorithm for spherical images. The original SLIC algorithm and the proposed algorithm are applied to several ERP images, and their superpixel segmentation results are compared for different numbers of clusters. Experimental results show that the proposed spherical-image SLIC algorithm outperforms the original SLIC algorithm in objective quality; the generated superpixels are unaffected by where they lie on the sphere, have closed contours, and exhibit good similarity and consistency on the sphere.
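
The core idea, running SLIC-style clustering with spherical rather than planar geometry, can be sketched as follows. This NumPy sketch is a simplification under stated assumptions: it skips the paper's resampling and 2-D reorganization steps and simply replaces the planar SLIC distance with geodesic distance on the unit sphere; the grayscale ERP array `erp`, the seed count `k` and the compactness weight `m` are illustrative parameters, not the paper's.

```python
# Simplified SLIC-like clustering on ERP pixels using spherical geometry.
import numpy as np

def erp_to_unit_vectors(h, w):
    """Unit vectors (x, y, z) for every ERP pixel center."""
    lat = (0.5 - (np.arange(h) + 0.5) / h) * np.pi            # +pi/2 .. -pi/2
    lon = ((np.arange(w) + 0.5) / w - 0.5) * 2 * np.pi         # -pi .. +pi
    lon, lat = np.meshgrid(lon, lat)
    return np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)                     # (h, w, 3)

def fibonacci_sphere(k):
    """k approximately uniform seed directions on the sphere."""
    i = np.arange(k) + 0.5
    phi = np.arccos(1 - 2 * i / k)                              # polar angle
    theta = np.pi * (1 + 5 ** 0.5) * i                          # golden angle
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=-1)                      # (k, 3)

def spherical_slic(erp, k=64, m=10.0, iters=5):
    h, w = erp.shape
    xyz = erp_to_unit_vectors(h, w).reshape(-1, 3)
    color = erp.reshape(-1, 1).astype(np.float64)
    centers_xyz = fibonacci_sphere(k)
    centers_col = color[np.argmax(xyz @ centers_xyz.T, axis=0)]  # intensity of nearest pixel
    s = np.sqrt(4 * np.pi / k)                                   # nominal cluster radius (radians)
    for _ in range(iters):
        geo = np.arccos(np.clip(xyz @ centers_xyz.T, -1.0, 1.0)) # geodesic distances (N, k)
        col = np.abs(color - centers_col.T)                       # intensity distances (N, k)
        labels = np.argmin((col / m) ** 2 + (geo / s) ** 2, axis=1)
        for c in range(k):                                        # recompute cluster centers
            sel = labels == c
            if sel.any():
                v = xyz[sel].mean(axis=0)
                centers_xyz[c] = v / np.linalg.norm(v)
                centers_col[c] = color[sel].mean()
    return labels.reshape(h, w)

# Example: 32 superpixels on a small synthetic ERP image
labels = spherical_slic(np.random.rand(64, 128), k=32)
print(labels.shape, labels.max() + 1)
```

A full implementation would restrict each pixel's search to nearby seeds and enforce superpixel connectivity, as standard SLIC does.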

Keywords:

SLIC algorithm; clustering; superpixel; resampling; spherical image; image segmentation

Cite:

GB/T 7714 吴刚 , 施云惠 , 尹宝才 . 球面图像的SLIC算法 [J]. | 北京工业大学学报 , 2021 , 47 (3) : 216-223 .
MLA 吴刚 等. "球面图像的SLIC算法" . | 北京工业大学学报 47 . 3 (2021) : 216-223 .
APA 吴刚 , 施云惠 , 尹宝才 . 球面图像的SLIC算法 . | 北京工业大学学报 , 2021 , 47 (3) , 216-223 .
一种基于残差学习及空间变换网络的光场超分辨率重建方法 (A light field super-resolution reconstruction method based on residual learning and spatial transformer networks) incoPat
Patent | 2021-03-05 | CN202110254556.3
Abstract:

A light field super-resolution reconstruction method based on residual learning and spatial transformer networks, belonging to the field of computer vision. To address the low spatial resolution of light field images, the invention performs super-resolution reconstruction. A 4+1 fusion model is first designed to learn the features of the light field images, and the localization network of the spatial transformer network is improved to extract features more effectively. In operation, the images are fed into the improved spatial transformer network according to their relative positions, and features are reconstructed and fused by a convolutional neural network with a recursive structure; throughout this process the feature information of neighboring views is fully exploited, compensating for the insufficient use of neighboring-view information in other methods, and together with the network design this gives the model better reconstruction quality.
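
As a rough illustration of one component mentioned above, the PyTorch sketch below shows a generic spatial transformer block (a localization network predicting an affine warp, followed by grid sampling). It is not the patent's improved localizer or its 4+1 fusion model; the layer sizes are arbitrary assumptions.

```python
# Hypothetical spatial transformer block for aligning a light-field view.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    """Predicts an affine warp from a view and resamples it accordingly."""
    def __init__(self, channels=1):
        super().__init__()
        self.localization = nn.Sequential(              # localization network
            nn.Conv2d(channels, 8, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
            nn.Flatten(),
            nn.Linear(8 * 8 * 8, 32), nn.ReLU(),
            nn.Linear(32, 6),                            # 2x3 affine parameters
        )
        # initialize the last layer to the identity transform
        self.localization[-1].weight.data.zero_()
        self.localization[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.localization(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

# Example: warp one low-resolution view before feature fusion
view = torch.randn(1, 1, 64, 64)
aligned = SpatialTransformer(channels=1)(view)
print(aligned.shape)                                     # torch.Size([1, 1, 64, 64])
```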

Cite:

GB/T 7714 施云惠 , 马振轩 , 王瑾 et al. 一种基于残差学习及空间变换网络的光场超分辨率重建方法 : CN202110254556.3[P]. | 2021-03-05 .
MLA 施云惠 et al. "一种基于残差学习及空间变换网络的光场超分辨率重建方法" : CN202110254556.3. | 2021-03-05 .
APA 施云惠 , 马振轩 , 王瑾 , 尹宝才 . 一种基于残差学习及空间变换网络的光场超分辨率重建方法 : CN202110254556.3. | 2021-03-05 .
基于事件相机的车辆目标检测方法 (Vehicle detection method based on an event camera) incoPat
Patent | 2021-02-09 | CN202110182127.X
Abstract:

The invention discloses a vehicle detection method based on an event camera, using deep learning to detect vehicles in extreme scenes. An event camera can generate frames and event data asynchronously, which helps greatly in overcoming motion blur and extreme lighting conditions. Events are first converted into event images; the frame image and the event image are then fed into a fusion convolutional neural network, with additional convolutional layers extracting features from the event image; in the intermediate layers of the network, the features of the two are combined by fusion modules; finally, the loss function is redesigned to improve the effectiveness of vehicle detection. The method compensates for the shortcomings of detecting vehicles from frame images alone in extreme scenes: by fusing event images into the network on top of the frame images, vehicle detection in extreme scenes is improved.
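
A minimal sketch of the first step described above, converting asynchronous events into an event image that a CNN can consume alongside the frame. The (x, y, t, polarity) event layout and the two-channel accumulation are assumptions for illustration, not the patent's exact representation.

```python
# Accumulate event polarities into a 2-channel "event image".
import numpy as np

def events_to_image(events, height, width):
    """events: array of shape (N, 4) with columns (x, y, t, polarity in {-1, +1}).
    Returns an array of shape (2, height, width): positive and negative counts."""
    img = np.zeros((2, height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    pol = events[:, 3] > 0
    np.add.at(img[0], (y[pol], x[pol]), 1.0)      # positive-polarity channel
    np.add.at(img[1], (y[~pol], x[~pol]), 1.0)    # negative-polarity channel
    return img / max(img.max(), 1.0)              # simple normalization

# Example: 1000 random events on a 260x346 sensor (DAVIS-like resolution)
rng = np.random.default_rng(0)
ev = np.column_stack([rng.integers(0, 346, 1000), rng.integers(0, 260, 1000),
                      np.sort(rng.random(1000)), rng.choice([-1, 1], 1000)])
event_image = events_to_image(ev, 260, 346)
print(event_image.shape)                          # (2, 260, 346)
```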

Cite:

GB/T 7714 孙艳丰 , 刘萌允 , 齐娜 et al. 基于事件相机的车辆目标检测方法 : CN202110182127.X[P]. | 2021-02-09 .
MLA 孙艳丰 et al. "基于事件相机的车辆目标检测方法" : CN202110182127.X. | 2021-02-09 .
APA 孙艳丰 , 刘萌允 , 齐娜 , 施云惠 , 尹宝才 . 基于事件相机的车辆目标检测方法 : CN202110182127.X. | 2021-02-09 .
SMSIR: Spherical Measure Based Spherical Image Representation SCIE
Journal article | 2021, 30, 6377-6391 | IEEE TRANSACTIONS ON IMAGE PROCESSING
Citations in the Web of Science Core Collection: 3
Abstract:

This paper presents a spherical measure based spherical image representation (SMSIR) and sphere-based resampling methods for generating our representation. On this basis, a spherical wavelet transform is also proposed. We first propose a formal recursive definition of the spherical triangle elements of SMSIR and a dyadic index scheme. The index scheme, which supports global random access and need not be pre-computed and stored, can efficiently index the elements of SMSIR like planar images. Two resampling methods to generate SMSIR from the most commonly used ERP (Equirectangular Projection) representation are presented. Notably, the spherical measure based resampling, which exploits the mapping between the spherical and the parameter domain, achieves higher computational efficiency than the spherical RBF (Radial Basis Function) based resampling. Finally, we design high-pass and low-pass filters with lifting schemes based on the dyadic index to further verify the efficiency of our index and deal with the spherical isotropy. It provides novel Multi-Resolution Analysis (MRA) for spherical images. Experiments on continuous synthetic spherical images indicate that our representation can recover the original image signals with higher accuracy than the ERP and CMP (Cubemap) representations at the same sampling rate. Besides, the resampling experiments on natural spherical images show that our resampling methods outperform the bilinear and bicubic interpolations in terms of subjective and objective quality. In particular, a gain of up to 2 dB in terms of S-PSNR is achieved. Experiments also show that our spherical image transform can capture more geometric features of spherical images than the traditional wavelet transform.
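
Two ingredients of the abstract, recursive subdivision of spherical triangles and a dyadic (two-bits-per-level) index that supports random access without precomputation, can be sketched as follows. The actual SMSIR element ordering and index layout may differ from this illustration.

```python
# Recursive spherical-triangle subdivision with a dyadic index (illustrative only).
import numpy as np

def midpoint(a, b):
    """Geodesic midpoint of two unit vectors (projected back to the sphere)."""
    m = a + b
    return m / np.linalg.norm(m)

def subdivide(tri):
    """Split a spherical triangle (3 unit vectors) into 4 children.

    Children 0..2 keep one original vertex each; child 3 is the central triangle.
    Appending the child number (2 bits) per level yields a dyadic index, so an
    element at level L is addressed by its root face plus 2*L bits and can be
    reached without precomputing or storing the hierarchy.
    """
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def element(root_tri, dyadic_index, level):
    """Fetch the triangle addressed by a dyadic index via random access."""
    tri = root_tri
    for l in reversed(range(level)):
        child = (dyadic_index >> (2 * l)) & 0b11    # 2 bits per level, MSB first
        tri = subdivide(tri)[child]
    return tri

# Example: one octant of the sphere as the root face, element 0b1001 at level 2
root = (np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0]))
print(element(root, 0b1001, 2))
```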

Keywords:

Extraterrestrial measurements; Feature extraction; Geometry; Image representation; image resampling; Indexing; indexing scheme; Interpolation; Spherical images; spherical measure; spherical RBF; Surface treatment

Cite:

GB/T 7714 Wu, Gang , Shi, Yunhui , Sun, Xiaoyan et al. SMSIR: Spherical Measure Based Spherical Image Representation [J]. | IEEE TRANSACTIONS ON IMAGE PROCESSING , 2021 , 30 : 6377-6391 .
MLA Wu, Gang et al. "SMSIR: Spherical Measure Based Spherical Image Representation" . | IEEE TRANSACTIONS ON IMAGE PROCESSING 30 (2021) : 6377-6391 .
APA Wu, Gang , Shi, Yunhui , Sun, Xiaoyan , Wang, Jin , Yin, Baocai . SMSIR: Spherical Measure Based Spherical Image Representation . | IEEE TRANSACTIONS ON IMAGE PROCESSING , 2021 , 30 , 6377-6391 .
Sparse Coding of Intra Prediction Residuals for Screen Content Coding CPCI-S
Conference paper | 2021 | IEEE International Conference on Consumer Electronics (ICCE)
Citations in the Web of Science Core Collection: 1
Abstract:

High Efficiency Video Coding - Screen Content Coding (HEVC-SCC) is an extension to HEVC which adds sophisticated compression methods for computer generated content. A video frame is usually split into blocks that are predicted and subtracted from the original, which leaves a residual. These blocks are transformed by integer discrete sine transform (IntDST) or integer discrete cosine transform (IntDCT), quantized, and entropy coded into a bitstream. In contrast to camera captured content, screen content contains a lot of similar and repeated blocks. The HEVC-SCC tools utilize these similarities in various ways. After these tools are executed, the remaining signals are handled by IntDST/IntDCT which is designed to code camera-captured content. Fortunately, in sparse coding, the dictionary learning process which uses these residuals adapts much better and the outcome is significantly sparser than for camera captured content. This paper proposes a sparse coding scheme which takes advantage of the similar and repeated intra prediction residuals and targets low to mid frequency/energy blocks with a low sparsity setup. We also applied an approach which splits the common test conditions (CTC) sequences into categories for training and testing purposes. It is integrated as an alternate transform where the selection between traditional transform and our proposed method is based on a rate-distortion optimization (RDO) decision. It is integrated in HEVC-SCC test model (HM) HM-16.18+SCM-8.7. Experimental results show that the proposed method achieves a Bjontegaard rate difference (BD-rate) of up to 4.6% in an extreme computationally demanding setup for the "all intra" configuration compared with HM-16.18+SCM-8.7.
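
As a small illustration of the sparse-coding step described above (outside any HM/SCM integration), the NumPy sketch below codes a flattened 4x4 residual block with orthogonal matching pursuit against a dictionary. Here the dictionary is random and unit-normalized, whereas the paper learns it (K-SVD) from intra prediction residuals of CTC sequences.

```python
# Sparse coding of a residual block with orthogonal matching pursuit (OMP).
import numpy as np

def omp(D, x, sparsity):
    """Approximate x with at most `sparsity` atoms of dictionary D."""
    residual, support = x.copy(), []
    coef = np.zeros(D.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(D.T @ residual))))   # best-matching atom
        sub = D[:, support]
        coef_s, *_ = np.linalg.lstsq(sub, x, rcond=None)          # re-fit on the support
        residual = x - sub @ coef_s
    coef[support] = coef_s
    return coef

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 64))                 # 16-dim signals, 64 atoms
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
block = rng.standard_normal(16)                   # a flattened 4x4 residual block
code = omp(D, block, sparsity=3)                  # low-sparsity setup
print(np.count_nonzero(code), np.linalg.norm(block - D @ code))
```

In a codec-style setup, the encoder would compare the rate-distortion cost of this sparse code against the conventional transform and signal which path was chosen.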

Keywords:

HEVC; intra prediction; KSVD; orthogonal matching pursuit; residual coding; screen content coding; sparse coding; sparse representation; video coding

Cite:

GB/T 7714 Schimpf, Michael G. , Ling, Nam , Shi, Yunhui et al. Sparse Coding of Intra Prediction Residuals for Screen Content Coding [C] . 2021 .
MLA Schimpf, Michael G. et al. "Sparse Coding of Intra Prediction Residuals for Screen Content Coding" . (2021) .
APA Schimpf, Michael G. , Ling, Nam , Shi, Yunhui , Liu, Ying . Sparse Coding of Intra Prediction Residuals for Screen Content Coding . (2021) .
MS-Net: A lightweight separable ConvNet for multi-dimensional image processing SCIE
Journal article | 2021, 80(17), 25673-25688 | MULTIMEDIA TOOLS AND APPLICATIONS
Citations in the Web of Science Core Collection: 2
Abstract:

As the core technology of deep learning, convolutional neural networks have been widely applied in a variety of computer vision tasks and have achieved state-of-the-art performance. However, it's difficult and inefficient for them to deal with high dimensional image signals due to the dramatic increase of training parameters. In this paper, we present a lightweight and efficient MS-Net for the multi-dimensional(MD) image processing, which provides a promising way to handle MD images, especially for devices with limited computational capacity. It takes advantage of a series of one dimensional convolution kernels and introduces a separable structure in the ConvNet throughout the learning process to handle MD image signals. Meanwhile, multiple group convolutions with kernel size 1 x 1 are used to extract channel information. Then the information of each dimension and channel is fused by a fusion module to extract the complete image features. Thus the proposed MS-Net significantly reduces the training complexity, parameters and memory cost. The proposed MS-Net is evaluated on both 2D and 3D benchmarks CIFAR-10, CIFAR-100 and KTH. Extensive experimental results show that the MS-Net achieves competitive performance with greatly reduced computational and memory cost compared with the state-of-the-art ConvNet models.
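
The separable idea, one 1-D convolution per spatial dimension plus 1x1 group convolutions for the channel dimension, fused afterwards, can be sketched for the 2-D case as below. The block is a hypothetical simplification, not the MS-Net architecture; depths, widths and the 3-D variant are omitted.

```python
# Simplified separable block: per-dimension 1-D convolutions + 1x1 group conv, then fusion.
import torch
import torch.nn as nn

class SeparableBlock2d(nn.Module):
    def __init__(self, channels, groups=4):
        super().__init__()
        self.conv_h = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0))  # along height
        self.conv_w = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1))  # along width
        self.conv_c = nn.Conv2d(channels, channels, 1, groups=groups)        # channel mixing
        self.fuse = nn.Conv2d(3 * channels, channels, 1)                     # fusion module
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        branches = [self.conv_h(x), self.conv_w(x), self.conv_c(x)]
        return self.act(self.fuse(torch.cat(branches, dim=1)))

x = torch.randn(2, 16, 32, 32)
print(SeparableBlock2d(16)(x).shape)   # torch.Size([2, 16, 32, 32])
```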

Keywords:

Feature extraction and representation; Matricization; Multi-dimensional image processing; Separable convolution neural network

Cite:

GB/T 7714 Hou, Zhenning , Shi, Yunhui , Wang, Jin et al. MS-Net: A lightweight separable ConvNet for multi-dimensional image processing [J]. | MULTIMEDIA TOOLS AND APPLICATIONS , 2021 , 80 (17) : 25673-25688 .
MLA Hou, Zhenning et al. "MS-Net: A lightweight separable ConvNet for multi-dimensional image processing" . | MULTIMEDIA TOOLS AND APPLICATIONS 80 . 17 (2021) : 25673-25688 .
APA Hou, Zhenning , Shi, Yunhui , Wang, Jin , Cui, Yingxuan , Yin, Baocai . MS-Net: A lightweight separable ConvNet for multi-dimensional image processing . | MULTIMEDIA TOOLS AND APPLICATIONS , 2021 , 80 (17) , 25673-25688 .
基于生成对抗网络的数据驱动人群运动仿真方法 (Data-driven crowd motion simulation method based on generative adversarial networks) incoPat
Patent | 2020-04-01 | CN202010252751.8
Abstract:

The invention proposes a data-driven crowd motion simulation method based on generative adversarial networks, involving crowd simulation and deep learning. Starting from a pedestrian trajectory dataset extracted from pedestrian motion videos, the method generates, in the simulation scene, virtual pedestrians that do not exist in the dataset, and plans complete paths for them that are close to real pedestrian behaviour, according to given conditions such as the initial position and the destination as well as other factors in the scene. The method trains the simulation model with a generative adversarial network (GAN) built on long short-term memory (LSTM) networks. Compared with traditional rule-based crowd simulation methods, the trajectories of the simulated virtual pedestrians are more realistic and closer to real pedestrian motion. The invention accomplishes trajectory planning for virtual pedestrians and effectively improves the realism of crowd motion simulation.
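
A hedged sketch of the model family the patent applies, an LSTM-based generator and discriminator over pedestrian trajectories; conditioning on destination and scene factors, the training loop, and the patent's actual network sizes are omitted and the names used here are hypothetical.

```python
# Minimal LSTM-based GAN components for trajectory generation (illustrative only).
import torch
import torch.nn as nn

class TrajectoryGenerator(nn.Module):
    """Encodes an observed trajectory, adds noise, and decodes future steps."""
    def __init__(self, hidden=64, noise=16, pred_len=12):
        super().__init__()
        self.pred_len = pred_len
        self.encoder = nn.LSTM(2, hidden, batch_first=True)
        self.decoder = nn.LSTM(2, hidden, batch_first=True)
        self.mix = nn.Linear(hidden + noise, hidden)
        self.out = nn.Linear(hidden, 2)

    def forward(self, obs):                          # obs: (B, T_obs, 2) positions
        _, (h, c) = self.encoder(obs)
        z = torch.randn(1, obs.size(0), self.mix.in_features - h.size(-1))
        h = torch.tanh(self.mix(torch.cat([h, z], dim=-1)))
        pos, preds = obs[:, -1:], []
        for _ in range(self.pred_len):               # autoregressive decoding
            out, (h, c) = self.decoder(pos, (h, c))
            pos = self.out(out)                      # next (x, y)
            preds.append(pos)
        return torch.cat(preds, dim=1)               # (B, pred_len, 2)

class TrajectoryDiscriminator(nn.Module):
    """Scores a full trajectory as real or generated."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(2, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, traj):                         # traj: (B, T, 2)
        _, (h, _) = self.lstm(traj)
        return self.score(h[-1])                     # real/fake logit

obs = torch.randn(4, 8, 2)                           # 4 pedestrians, 8 observed steps
fake = TrajectoryGenerator()(obs)
logit = TrajectoryDiscriminator()(torch.cat([obs, fake], dim=1))
print(fake.shape, logit.shape)
```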

Cite:

GB/T 7714 施云惠 , 梁宇辰 , 张勇 et al. 基于生成对抗网络的数据驱动人群运动仿真方法 : CN202010252751.8[P]. | 2020-04-01 .
MLA 施云惠 et al. "基于生成对抗网络的数据驱动人群运动仿真方法" : CN202010252751.8. | 2020-04-01 .
APA 施云惠 , 梁宇辰 , 张勇 , 胡永利 , 尹宝才 . 基于生成对抗网络的数据驱动人群运动仿真方法 : CN202010252751.8. | 2020-04-01 .
一种球面图像索引方法及装置 (A spherical image indexing method and apparatus) incoPat
Patent | 2020-11-16 | CN202011279655.9
Abstract:

Disclosed are a spherical image indexing method and apparatus. Like the two-dimensional index of a planar image, the index directly reflects the neighborhood relations of spherical triangle elements on the sphere, improves access efficiency, and makes up- and down-sampling convenient; it preserves the neighborhood relations of the triangle elements on the original sphere and allows them to be indexed as efficiently as planar image pixels, which greatly lowers the barrier to developing spherical image applications; the potential applications are broad, covering the development of almost all spherical image processing methods and tools. In the method, each spherical triangle T0i at subdivision level 0 is projected onto a right triangle in the plane, and the eight largest spherical triangles are unfolded about a point and projected onto a square in the plane; the center of this arrangement corresponds to the point with spherical coordinates (0°, 0°), and the four corners correspond to spherical coordinates (±180°, ±90°); the two legs of each projected right triangle are aligned with the Cartesian coordinate axes, so the spherical triangle elements can be indexed directly by integer Cartesian coordinates.
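
One part of this scheme, addressing the sub-triangles of a single projected face by integer coordinates, can be illustrated as below. How the eight faces are arranged in the square and how indices map back to the sphere are not reproduced, and the helper names are hypothetical.

```python
# Integer indexing of the sub-triangles of one projected right-triangle face.
import numpy as np

def triangle_from_index(i, j, upward, level):
    """Planar vertices of the sub-triangle at integer coordinates (i, j).

    The projected face is the right triangle with vertices (0,0), (N,0), (0,N),
    N = 2**level, cut into a grid of unit cells; cell (i, j) contains an
    "upward" triangle and, when it is not on the hypotenuse, a "downward" one.
    """
    n = 2 ** level
    assert 0 <= i and 0 <= j and i + j < n, "cell outside the projected face"
    if upward:
        return np.array([(i, j), (i + 1, j), (i, j + 1)], dtype=float)
    assert i + j < n - 1, "no downward triangle along the hypotenuse"
    return np.array([(i + 1, j), (i + 1, j + 1), (i, j + 1)], dtype=float)

# Example: the four elements of a level-1 face (N = 2), addressed by (i, j, up)
for i, j, up in [(0, 0, True), (0, 0, False), (1, 0, True), (0, 1, True)]:
    print((i, j, up), triangle_from_index(i, j, up, level=1).tolist())
```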

Cite:

GB/T 7714 施云惠 , 吴刚 , 尹宝才 et al. 一种球面图像索引方法及装置 : CN202011279655.9[P]. | 2020-11-16 .
MLA 施云惠 et al. "一种球面图像索引方法及装置" : CN202011279655.9. | 2020-11-16 .
APA 施云惠 , 吴刚 , 尹宝才 , 王瑾 . 一种球面图像索引方法及装置 : CN202011279655.9. | 2020-11-16 .
Data-Driven Redundant Transform Based on Parseval Frames SCIE
Journal article | 2020, 10(8) | APPLIED SCIENCES-BASEL
Citations in the Web of Science Core Collection: 4
Abstract:

The sparsity of images in a certain transform domain or dictionary has been exploited in many image processing applications. Both classic transforms and sparsifying transforms reconstruct images by a linear combination of a small basis of the transform. Both kinds of transform are non-redundant. However, natural images admit complicated textures and structures, which can hardly be sparsely represented by square transforms. To solve this issue, we propose a data-driven redundant transform based on Parseval frames (DRTPF) by applying the frame and its dual frame as the backward and forward transform operators, respectively. Benefitting from this pairwise use of frames, the proposed model combines a synthesis sparse system and an analysis sparse system. By enforcing the frame pair to be Parseval frames, the singular values and condition number of the learnt redundant frames, which are efficient values for measuring the quality of the learnt sparsifying transforms, are forced to achieve an optimal state. We formulate a transform pair (i.e., frame pair) learning model and a two-phase iterative algorithm, analyze the robustness of the proposed DRTPF and the convergence of the corresponding algorithm, and demonstrate the effectiveness of our proposed DRTPF by analyzing its robustness against noise and sparsification errors. Extensive experimental results on image denoising show that our proposed model achieves superior denoising performance, in terms of subjective and objective quality, compared to traditional sparse models.
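
The Parseval-frame property the paper enforces can be checked numerically in a few lines: the sketch below tightens an arbitrary redundant frame so that F F^T = I and verifies perfect reconstruction. It only demonstrates the algebraic property, not the data-driven frame-pair learning (DRTPF) itself.

```python
# Tighten a redundant frame to a Parseval frame and verify reconstruction.
import numpy as np

rng = np.random.default_rng(0)
d, n = 16, 48                              # signal dimension, number of atoms
F = rng.standard_normal((d, n))            # an arbitrary redundant frame

# Nearest Parseval (tight) frame: F_p = (F F^T)^(-1/2) F = U V^T from the SVD
u, s, vt = np.linalg.svd(F, full_matrices=False)
F_p = u @ vt                               # singular values forced to 1

x = rng.standard_normal(d)                 # test signal
coeffs = F_p.T @ x                         # analysis (forward transform)
x_rec = F_p @ coeffs                       # synthesis (backward transform)

print(np.allclose(F_p @ F_p.T, np.eye(d))) # True: Parseval condition holds
print(np.linalg.norm(x - x_rec))           # ~0: perfect reconstruction
```

Forcing the singular values to one also makes the condition number of the frame equal to one, which is the "optimal state" the abstract refers to.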

Keywords:

Parseval frame; sparse representation; transform

Cite:

GB/T 7714 Zhang, Min , Shi, Yunhui , Qi, Na et al. Data-Driven Redundant Transform Based on Parseval Frames [J]. | APPLIED SCIENCES-BASEL , 2020 , 10 (8) .
MLA Zhang, Min et al. "Data-Driven Redundant Transform Based on Parseval Frames" . | APPLIED SCIENCES-BASEL 10 . 8 (2020) .
APA Zhang, Min , Shi, Yunhui , Qi, Na , Yin, Baocai . Data-Driven Redundant Transform Based on Parseval Frames . | APPLIED SCIENCES-BASEL , 2020 , 10 (8) .