



Joint group kernel sparse coding based multimodal material perception and identification method
HE Kongfei, XIONG Pengwen, TONG Xiaobao
School of Information Engineering, Nanchang University, Nanchang 330031, China
Abstract: Machine-vision methods often struggle to distinguish materials with highly similar appearances, so it is necessary to overcome this limitation of the visual modality by fusing information from other modalities. To address this problem, a set of similarity evaluation measures appropriate to the properties of each modality is first introduced. A joint group kernel sparse coding method is then proposed to fuse the linearly inseparable multimodal data, and a solution procedure for this model is described in detail. Finally, multi-level comparison experiments are conducted on a public dataset containing 184 materials. The results show that the recognition accuracies at the coarse, medium, and fine classification levels are 90.8%, 76.6%, and 73.4%, respectively; compared with vision-only recognition, the multimodal fusion approach significantly improves recognition performance.
Keywords: pattern recognition and intelligent system; sparse joint group lasso; multi-modality fusion; material classification
2020, 46(12): 129-134    Received: 2020-11-17; Revised: 2020-12-02
Funding: National Natural Science Foundation of China (61903175, 61663027); Program for Academic and Technical Leaders of Major Disciplines of Jiangxi Province (20204BCJ23006); Jiangxi Province Graduate Innovation Special Fund (YC2019-S011, YC2020-S101)
About the author: HE Kongfei (1994-), male, from Tongling, Anhui Province; master's degree candidate; research interest: intelligent robotics
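
The abstract above describes the approach only at a high level. Below is a minimal, self-contained sketch of one way a joint group kernel sparse coding step of this kind can be set up in Python; it is not the authors' implementation. The RBF similarity measure, the ISTA-style proximal-gradient solver, the class-wise coefficient groups, and all names and hyper-parameters (rbf_kernel, joint_group_sparse_code, lam, n_iter) are illustrative assumptions, and the paper's actual similarity measures, objective, and optimization details are given only in the full text.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # RBF similarity between the rows of A (n_a x d) and the rows of B (n_b x d);
    # one possible per-modality similarity measure, assumed here for illustration.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def joint_group_sparse_code(K_list, k_list, groups, lam=0.1, n_iter=300):
    # Solve  min_x  sum_m ||k_m - K_m x||^2 + lam * sum_g ||x[g]||_2
    # with one sparse code x shared by all modalities m and one coefficient
    # group g per training class, using plain proximal-gradient (ISTA) steps.
    n = K_list[0].shape[1]
    x = np.zeros(n)
    # step size from a Lipschitz bound on the gradient of the smooth term
    L = 2.0 * sum(np.linalg.norm(K, 2) ** 2 for K in K_list) + 1e-12
    for _ in range(n_iter):
        grad = np.zeros(n)
        for K, k in zip(K_list, k_list):
            grad += 2.0 * K.T @ (K @ x - k)
        z = x - grad / L
        # group soft-thresholding: each class block is kept or shrunk as a whole
        for g in groups:
            norm_g = np.linalg.norm(z[g])
            z[g] = 0.0 if norm_g == 0.0 else max(0.0, 1.0 - lam / (L * norm_g)) * z[g]
        x = z
    return x

def classify(x, K_list, k_list, groups):
    # Assign the class whose coefficient group reconstructs the test sample's
    # kernel similarity vectors with the smallest joint residual.
    errors = []
    for g in groups:
        xg = np.zeros_like(x)
        xg[g] = x[g]
        errors.append(sum(np.linalg.norm(k - K @ xg) ** 2 for K, k in zip(K_list, k_list)))
    return int(np.argmin(errors))

# Usage sketch (hypothetical names): X_vis, X_tac are training feature matrices for a
# visual and a tactile modality, y_vis, y_tac the test sample's features, and `groups`
# lists the training-sample indices belonging to each material class.
#   K_list = [rbf_kernel(X_vis, X_vis), rbf_kernel(X_tac, X_tac)]
#   k_list = [rbf_kernel(y_vis[None], X_vis)[0], rbf_kernel(y_tac[None], X_tac)[0]]
#   x = joint_group_sparse_code(K_list, k_list, groups)
#   predicted = classify(x, K_list, k_list, groups)

The point this sketch mirrors is that a single code vector x is shared by all modalities while the group penalty shrinks coefficients class by class, so every modality is pushed to explain the test sample using training samples from the same few material classes; classification then selects the class whose coefficient group reconstructs the multimodal observation with the smallest residual.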