
Schedule of Academic Talks on Saturday, November 10, 2018

Posted: November 7, 2018, 16:16

All interested faculty and students are welcome to attend.

9:00 - 9:15 / Conference Room, 5th Floor, Comprehensive Building

Opening remarks (Prof. Liu Risheng)

9:15 - 9:55 / Conference Room, 5th Floor, Comprehensive Building

Chair: Liu Risheng

Title: Some new optimization theory for convergence analysis of first-order algorithms

Abstract: We present some new theoretical results on optimality conditions, constraint qualifications, and error bounds in optimization, and show how these results can be used to analyze the convergence of several popular first-order algorithms that have found wide application in data science. Fundamental models such as the LASSO and group LASSO are studied, and it is shown that linear convergence can be obtained when algorithms such as the proximal gradient method, the proximal alternating linearized minimization algorithm, and the randomized block coordinate proximal gradient method are applied to these models. We provide a novel analytic framework, based on variational analysis techniques (e.g., error bounds, calmness, metric subregularity), for the convergence analysis of first-order algorithms. Using this framework, we significantly improve several convergence rate results in the literature and obtain some new results.
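For readers unfamiliar with the algorithms the abstract names, here is a minimal Python sketch of the proximal gradient method (ISTA) applied to the LASSO. The problem sizes, step size choice, and regularization weight below are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_lasso(A, b, lam, n_iter=500):
    # Proximal gradient (ISTA) for min 0.5||Ax - b||^2 + lam * ||x||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Tiny sparse-recovery example with synthetic data
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = prox_grad_lasso(A, b, lam=0.1)
```

The linear convergence results mentioned in the abstract concern precisely this kind of iteration on LASSO-type models.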

About the speaker: Zhang Jin, born March 1986, is a Research Assistant Professor at Hong Kong Baptist University. He received a B.A. from the Faculty of Humanities and Social Sciences, Dalian University of Technology, in 2007; an M.Sc. from the School of Mathematical Sciences, Dalian University of Technology, in 2010; and a Ph.D. in applied mathematics from the Department of Mathematics and Statistics, University of Victoria, Canada, in December 2014. He joined the Department of Mathematics at Hong Kong Baptist University in April 2015. His research focuses on optimization theory, in particular optimality conditions and constraint qualifications. He has published more than ten SCI-indexed papers, including in leading optimization journals such as Mathematical Programming, SIAM Journal on Optimization, and SIAM Journal on Numerical Analysis.

 

9:55 - 10:35 / Conference Room, 5th Floor, Comprehensive Building

Title: Implementing the ADMM for big datasets: a case study of LASSO

Abstract: The alternating direction method of multipliers (ADMM) has been used extensively in a wide variety of applications. When large datasets with high-dimensional variables are considered, the subproblems arising in the ADMM must be solved inexactly even though they may theoretically have closed-form solutions. This scenario immediately raises mathematical questions, such as how accurately these subproblems should be solved and whether convergence can still be guaranteed. Although the ADMM is well known, these questions deserve deeper investigation. In this work, we study the mathematics of implementing the ADMM for large-dataset scenarios. More specifically, we focus on the convex programming case in which the objective contains a quadratic function with extremely high-dimensional variables, so that a huge-scale system of linear equations must be solved at each iteration of the ADMM. It turns out that there is no need, and indeed it is impossible, to solve this linear system exactly; we propose an adjustable inexactness criterion for solving it automatically and inexactly. We further identify a safeguard number of internally nested iterations that suffices to ensure this inexactness criterion when the linear system is solved by a standard numerical linear algebra solver. The convergence, together with the worst-case convergence rate measured by the iteration complexity, is rigorously established for the ADMM with inexactly solved subproblems. Numerical experiments on least absolute shrinkage and selection operator (LASSO) datasets containing millions of variables are reported to show the efficiency of this inexact implementation of the ADMM.
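To make the setup concrete, the following Python sketch applies the ADMM to the LASSO, solving the x-subproblem's linear system inexactly with a few matrix-free conjugate gradient steps under a tolerance that tightens across outer iterations. The geometric tolerance schedule and all parameter values are illustrative assumptions standing in for the adjustable inexactness criterion of the talk, not the authors' actual criterion.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def cg_solve(matvec, rhs, x0, tol, max_iter=50):
    # Basic conjugate gradient for an SPD system, stopped once the
    # relative residual drops below `tol` (the inexact inner solve).
    x = x0.copy()
    r = rhs - matvec(x)
    p = r.copy()
    rs = r @ r
    b_norm = np.linalg.norm(rhs) + 1e-12
    for _ in range(max_iter):
        if np.sqrt(rs) / b_norm <= tol:
            break
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def admm_lasso_inexact(A, b, lam, rho=1.0, n_iter=200, tol0=1e-1, decay=0.9):
    # ADMM for min 0.5||Ax - b||^2 + lam||z||_1  s.t.  x = z.
    # The x-update solves (A^T A + rho I) x = A^T b + rho (z - u)
    # inexactly by CG, never forming A^T A explicitly.
    n = A.shape[1]
    Atb = A.T @ b
    matvec = lambda v: A.T @ (A @ v) + rho * v
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    tol = tol0
    for _ in range(n_iter):
        x = cg_solve(matvec, Atb + rho * (z - u), x, tol)
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
        tol *= decay                  # gradually tighten the inner solve
    return z
```

The matrix-free `matvec` is the point of the exercise: for millions of variables one can apply A and A^T but cannot afford to form or factor A^T A, which is why the inner system must be solved iteratively and inexactly.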

About the speaker: Wang Xiangfeng graduated from the Department of Mathematics, Nanjing University, in 2009, and received his Ph.D. from the same department in 2014 (advisor: Prof. He Bingsheng). During his doctoral studies, he was supported by the China Scholarship Council as a joint Ph.D. student at the University of Minnesota (advisor: Prof. Luo Zhi-Quan) and was a visiting student at Hong Kong Baptist University (advisor: Prof. Yuan Xiaoming). After graduation, he joined the School of Computer Science and Software Engineering at East China Normal University. His research focuses on the design and theoretical analysis of large-scale optimization algorithms and their applications in machine learning (machine-learning-driven non-convex optimization, optimal transport). He has published more than twenty papers in mathematical programming and computational mathematics journals such as Mathematical Programming (MP), SIAM Journal on Scientific Computing (SISC), and Mathematics of Operations Research (MOR); in machine learning and signal processing journals such as IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), IEEE Transactions on Signal Processing (TSP), and Neurocomputing; and at conferences including IJCAI, AAAI, ICMR, ICASSP, and INTERSPEECH.


10:35 - 11:15 / Conference Room, 5th Floor, Comprehensive Building

Title: Task Embedded Coordinate Update: A Realizable Framework for Multivariate Non-convex Optimization

Abstract: We propose a realizable framework named TECU, which embeds task-specific strategies into the update schemes of coordinate descent, for optimizing multivariate non-convex problems with coupled objective functions. On one hand, TECU improves algorithmic efficiency by embedding productive numerical algorithms for univariate subproblems with nice properties. On the other hand, it increases the likelihood of obtaining desired results by embedding advanced techniques for realistic tasks. Integrating both numerical algorithms and advanced techniques, TECU provides a unified framework for solving a class of non-convex problems. Although the embedded task-specific strategies introduce inaccuracies into the subproblem optimization, we provide a realizable criterion to control the errors while ensuring robust performance with rigorous theoretical analysis. By embedding the ADMM and a residual-type CNN, respectively, our experimental results verify both the efficiency and the effectiveness of embedding task-oriented strategies in coordinate descent for solving practical problems.
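To illustrate the idea of embedding different solvers into coordinate updates, here is a toy Python sketch in which one block is updated by a few embedded proximal-gradient steps and the other by an exact linear solve. The objective, the splitting, and all parameters are illustrative assumptions; this is only an analogue of the TECU idea, not the authors' framework (which embeds, e.g., the ADMM or a residual-type CNN).

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def coordinate_update(A, B, b, lam, mu, n_outer=50, n_inner=5):
    # Illustrative block-coordinate descent for
    #   min_{x,y} 0.5||Ax + By - b||^2 + lam||x||_1 + 0.5*mu*||y||^2,
    # where each block update embeds a different solver:
    #   x-block: a few inexact proximal-gradient (ISTA) steps,
    #   y-block: an exact closed-form least-squares solve.
    nx, ny = A.shape[1], B.shape[1]
    x = np.zeros(nx); y = np.zeros(ny)
    Lx = np.linalg.norm(A, 2) ** 2          # step size for the x-block
    H = B.T @ B + mu * np.eye(ny)           # y-subproblem normal matrix
    for _ in range(n_outer):
        for _ in range(n_inner):            # embedded inexact ISTA steps
            g = A.T @ (A @ x + B @ y - b)   # gradient in x of the smooth part
            x = soft_threshold(x - g / Lx, lam / Lx)
        y = np.linalg.solve(H, B.T @ (b - A @ x))  # embedded exact solve
    return x, y
```

Both embedded updates decrease the coupled objective, which is the property a realizable inexactness criterion needs to preserve when the embedded solver is something less transparent, such as a learned network.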

About the speaker: Wang Yiyang, born July 1989, is a postdoctoral researcher at Fudan University. Wang received a B.Sc. from the School of Mathematical Sciences, Dalian University of Technology, in 2012, and a Ph.D. in computational mathematics from the same school in September 2017, joining the Institute of Atmospheric Sciences at Fudan University as a postdoctoral researcher in December of that year. Wang's research focuses on applications of optimization methods in machine learning, with particular attention to multivariate non-convex optimization problems.


11:15 - 11:30 / Conference Room, 5th Floor, Comprehensive Building

Discussion


