Hong-Ming Chiu
Ph.D. Student @ UIUC
hmchiu2 [at] illinois.edu


Hong-Ming Chiu is an Electrical and Computer Engineering Ph.D. student at the University of Illinois Urbana-Champaign (UIUC), working with Prof. Richard Y. Zhang. He received his Bachelor's degree in Electronics Engineering from National Chiao Tung University (NCTU), Hsinchu, Taiwan, where he worked with Prof. Carrson C. Fung and Prof. Tian-Sheuan Chang during his undergraduate studies. He is a member of IEEE-Eta Kappa Nu, Mu Sigma Chapter, and a former member of the IEEE Student Branch at NCTU. His research interests include machine learning and optimization.


Aug. 2021 - Present
Ph.D. student @ UIUC
Coordinated Science Lab
Optimization, Machine Learning
July 2020 - Feb. 2021
Research Assistant @ NCTU
Artificial Intelligence and Multimedia Lab
Knowledge-Graph, Recommender System
Spring 2020
Exchange Student @ UIUC
Indep. Study @ Coordinated Science Lab
Model change detection system
Summer 2019
Summer Research @ USC
Signal Transformation, Analysis and Compression Group
Graph Learning, Variogram, Kriging
Mar. 2019 - July 2020
Undergraduate Researcher @ NCTU
Communication Electronics and Signal Processing Lab
Graph Learning, Machine Learning, Communications
Jun. 2018 - Jan. 2019
Undergraduate Researcher @ NCTU
VLSI Signal Processing Lab
Model Pruning, Machine Learning, Data Science
Feb. 1997
Born


Tight Certification of Adversarially Trained Neural Networks via Nonconvex Low-Rank Semidefinite Relaxations

To certify the robustness of neural networks to adversarial perturbations, most state-of-the-art techniques rely on a linear programming (LP) relaxation of the ReLU activation. Recent results suggest that the LP relaxation faces an inherent "convex relaxation barrier". In this paper, we propose a nonconvex relaxation of the ReLU activation, based on a low-rank restriction of a semidefinite programming (SDP) relaxation. We show that the nonconvex relaxation has a complexity comparable to that of the LP relaxation, and can almost completely overcome the "convex relaxation barrier" faced by the LP relaxation.
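For readers unfamiliar with the two relaxations, the constraint sets can be sketched as follows. This is the standard formulation from the certification literature, not taken verbatim from the paper: a ReLU $z = \max(x, 0)$ with preactivation bounds $l \le x \le u$ is equivalent to the linear constraints $z \ge x$, $z \ge 0$ plus one quadratic complementarity constraint $z(z - x) = 0$.

```latex
% LP relaxation: drop the quadratic constraint and close the set with
% the "triangle" upper bound. SDP relaxation: keep the quadratic
% constraint in lifted form, where X, Y, Z stand for the products
% x^2, xz, z^2 of the original variables.
\begin{align*}
  \text{LP:}\quad  & z \ge x, \quad z \ge 0, \quad
                     z \le \frac{u\,(x - l)}{u - l}, \\
  \text{SDP:}\quad & z \ge x, \quad z \ge 0, \quad Z = Y, \quad
    \begin{bmatrix} 1 & x & z \\ x & X & Y \\ z & Y & Z \end{bmatrix}
    \succeq 0 .
\end{align*}
```

The SDP set is strictly tighter because the lifted matrix couples $x$ and $z$ through their products; the low-rank restriction studied in the paper replaces the full lifted matrix with a rank-constrained factorization.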

[ PDF ]
Accelerating SGD for Highly Ill-Conditioned Huge-Scale Online Matrix Completion

The matrix completion problem seeks to recover a $d \times d$ ground truth matrix of low rank $r \ll d$ from observations of its individual elements. Stochastic gradient descent (SGD) is one of the few algorithms capable of solving matrix completion on a huge scale. Unfortunately, SGD experiences a dramatic slow-down when the underlying ground truth is ill-conditioned; it requires at least $O(\kappa \log(1/\epsilon))$ iterations to get $\epsilon$-close to a ground truth matrix with condition number $\kappa$. In this paper, we propose a preconditioned version of SGD that is agnostic to $\kappa$. For a symmetric ground truth and the Root Mean Square Error (RMSE) loss, we prove that the preconditioned SGD converges in $O(\log(1/\epsilon))$ iterations.
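As a rough illustration of the preconditioning idea (a sketch under illustrative assumptions, not the paper's exact algorithm), the following numpy snippet runs entrywise SGD on a symmetric factorization $M \approx XX^T$, scaling each stochastic gradient by the small $r \times r$ matrix $(X^TX)^{-1}$; the ground truth, step size, and iteration count here are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 30, 2

# Ill-conditioned symmetric ground truth M = U diag(s) U^T, kappa = 100.
U = np.linalg.qr(rng.standard_normal((d, r)))[0]
M = U @ np.diag([10.0, 0.1]) @ U.T

X = 0.1 * rng.standard_normal((d, r))        # factor estimate, M ~ X X^T
err0 = np.linalg.norm(X @ X.T - M) / np.linalg.norm(M)

eta = 0.05
for _ in range(5000):
    i, j = rng.integers(d), rng.integers(d)  # one observed entry per step
    resid = X[i] @ X[j] - M[i, j]
    P = np.linalg.inv(X.T @ X + 1e-8 * np.eye(r))  # r x r preconditioner
    gi, gj = resid * (X[j] @ P), resid * (X[i] @ P)  # scaled row gradients
    X[i] -= eta * gi
    X[j] -= eta * gj

err = np.linalg.norm(X @ X.T - M) / np.linalg.norm(M)
```

The point of the scaling is that the step taken along each factor direction is normalized by the factor's energy, so progress no longer depends on the spread between the large and small singular values.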

[ PDF ] [ Code ]
NeurIPS 2022 in New Orleans, LA
Graph Learning and Augmentation Based Interpolation of Signal Strength for Location-Aware Communications

A graph learning and augmentation (GLA) technique is proposed herein to solve the received signal power interpolation problem, which is important for preemptive resource allocation in location-aware communications. A graph parameterization results in the proposed GLA interpolator having superior mean-squared error performance and lower computational complexity than the traditional Gaussian process method. Simulation results and an analytical complexity analysis demonstrate the efficacy of the GLA interpolator.
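For context, the classical baseline mentioned above (variogram-based Gaussian process interpolation) can be sketched as ordinary kriging with a given semivariogram. The exponential semivariogram and toy coordinates below are illustrative assumptions, not the paper's setup, and the GLA method itself is graph-based rather than kriging-based.

```python
import numpy as np

def ordinary_krige(coords, values, target, gamma):
    """Ordinary kriging: interpolate `values` at `target` given semivariogram gamma."""
    n = len(coords)
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(h)               # pairwise semivariances
    A[n, n] = 0.0                      # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(coords - target, axis=-1))
    w = np.linalg.solve(A, b)[:n]      # kriging weights (sum to one)
    return w @ values

gamma = lambda h: 1.0 - np.exp(-h / 2.0)   # exponential semivariogram
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
z_mid = ordinary_krige(coords, vals, np.array([0.5, 0.5]), gamma)
```

Solving the dense $(n+1) \times (n+1)$ system is what makes the classical method expensive at scale, which is the complexity gap the GLA interpolator targets.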

[ PDF ] [ Code ]
EUSIPCO 2020 in Amsterdam, NL
Run Time Adaptive Network Slimming for Mobile Environments

Modern convolutional neural network (CNN) models offer significant performance improvement over previous methods, but suffer from high computational complexity and are not able to adapt to different run-time needs. To solve this problem, this paper proposes an inference-stage pruning method that offers multiple operating points in a single model, providing a computational power-accuracy trade-off at run time. The method applies to shallow CNN models as well as very deep networks such as ResNet-101. Experimental results show that up to 50% of the FLOPs can be saved by trading away less than 10% of the top-1 accuracy.
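A minimal numpy sketch of the general idea: prune convolution filters by magnitude at inference time to hit a chosen operating point. The ranking criterion and function names here are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def slim_channels(weights, keep_ratio):
    """Zero out the lowest-L1-norm output channels of a conv weight tensor.

    weights: (out_channels, in_channels, kH, kW). Returns a pruned copy, so
    calling with different keep_ratios yields different run-time operating
    points from the same trained model.
    """
    out_ch = weights.shape[0]
    keep = max(1, int(round(keep_ratio * out_ch)))
    norms = np.abs(weights).reshape(out_ch, -1).sum(axis=1)  # per-filter L1
    pruned = weights.copy()
    pruned[np.argsort(norms)[: out_ch - keep]] = 0.0  # drop weakest filters
    return pruned

rng = np.random.default_rng(1)
w = rng.standard_normal((64, 32, 3, 3))   # a toy conv layer
w50 = slim_channels(w, 0.5)               # ~50% FLOPs operating point
```

Because pruning is decided per layer at inference time, no retraining or second model is needed to move between accuracy/compute operating points.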

[ PDF ] [ Code ]
ISCAS 2019 in Sapporo, Japan


Building Oscilloscope on FPGA

We built a simple oscilloscope using a Nexys 4 DDR board and a custom PCB. First, the PCB transforms the input voltage signal into a range acceptable for the FPGA board's input, and also generates the knobs' control signals. The FPGA board then takes the processed signal and control signals to display waveforms, change the voltage scale, adjust the sweep time, etc. This work is the final project of the Digital Laboratory class at NCTU.

[ Code ] [ Video ]
Contributors: Hong-Ming Chiu, Huan-Jung Lee
Graph Learning: Causal Graph Process (CGP) & Sparse Vector Autoregressive model (SVAR)

This work contains the implementation and comparison of two graph learning algorithms: the Causal Graph Process (CGP) and the Sparse Vector Autoregressive (SVAR) model. These two methods learn a graph representation from a large number of unstructured time series, and then use it to predict future data.
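A minimal numpy sketch of the SVAR side of the project, assuming a first-order model $x_t \approx A x_{t-1}$ fitted with an L1 penalty via ISTA (proximal gradient); the simulated data, penalty weight, and iteration count are illustrative assumptions. The nonzero pattern of the learned $A$ plays the role of the graph's edge set.

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 5, 400

# Simulate a first-order VAR, x_t = A_true x_{t-1} + noise, with sparse A_true.
A_true = 0.5 * np.eye(n)
A_true[0, 3] = 0.4                     # a single cross-series edge
X = np.zeros((T, n))
for t in range(1, T):
    X[t] = A_true @ X[t - 1] + 0.1 * rng.standard_normal(n)

# Fit x_t ~ A x_{t-1} with an L1 penalty via ISTA (proximal gradient).
Y, Z = X[1:], X[:-1]
soft = lambda M, t: np.sign(M) * np.maximum(np.abs(M) - t, 0.0)
L = np.linalg.norm(Z, 2) ** 2          # Lipschitz constant of the gradient
lam = 0.2
A = np.zeros((n, n))
for _ in range(300):
    grad = (A @ Z.T - Y.T) @ Z         # gradient of 0.5 * ||Y - Z A^T||_F^2
    A = soft(A - grad / L, lam / L)    # gradient step + soft-thresholding
```

The soft-thresholding step is what drives most entries of $A$ to exactly zero, recovering a sparse dependency graph among the series.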

[ Code ] [ Slides ]
Contributors: Hong-Ming Chiu
Huffman Coding Hardware

Implemented an 8-bit Huffman coding algorithm in SystemVerilog. The system takes an image of 100 pixels as input, where each pixel value is an integer between 1 and 6 (inclusive). It then outputs the Huffman code for each pixel value based on the empirical source distribution (more frequent pixel values receive shorter codewords). This is the final project of the Digital Circuit and Systems class at NCTU.
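The same coding scheme is easy to prototype in software before committing it to hardware. Below is a Python sketch that builds Huffman codewords from pixel frequencies; the 100-pixel frequency profile is made up for illustration, not taken from the project's test image.

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code from symbol frequencies (more frequent -> shorter)."""
    freq = Counter(symbols)
    if len(freq) == 1:                 # degenerate case: one symbol, one bit
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie_breaker, {symbol: partial codeword}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:               # repeatedly merge the two rarest trees
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

pixels = [1] * 40 + [2] * 25 + [3] * 15 + [4] * 10 + [5] * 6 + [6] * 4
codes = huffman_codes(pixels)          # e.g. the value 1 gets the shortest code
```

The resulting code is prefix-free, so the hardware decoder can consume the bitstream one bit at a time without delimiters.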

[ Code ]
Contributors: Hong-Ming Chiu, Huan-Jung Lee