Chin-Lun (Allen) Fu
I am Chin-Lun (Allen) Fu from Taiwan, an incoming MSCS student at UCLA. I received my B.S. degree from the Department of Electrical Engineering at National Taiwan University. My research focuses on parameter-efficient tuning of Large Language Models (LLMs) across domains, spanning Natural Language Processing (NLP), Speech, and Computer Vision (CV).
At National Taiwan University, I was fortunate to work with Prof. Hung-yi Lee on efficient tuning for NLP and Speech tasks, and with Prof. Yu-Chiang Frank Wang on domain generalization problems.
Email / CV / Google Scholar / Github / LinkedIn
Experience
Microsoft AI, Research Intern [2022.04 - 2022.11]
Intel, Hardware Verification Intern [2021.01 - 2021.08]
DeepQ, Research Intern [2020.07 - 2020.11]
Research (* indicates equal contribution)
AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks
Chin-Lun Fu*,
Zih-Ching Chen*,
Yun-Ru Lee,
Hung-yi Lee
Findings-NAACL, 2022
arXiv
/
code
In this work, we present AdapterBias. By adding token-dependent representation shifts to the pre-trained language model (PLM), AdapterBias achieves competitive results while using far fewer parameters than existing methods.
Exploring Efficient-tuning Methods in Self-supervised Speech Models
Zih-Ching Chen*,
Chin-Lun Fu*,
Chih-Ying Liu,
Shang-Wen (Daniel) Li,
Hung-yi Lee
SLT, 2022
arXiv
In this study, we explore efficient tuning methods for speech self-supervised learning. We show that performance parity can be achieved with over 90% parameter reduction, and we discuss the pros and cons of the various efficient tuning techniques. This is the first comprehensive investigation of various adapter types across speech tasks.
Learning Facial Liveness Representation for Domain Generalized Face Anti-spoofing
Zih-Ching Chen*,
Lin-Hsi Tsao*,
Chin-Lun Fu*,
Shang-Fu Chen,
Yu-Chiang Frank Wang
ICME, 2022
arXiv
Based on the idea of representation disentanglement, we present a network architecture that extracts facial liveness, content, and domain features for domain-generalized face anti-spoofing.