Da Chang (昌达)
Ph.D. student
Pengcheng Laboratory, Shenzhen, China
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
Email: changda24@mails.ucas.ac.cn
[Google Scholar]
[LinkedIn]
[GitHub]
About me
I graduated from the Department of Intelligent Science and Technology, School of Automation, Central South University.
Currently, I am a jointly educated Ph.D. candidate in a collaborative program between the Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (SIAT) and Pengcheng Laboratory (PCL).
My major is Pattern Recognition and Intelligent Systems. My research interests focus on the optimization and generalization of deep learning and on applying deep models across a variety of areas.
I am very interested in both the theory and the applications of deep learning, and I would be glad to discuss neural network training techniques, application scenarios, and optimization theory with you.
News
Research that I lead
AlphaAdam: Asynchronous Masked Optimization with Dynamic Alpha for Selective Updates
Da Chang, Yu Li, Ganzhao Yuan
2025.1, Preprint
We developed AlphaAdam, an optimization framework for LLMs with asynchronous intra-layer parameter updates. It constructs parameter masks from the consistency between historical momentum and current gradients, paired with an adaptive mask-strength strategy, achieving efficient optimization with convergence guarantees.
AlphaAdam outperforms AdamW in convergence speed and computational efficiency across GPT-2, RoBERTa, and Llama-7B tasks.
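To give a flavor of the approach, here is a minimal PyTorch sketch of momentum-gradient consistency masking; the mask rule, the fixed alpha, and all names are my own illustrative simplifications, not the exact algorithm from the preprint.

import torch

def masked_adam_step(param, grad, exp_avg, exp_avg_sq, step,
                     lr=1e-3, betas=(0.9, 0.999), eps=1e-8, alpha=0.5):
    # One Adam-style update that fully applies only those coordinates whose
    # historical momentum agrees in sign with the current gradient;
    # alpha (hypothetical here) controls the mask strength.
    beta1, beta2 = betas
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)               # first moment
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)  # second moment

    consistent = (exp_avg.sign() == grad.sign()).float()  # consistency mask
    mask = alpha * consistent + (1 - alpha)               # soften the mask

    denom = (exp_avg_sq / (1 - beta2 ** step)).sqrt().add_(eps)
    param.addcdiv_(mask * exp_avg / (1 - beta1 ** step), denom, value=-lr)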
IKUN: Initialization to Keep snn training and generalization great with sUrrogate-stable variaNce
Da Chang, Deliang Wang, Xiao Yang
2024.11
We propose IKUN, a variance-stabilizing initialization method integrated with surrogate gradient functions and designed specifically for spiking neural networks (SNNs). IKUN stabilizes signal propagation, accelerates convergence, and improves generalization. Hessian analysis further reveals that models trained with IKUN converge to flatter minima, which promotes better generalization. A toy sketch of the initialization idea appears after the paper link below.
Please note, this is currently just a course project, but it’s still quite interesting!
paper
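Below is a toy PyTorch sketch of the variance-stabilizing idea, assuming a fast-sigmoid surrogate gradient; the scaling rule and all names are illustrative simplifications rather than the exact IKUN derivation.

import math
import torch

def surrogate_grad(x, k=25.0):
    # Derivative of a fast-sigmoid surrogate for the spike nonlinearity.
    return 1.0 / (1.0 + k * x.abs()) ** 2

def ikun_like_init(weight, threshold=1.0, n_samples=100_000):
    # Scale a fan-in initialization so that signal variance survives
    # the surrogate gradient during backpropagation.
    fan_in = weight.shape[1]
    u = torch.randn(n_samples) * threshold  # assumed membrane-potential spread
    second_moment = surrogate_grad(u).pow(2).mean().item()
    std = math.sqrt(1.0 / (fan_in * second_moment))
    with torch.no_grad():
        weight.normal_(0.0, std)

layer = torch.nn.Linear(256, 256)
ikun_like_init(layer.weight)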
DLoRA-TrOCR: Mixed Text Mode Optical Character Recognition Based On Transformer
Da Chang, Yu Li
ICONIP 2024, 2024.4
We explored various parameter-efficient alternatives to full-parameter fine-tuning, such as LoRA, in vision-language models. For OCR, a vision-text hybrid task built on the Transformer architecture, DoRA and LoRA bring substantial improvements to the visual encoder and the text decoder, respectively, on mixed datasets covering handwriting, print, and street-view text. A minimal fine-tuning sketch appears after the links below.
paper, code
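As a rough illustration, the following sketch wires LoRA into the public TrOCR checkpoint with Hugging Face peft; the rank, target-module names, and hyperparameters are assumptions for demonstration, not the configuration from the paper.

from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from peft import LoraConfig, get_peft_model

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

lora_config = LoraConfig(
    r=8,                      # illustrative rank, not the paper's value
    lora_alpha=16,
    target_modules=["query", "value",     # ViT encoder attention
                    "q_proj", "v_proj"],  # text decoder attention
    lora_dropout=0.05,
    # use_dora=True,  # recent peft versions can switch LoRA to DoRA this way
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapters are trainable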
Research that I proudly participate in
SfMDiffusion: Self-Supervised Monocular Depth Estimation in Endoscopy Based on Diffusion Models
Yu Li, Da Chang, Jin Huang, Lan Dong, Du Wang, Liye Mei, Cheng Lei
International Journal of Computer Assisted Radiology and Surgery, 2024.6
For endoscopic medical scenarios, we use a diffusion model for self-supervised monocular depth estimation. We build a teacher model, design knowledge-distillation, photometric-appearance, and DDIM losses, and introduce the teacher's discriminative prior, which significantly enhances the accuracy and reliability of the results. A hedged sketch of the loss combination appears after the code link below.
code
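The sketch below shows one way the three training signals might be combined; the loss weights, names, and exact terms are my own illustration, not the paper's definitions.

import torch.nn.functional as F

def total_loss(student_depth, teacher_depth, pred_noise, true_noise,
               photometric_err, w_distill=1.0, w_ddim=1.0, w_photo=1.0):
    # Distillation from the teacher's discriminative prior, the DDIM
    # denoising objective, and a photometric appearance term.
    distill = F.l1_loss(student_depth, teacher_depth)
    ddim = F.mse_loss(pred_noise, true_noise)
    photo = photometric_err.mean()
    return w_distill * distill + w_ddim * ddim + w_photo * photo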
Research on National Image Based on Social Sentiment Analysis of Modern International Events
Xuechi Chen, Haifeng Lin, Da Chang
Third prize of the 9th National Statistical Modeling Competition for College Students, 2023.8
We collected texts on the theme of the "Beijing Winter Olympics" from the domestic Weibo platform and the overseas Twitter platform. Using fine-tuned BERT for tokenization and sentiment analysis, we mined the latent detail tags of the texts, applied topic modeling to identify consistent topics, and thereby constructed a national-image visualization model for qualitative analysis. A minimal sentiment-analysis sketch is shown below.
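For illustration, the sentiment-analysis step with a generic multilingual BERT checkpoint might look like the sketch below; the checkpoint is an assumed stand-in, since the fine-tuned model from the competition is not public.

from transformers import pipeline

# Assumed stand-in checkpoint, not the competition's fine-tuned model.
classifier = pipeline(
    "sentiment-analysis",
    model="nlptown/bert-base-multilingual-uncased-sentiment",
)

posts = [
    "The opening ceremony of the Beijing Winter Olympics was spectacular!",
    "Ticketing for the events was confusing and slow.",
]
for post, result in zip(posts, classifier(posts)):
    print(f"{result['label']:>7}  {result['score']:.3f}  {post}")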
Honors and Awards
Skills