Hi, I am a graduate student in the Collaborative Robotics and Intelligent Systems (CoRIS) Institute at Oregon State University, where I am advised by Prof. Fuxin Li. My research interests mainly lie in the areas of computer vision, interpretable AI models, optimization, and reinforcement learning.

Before attending Oregon State, I was an undergraduate at Amirkabir University of Technology, where I was a member of the Multimedia Signal Processing Research Lab (MSPRL) under the supervision of Prof. Hamid Sheikhzadeh.


    Visualizing Deep Networks by Optimizing with Integrated Gradients (AAAI 20)

    Zhongang Qi, Saeed Khorram, Fuxin Li
    [PDF] [Demo] [Code]

    Understanding and interpreting the decisions made by deep learning models is valuable in many domains. In computer vision, computing heatmaps from a deep network is a popular approach for visualizing and understanding deep networks. However, heatmaps that do not correlate with the network may mislead humans, hence the faithfulness of a heatmap to the underlying deep network is crucial. In this paper, we propose I-GOS, which optimizes for a heatmap so that the classification scores on the masked image would maximally decrease. The main novelty of the approach is to compute descent directions based on the integrated gradients instead of the normal gradient, which avoids local optima and speeds up convergence. Compared with previous approaches, our method can flexibly compute heatmaps at any resolution for different user needs. Extensive experiments on several benchmark datasets show that the heatmaps produced by our approach are more correlated with the decision of the underlying deep network, in comparison with other state-of-the-art approaches.
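The core idea above can be sketched in a few lines of NumPy: optimize a mask so that the score on the masked image drops, using integrated gradients (gradients averaged along a straight-line path from an all-masked baseline) as the descent direction instead of the local gradient. This is a minimal toy sketch, not the paper's implementation: the score function, step sizes, and the simple L1 penalty on the deleted region are all illustrative assumptions, and numerical gradients stand in for backpropagation.

```python
import numpy as np

def numerical_grad(f, x, eps=1e-4):
    """Central-difference gradient of a scalar function f at x
    (stands in for backprop through a real network)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d.flat[i] = eps
        g.flat[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def integrated_gradient(f, x, baseline, steps=20):
    """Average the gradient of f along the straight-line path from
    baseline to x, scaled by (x - baseline)."""
    total = np.zeros_like(x)
    for a in np.linspace(0.0, 1.0, steps):
        total += numerical_grad(f, baseline + a * (x - baseline))
    return (x - baseline) * total / steps

def igos_mask(score_fn, image, blurred, steps=50, lr=0.5, lam=0.01):
    """Toy I-GOS-style loop: descend on a mask M in [0, 1] so the class
    score of the masked image drops, with a small L1 penalty on the
    deleted region (1 - M) to keep the heatmap sparse."""
    mask = np.ones_like(image)  # start fully unmasked
    for _ in range(steps):
        masked_score = lambda m: score_fn(image * m + blurred * (1 - m))
        # descent direction from integrated gradients, baseline = all-masked
        step = integrated_gradient(masked_score, mask, np.zeros_like(mask))
        mask = np.clip(mask - lr * (step - lam), 0.0, 1.0)
    return mask
```

On a toy "network" whose score depends on a single pixel, the loop drives the mask toward zero at exactly that pixel while leaving the rest untouched, which is the behavior a faithful heatmap should show.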

    Embedding Deep Networks into Visual Explanations (Under Review - PAMI)

    Zhongang Qi, Saeed Khorram, Fuxin Li
    [PDF] [Code] [User Sudy]

    We propose a novel Explanation Neural Network (XNN) to explain the predictions made by a deep network. The XNN works by embedding a high-dimensional activation vector of a deep network layer non-linearly into a low-dimensional explanation space while retaining faithfulness, i.e., the original deep learning predictions can be reconstructed from the few concepts extracted by our explanation network. We then visualize these concepts for humans to learn about the high-level concepts that deep learning is using to make decisions. We propose an algorithm called Sparse Reconstruction Autoencoder (SRAE) for learning the embedding to the explanation space. SRAE aims to reconstruct part of the original feature space while retaining faithfulness. A pull-away term is applied to SRAE to make the explanation space more orthogonal. A visualization system is then introduced for human understanding of the features in the explanation space. The proposed method is applied to explain CNN models in image classification tasks. We conducted a human study, which shows that the proposed approach outperforms a saliency map baseline and improves human performance on a difficult classification task. Also, several novel metrics are introduced to evaluate the performance of explanations quantitatively without human involvement.
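The training objective described above combines three ingredients: faithfulness (predictions reconstructed from the explanation space), partial reconstruction of the original features, and a pull-away term that pushes explanation dimensions toward orthogonality. A minimal NumPy sketch of such a combined loss follows; the weighting coefficients and the exact form of each term are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def srae_loss(x, x_hat, y, y_hat, E, alpha=1.0, beta=0.1):
    """Toy SRAE-style objective.
    x, x_hat : original features and their (partial) reconstruction
    y, y_hat : network predictions and predictions rebuilt from E
    E        : explanation matrix, one column per explanation dimension
    """
    faithfulness = np.mean((y - y_hat) ** 2)    # keep predictions recoverable
    reconstruction = np.mean((x - x_hat) ** 2)  # reconstruct part of feature space
    # pull-away term: mean squared cosine similarity between distinct
    # explanation dimensions (columns of E); zero when they are orthogonal
    En = E / (np.linalg.norm(E, axis=0, keepdims=True) + 1e-8)
    C = En.T @ En
    k = E.shape[1]
    pull_away = (np.sum(C ** 2) - k) / (k * (k - 1))
    return faithfulness + alpha * reconstruction + beta * pull_away
```

With orthogonal explanation dimensions the pull-away contribution vanishes, while duplicated dimensions are penalized, which is the effect the term is meant to have on the explanation space.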

  • Mohamadreza Jafaryani, Saeed Khorram, Vahid Pourahmadi, and Minoo Shahbazi, "Sleep Stage Scoring Using Joint Frequency-Temporal and Unsupervised Features," International Conference on New Research Achievements in Electrical and Computer Engineering (ICNRAECE), IEEE, 2016. [PDF]

  • Mehran Soltani, Mohammad Hasan Shammakhi, Saeed Khorram, and Hamid Sheikhzadeh, "Combined mRMR filter and sparse Bayesian classifier for analysis of gene expression data," International Conference of Signal Processing and Intelligent Systems (ICSPIS), IEEE, 2016. [PDF]