Write a Classifier: Zero-Shot Learning Using Purely Textual Descriptions

Mohamed Elhoseiny*, Babak Saleh, Ahmed Elgammal
(* Joint first authors)
ICCV 2013


Abstract

The main question we address in this paper is how to use a purely textual description of categories, with no training images, to learn visual classifiers for these categories. We propose an approach for zero-shot learning of object categories where the description of unseen categories comes in the form of typical text such as an encyclopedia entry, without the need for explicitly defined attributes. We propose and investigate two baseline formulations, based on regression and domain adaptation. We then propose a new constrained optimization formulation that combines a regression function and a knowledge transfer function with additional constraints to predict the classifier parameters for new classes. We apply the proposed approach to two fine-grained categorization datasets, and the results indicate successful classifier prediction.
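
To make the regression baseline concrete, below is a minimal sketch in Python, assuming linear one-vs-all classifiers have already been trained on images of the seen classes and feature vectors have been extracted from each class's textual description. The ridge-regression choice and all names (T_seen, C_seen, fit_text_to_classifier, etc.) are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def fit_text_to_classifier(T_seen, C_seen, lam=1.0):
    # Illustrative sketch, not the authors' code.
    # T_seen: (n_seen, d_t) text features, one row per seen class.
    # C_seen: (n_seen, d_v) linear classifier weights learned on
    #         images of the seen classes (e.g., one-vs-all hyperplanes).
    # Ridge regression: W = argmin ||T W - C||_F^2 + lam ||W||_F^2,
    # with closed form W = (T^T T + lam I)^{-1} T^T C.
    d_t = T_seen.shape[1]
    return np.linalg.solve(T_seen.T @ T_seen + lam * np.eye(d_t),
                           T_seen.T @ C_seen)

def predict_classifier(W, t_new):
    # Predict a classifier hyperplane for an unseen class from the
    # feature vector t_new (shape (d_t,)) of its textual description.
    return t_new @ W  # shape (d_v,)

Usage: given an image feature x (shape (d_v,)), score it against the predicted hyperplane with x @ predict_classifier(W, t_new); a positive score indicates the image matches the unseen class. The paper's full formulation additionally combines this regression with a knowledge transfer function and extra constraints.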



Problem Definition: Zero-shot learning with textual descriptions. Left: synopsis of textual descriptions for bird classes. Middle: images of "seen classes". Right: classifier hyperplanes in the feature space. The goal is to estimate the parameters of a classifier for a new class given only its textual description.


Paper

Paper [Elhoseiny2013ZeroShot.pdf]
Supplemental material [Elhoseiny2013ZeroShot_supplementary.zip]

Citation

@INPROCEEDINGS{6751432,
      author={Elhoseiny, Mohamed and Saleh, Babak and Elgammal, Ahmed},
      booktitle={2013 IEEE International Conference on Computer Vision}, 
      title={Write a Classifier: Zero-Shot Learning Using Purely Textual Descriptions}, 
      year={2013},
      pages={2584-2591},
      keywords={Visualization;Training;Optimization;Correlation;Transfer functions;Semantics;Birds;Zero shot learning;object recognition;fine grained object recognition;computer vision;domain adaptation},
      doi={10.1109/ICCV.2013.321}}