Multimodal Social Media Video Classification with Deep Neural Networks

Classifying videos according to their content is a common task across various media types, as it enables effective content tagging, indexing, and searching. In this work, we propose a general framework for video classification built on top of several neural network architectures. Since we rely on a multimodal approach, we extract both visual and textual features from videos and combine them in a final classification algorithm. When trained on a dataset of 30 000 social media videos and evaluated on 6 000 videos, our multimodal deep learning algorithm outperforms shallow single-modality classification methods by up to 95%, achieving an overall accuracy of 0.88.
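To make the multimodal idea concrete, the sketch below shows one common way to combine per-video visual and textual feature vectors in a single classifier: each modality is projected into a shared space, the projections are concatenated, and a small feed-forward network produces class scores. This is only a minimal illustration under assumed settings; the specific feature extractors, dimensions (2048 visual, 300 textual), fusion strategy, and class count are placeholders and are not specified in the abstract.

import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Fuses precomputed visual and textual feature vectors and
    predicts one of num_classes video categories (illustrative only)."""

    def __init__(self, visual_dim=2048, text_dim=300, hidden_dim=512, num_classes=10):
        super().__init__()
        # Project each modality into a shared hidden space.
        self.visual_proj = nn.Sequential(nn.Linear(visual_dim, hidden_dim), nn.ReLU())
        self.text_proj = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
        # Final classifier operates on the concatenated (fused) representation.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, visual_feats, text_feats):
        fused = torch.cat([self.visual_proj(visual_feats),
                           self.text_proj(text_feats)], dim=-1)
        return self.classifier(fused)


if __name__ == "__main__":
    # Random tensors stand in for extracted features, e.g. mean-pooled CNN
    # frame descriptors and averaged word embeddings of the video description.
    model = LateFusionClassifier()
    visual = torch.randn(4, 2048)
    text = torch.randn(4, 300)
    logits = model(visual, text)
    print(logits.shape)  # torch.Size([4, 10])

In practice the visual and textual branches would be fed by dedicated feature extractors, and the fusion layer is trained jointly with the classifier on the labelled videos.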

Author: Tomasz Trzcinski