TrUMAn: Trope Understanding in Movies and Animations


Recent deep learning models master tasks such as action recognition, video search, and video question answering. However, they struggle when deep cognition skills are required. For instance, recommending a video whose story, plot, or sentiment resembles a recently watched one requires higher-level concepts than shallow visual semantics.

Image credit: Pxhere, CC0 Public Domain

A study on arXiv.org proposes a novel task called Trope Understanding in Movies and Animations. The accompanying dataset pairs video and audio signals, which reflect real-world interactions among people, with the tropes they convey. The proposed model jointly learns trope understanding and storytelling on a latent space in a multi-task manner.

Experimental results show that systems that perform well on conventional video comprehension benchmarks reach only about 12% accuracy on this task. The proposed approach lifts accuracy to roughly 14%, about two percentage points higher, still leaving substantial room for improvement.

Understanding and comprehending video content is crucial for many real-world applications such as search and recommendation systems. While recent progress of deep learning has boosted performance on various tasks using visual cues, deep cognition to reason about intentions, motivation, or causality remains challenging. Existing datasets that aim to examine video reasoning capability focus on visual signals such as actions, objects, and relations, or can be answered by exploiting text bias. Observing this, we propose a novel task, along with a new dataset: Trope Understanding in Movies and Animations (TrUMAn), intending to evaluate and develop learning systems beyond visual signals. Tropes are frequently used storytelling devices in creative works. By coping with the trope understanding task and enabling the deep cognition skills of machines, we are optimistic that data mining applications and algorithms could be taken to the next level. To tackle the challenging TrUMAn dataset, we present Trope Understanding and Storytelling (TrUSt), a model with a new Conceptual Storyteller module, which guides the video encoder by performing video storytelling on a latent space. The generated story embedding is then fed into the trope understanding model to provide further signals. Experimental results demonstrate that state-of-the-art learning systems on existing tasks reach only 12.01% accuracy with raw input signals. Even in the oracle case with human-annotated descriptions, BERT contextual embedding achieves at most 28% accuracy. Our proposed TrUSt boosts model performance and reaches 13.94% accuracy. We also provide detailed analysis to pave the way for future research. TrUMAn is publicly available; the project link is given in the paper.
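To make the multi-task idea in the abstract concrete, the sketch below pairs a video encoder with a storyteller head that produces a latent story embedding, which is then fed, together with the clip representation, into a trope classifier trained alongside an auxiliary story loss. All module names, feature dimensions, the pooling scheme, the number of trope classes, and the loss weighting are illustrative assumptions, not the authors' TrUSt implementation.

```python
# Hypothetical sketch of joint trope classification + latent storytelling.
import torch
import torch.nn as nn

class MultiTaskTropeModel(nn.Module):
    def __init__(self, feat_dim=2048, hidden_dim=512, story_dim=256, num_tropes=100):
        super().__init__()
        # Video encoder: projects pre-extracted per-frame features.
        self.video_encoder = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.ReLU())
        # Storyteller stand-in: maps the clip vector to a latent story embedding
        # (the paper's Conceptual Storyteller generates stories in latent space).
        self.storyteller = nn.Linear(hidden_dim, story_dim)
        # Trope classifier: consumes both the clip vector and the story embedding.
        self.trope_head = nn.Linear(hidden_dim + story_dim, num_tropes)

    def forward(self, frame_feats):
        # frame_feats: (batch, num_frames, feat_dim) pre-extracted visual features
        clip = self.video_encoder(frame_feats).mean(dim=1)    # (batch, hidden_dim)
        story = self.storyteller(clip)                         # (batch, story_dim)
        logits = self.trope_head(torch.cat([clip, story], dim=-1))
        return logits, story

# Multi-task training step (sketch): trope classification loss plus an auxiliary
# loss pulling the story embedding toward a target story representation
# (e.g., a sentence embedding of a human-written description).
model = MultiTaskTropeModel()
frame_feats = torch.randn(4, 16, 2048)        # dummy batch of frame features
trope_labels = torch.randint(0, 100, (4,))    # dummy trope labels
target_story = torch.randn(4, 256)            # dummy target story embeddings
logits, story = model(frame_feats)
loss = nn.functional.cross_entropy(logits, trope_labels) \
       + 0.5 * nn.functional.mse_loss(story, target_story)
loss.backward()
```

The point of the auxiliary story term is to force the video encoder to retain narrative-level information rather than only the shallow visual cues sufficient for classification; the actual TrUSt objective and architecture are described in the paper linked below.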

Research paper: Su, H.-T., Shen, P.-W., Tsai, B.-C., Cheng, W.-F., Wang, K.-J., and Hsu, W. H., “TrUMAn: Trope Understanding in Movies and Animations”, 2021. Link: https://arxiv.org/abs/2108.04542



