We introduce GQA, a new dataset for real-world visual reasoning and compositional question answering, seeking to address key shortcomings of previous VQA datasets. We have developed a strong and robust question engine that leverages Visual Genome scene graph structures to create 22M diverse reasoning questions, which all come with functional programs that represent their semantics. We use the programs to gain tight control over the answer distribution and present a new tunable smoothing technique to mitigate question biases. Accompanying the dataset is a suite of new metrics that evaluate essential qualities such as consistency, grounding and plausibility. A careful analysis is performed for baselines as well as state-of-the-art models, providing fine-grained results for different question types and topologies. Whereas a blind LSTM obtains a mere 42.1%, and strong VQA models achieve 54.1%, human performance tops at 89.3%, offering ample opportunity for new research to explore. We hope GQA will provide an enabling resource for the next generation of models with enhanced robustness, improved consistency, and deeper semantic understanding of vision and language.
This introduces GQA, a new dataset for reasoning over real-world images and compositional question answering, aimed at addressing the shortcomings of earlier VQA datasets. The authors developed a question engine that leverages the scene graphs of the Visual Genome dataset to construct 22 million diverse reasoning questions. Each question comes with a functional program representing its semantics; these programs give tight control over the answer distribution, and a new technique is used to reduce question biases. The dataset is accompanied by a suite of new evaluation metrics, and both baselines and state-of-the-art models are analyzed in depth.
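To make the "tunable smoothing" idea concrete, here is a minimal sketch of one way an answer distribution could be flattened with a single tunable parameter. This is an illustrative power-law scheme with made-up names (`smooth_answer_distribution`, `alpha`), not the paper's actual procedure, which operates on the question programs themselves:

```python
from collections import Counter

def smooth_answer_distribution(answers, alpha=0.5):
    """Flatten an answer frequency distribution with a tunable exponent.

    alpha=1.0 keeps the empirical distribution unchanged; alpha=0.0
    makes it uniform. Values in between trade off naturalness of the
    answer statistics against bias reduction. (Illustrative only --
    not GQA's exact smoothing procedure.)
    """
    counts = Counter(answers)
    # Raise each count to the power alpha, then renormalize.
    weights = {a: c ** alpha for a, c in counts.items()}
    total = sum(weights.values())
    return {a: w / total for a, w in weights.items()}

# A head-heavy distribution (a dominant "yes" answer) becomes more
# balanced, so a model can no longer score well by always guessing it.
dist = smooth_answer_distribution(["yes"] * 90 + ["no"] * 9 + ["red"],
                                  alpha=0.5)
```

With `alpha=0.5`, the dominant answer's share drops (here from 90% to roughly 70%) while the relative ordering of answers is preserved; lowering `alpha` further pushes the distribution toward uniform.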