DISENTANGLING REASONING FACTORS FOR NATURAL LANGUAGE INFERENCE

Blog Article

Natural Language Inference (NLI) seeks to determine the relation between two texts: a premise and a hypothesis. The two texts may share a similar basic context or differ in it, while three distinct reasoning factors govern the inference from premise to hypothesis: entailment, neutrality, and contradiction. However, the entanglement of the reasoning factor with the basic context in the learned representation space often complicates the task for NLI models, hindering accurate classification based on the reasoning factors.
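To make the three reasoning factors concrete, here is a small illustrative sketch (the sentence pairs are invented for this example, not drawn from any dataset): the same premise paired with three hypotheses, one per label.

```python
# Illustrative NLI triples (hypothetical examples): one premise, three
# hypotheses, each exhibiting a different reasoning factor.
examples = [
    {"premise": "A man is playing a guitar on stage.",
     "hypothesis": "A person is performing music.",
     "label": "entailment"},     # the hypothesis follows from the premise
    {"premise": "A man is playing a guitar on stage.",
     "hypothesis": "The concert is sold out.",
     "label": "neutral"},        # the premise neither confirms nor denies it
    {"premise": "A man is playing a guitar on stage.",
     "hypothesis": "The man is asleep at home.",
     "label": "contradiction"},  # the hypothesis conflicts with the premise
]

labels = sorted({ex["label"] for ex in examples})
print(labels)
```

Note that all three pairs share the same basic context (a man with a guitar); only the reasoning factor changes, which is exactly the signal the paper aims to isolate.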

In this study, drawing inspiration from the successful application of disentangled variational autoencoders in other areas, we separate the reasoning factor from the basic context of NLI data through latent variational inference. We further employ mutual information estimation to optimize the Variational AutoEncoder (VAE)-disentangled reasoning factors. Leveraging this disentanglement optimization for NLI, our proposed Directed NLI (DNLI) model achieves excellent performance compared to state-of-the-art baselines in experiments on three widely used datasets: Stanford Natural Language Inference (SNLI), Multi-genre Natural Language Inference (MNLI), and Adversarial Natural Language Inference (ANLI).
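The article does not give implementation details, but the general idea of VAE-style disentanglement can be sketched as follows. This is a hypothetical toy in NumPy, not the paper's actual DNLI code: a pair representation is split into a "reasoning factor" part, encoded variationally as a Gaussian, and a deterministic "basic context" part; the names (`encode`, `d_reason`) and dimensions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(pair_repr, d_reason=4):
    """Toy encoder: the first d_reason dims give the mean of the reasoning
    factor, the next d_reason its log-variance; the rest is kept as the
    deterministic basic context."""
    mu = pair_repr[:d_reason]
    log_var = pair_repr[d_reason:2 * d_reason]
    context = pair_repr[2 * d_reason:]
    return mu, log_var, context

def reparameterize(mu, log_var):
    """Standard VAE reparameterization trick: z = mu + sigma * eps."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(q(z|x) || N(0, I)), the usual VAE regularizer that
    pushes the reasoning-factor latent toward an isotropic prior."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

# A stand-in for a learned premise-hypothesis pair representation.
pair_repr = rng.standard_normal(16)
mu, log_var, context = encode(pair_repr)
z_reason = reparameterize(mu, log_var)
kl = kl_to_standard_normal(mu, log_var)
print(z_reason.shape, context.shape, float(kl))
```

In a full model, a mutual information estimator (e.g. a variational bound) would additionally penalize dependence between `z_reason` and `context`, encouraging the reasoning factor to carry only label-relevant information; that estimator is omitted here for brevity.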

In particular, DNLI achieves the best average validation scores, with significant improvements over the second-best models. Notably, our approach also mitigates the interpretability challenges commonly encountered in NLI methods.
