ContraCLM: Contrastive Learning for Causal Language Model

Published in ACL, 2023


[Download Paper] [Source Code]

Despite exciting progress in large-scale language generation, the expressiveness of the learned representations is severely limited by the anisotropy issue, where the hidden representations are distributed in a narrow cone in the vector space. To address this issue, we present ContraCLM, a novel contrastive learning framework that improves the representations with better uniformity and discrimination. We assess ContraCLM on a wide range of downstream tasks in natural and programming languages. We show that ContraCLM effectively enhances both the uniformity and the discrimination of the representations, leading to the desired improvements on various language understanding tasks where discriminative representations are crucial for attaining good performance. Specifically, we attain a 44% relative improvement on Semantic Textual Similarity tasks and a 34% relative improvement on Code-to-Code Search tasks. Furthermore, by improving the expressiveness of the representations, ContraCLM also boosts source code generation capability, with a 9% relative improvement in execution accuracy on the HumanEval benchmark.
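To give a flavor of the contrastive objective involved, below is a minimal sketch of an InfoNCE-style sequence-level contrastive loss on language model representations, where two views of the same sequence (e.g., obtained under different dropout masks) act as positives and all other in-batch pairs act as negatives. This is an illustrative assumption about the general technique, not the paper's exact objective; the function name and the pooling/augmentation choices are hypothetical.

```python
import torch
import torch.nn.functional as F

def sequence_contrastive_loss(hidden_a, hidden_b, temperature=0.05):
    """InfoNCE-style sequence-level contrastive loss (illustrative sketch).

    hidden_a, hidden_b: (batch, dim) pooled representations of the same
    sequences under two views (e.g., two dropout masks). Matching rows are
    positive pairs; every other in-batch pair is a negative.
    """
    a = F.normalize(hidden_a, dim=-1)
    b = F.normalize(hidden_b, dim=-1)
    # (batch, batch) matrix of temperature-scaled cosine similarities
    logits = a @ b.t() / temperature
    # Diagonal entries are the positive pairs
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)
```

Pulling the positive pairs together while pushing apart all other pairs spreads the representations over the unit sphere, which is how a contrastive objective of this kind counteracts the anisotropy (narrow-cone) problem described above.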