NeurIPS 2022 | A Categorized List of NLP-Related Papers

October 2, 2022 | Zhuanzhi
  © Author | Wang Xiaolei
  Affiliation | Gaoling School of Artificial Intelligence, Renmin University of China
  Research interests | Conversational information seeking

This post selects more than 200 NLP-related papers from the 2,000+ papers accepted to NeurIPS 2022 and organizes them by research topic for reference.


Overview:

NeurIPS 2022 is a CCF Class-A conference and one of the top international venues in artificial intelligence. The 36th Conference on Neural Information Processing Systems will be held from November 28 to December 9 this year. The official list of accepted papers is available at: https://nips.cc/Conferences/2022/Schedule?type=Poster.

From the 2,000+ accepted papers, we selected more than 200 NLP-related papers and organized them by research topic for reference. The paper list is also kept up to date on GitHub; you are welcome to follow and star it: github.com/RUCAIBox/Top-conference-paper-list.
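As a rough illustration of the selection step described above, here is a minimal, hypothetical sketch of a keyword-based filter over the accepted-paper titles. The input file name (neurips2022_titles.txt), the keyword lists, and the topic buckets are assumptions for illustration only; they are not the actual procedure used to compile this list.

```python
# Hypothetical sketch: bucket NeurIPS 2022 paper titles into rough NLP-related
# topics by keyword matching. Input file and keyword lists are assumptions.
from collections import defaultdict

TOPIC_KEYWORDS = {
    "Model": ["transformer", "language model", "mixture-of-experts"],
    "Text Generation": ["text generation", "summarization", "decoding"],
    "Multimodality": ["vision-language", "image-text", "multimodal"],
    "Knowledge and Reasoning": ["reasoning", "knowledge graph", "symbolic"],
}

def categorize(titles):
    """Assign each title to every topic whose keywords appear in it."""
    buckets = defaultdict(list)
    for title in titles:
        lower = title.lower()
        for topic, keywords in TOPIC_KEYWORDS.items():
            if any(kw in lower for kw in keywords):
                buckets[topic].append(title)
    return buckets

if __name__ == "__main__":
    # One title per line, e.g. exported from the official schedule page.
    with open("neurips2022_titles.txt", encoding="utf-8") as f:
        titles = [line.strip() for line in f if line.strip()]
    for topic, papers in categorize(titles).items():
        print(f"{topic}: {len(papers)} papers")
```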

Contents:

  • Model

  • Interpretability, Analysis and Evaluation

  • Robustness and Safety

  • Knowledge and Reasoning

  • Information Extraction

  • Information Retrieval

  • Text Classification

  • Text Generation

  • Machine Translation and Multilinguality

  • Multimodality

  • Special Tasks


01

Model


1. Model Design

  • Recurrent Memory Transformer

  • Jump Self-attention: Capturing High-order Statistics in Transformers

  • Block-Recurrent Transformers

  • Staircase Attention for Recurrent Processing of Sequences

  • Non-Linguistic Supervision for Contrastive Learning of Sentence Embeddings

  • Transcormer: Transformer for Sentence Scoring with Sliding Language Modeling

  • Mixture-of-Experts with Expert Choice Routing

  • On the Representation Collapse of Sparse Mixture of Experts

  • Improving Transformer with an Admixture of Attention Heads

  • Your Transformer May Not be as Powerful as You Expect

  • Confident Adaptive Language Modeling

  • Decoupled Context Processing for Context Augmented Language Modeling

  • Unsupervised Cross-Task Generalization via Retrieval Augmentation

  • Revisiting Neural Scaling Laws in Language and Vision

  • Learning to Scaffold: Optimizing Model Explanations for Teaching

2. Model Compression

  • Information-Theoretic Generative Model Compression with Variational Energy-based Model

  • Towards Efficient Post-training Quantization of Pre-trained Language Models

  • Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models

  • Deep Compression of Pre-trained Transformer Models

  • LiteTransformerSearch: Training-free On-device Search for Efficient Autoregressive Language Models

  • GPT3.int8(): 8-bit Matrix Multiplication for Transformers at Scale

  • MorphTE: Injecting Morphology in Tensorized Embeddings

  • Few-shot Task-agnostic Neural Architecture Search for Distilling Large Language Models

  • A Fast Post-Training Pruning Framework for Transformers


3. Model Training

  • Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models

  • Generating Training Data with Language Models: Towards Zero-Shot Language Understanding

  • A Data-Augmentation Is Worth A Thousand Samples

  • TokenMixup: Efficient Attention-guided Token-level Data Augmentation for Transformers

  • The Stability-Efficiency Dilemma: Investigating Sequence Length Warmup for Training GPT Models

  • Tempo: Accelerating Transformer-Based Model Training through Memory Footprint Reduction

  • Training and Inference on Any-Order Autoregressive Models the Right Way

  • Decentralized Training of Foundation Models in Heterogeneous Environments

4. Model Usage

  • The Unreliability of Explanations in Few-Shot In-Context Learning

  • What Can Transformers Learn In-Context? A Case Study of Simple Function Classes

  • Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning

  • Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning

  • Training language models to follow instructions with human feedback

  • LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning

  • How to talk to your model: Instructions, descriptions, and learning

  • Data Distributional Properties Drive Emergent In-Context Learning in Transformers

  • Sparse Structure Search for Parameter-Efficient Tuning

  • Fine-Tuning Pre-Trained Language Models Effectively by Optimizing Subnetworks Adaptively

  • Second Thoughts are Best: Learning to Re-Align With Human Values from Text Edits

  • LIFT: Language-Interfaced FineTuning for Non-language Machine Learning Tasks

  • Adapting to Domain Shift by Meta-Distillation from Mixture-of-Experts


02

Interpretability, Analysis and Evaluation


  • CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior

  • Rule-Based but Flexible? Evaluating and Improving Language Models as Accounts of Human Moral Judgment

  • Understanding the Failure of Batch Normalization for Transformers in NLP

  • AttCAT: Explaining Transformers via Attentive Class Activation Tokens

  • An empirical analysis of compute-optimal large language model training

  • Why GANs are overkill for NLP

  • Exploring Length Generalization in Large Language Models

  • Capturing Failures of Large Language Models via Human Cognitive Biases

  • Pre-Trained Model Reusability Evaluation for Small-Data Transfer Learning

  • First is Better Than Last for Language Data Influence

  • What are the best Systems? New Perspectives on NLP Benchmarking

  • Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models

  • FETA: Towards Specializing Foundational Models for Expert Task Applications

  • This is the way - lessons learned from designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish

  • Rethinking Knowledge Graph Evaluation Under the Open-World Assumption

  • A Multi-Task Benchmark for Korean Legal Language Understanding and Judgement Prediction


03

Robustness and Safety


  • Active Learning Helps Pretrained Models Learn the Intended Task

  • Improving Certified Robustness via Statistical Learning with Logical Reasoning

  • Moderate-fitting as a Natural Backdoor Defender for Pre-trained Language Models

  • BadPrompt: Backdoor Attacks on Continuous Prompts

  • A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models

  • Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models

  • AD-DROP: Attribution Driven Dropout for Robust Language Model Finetuning

  • Large (robust) models from computational constraints

  • Multitasking Models are Robust to Structural Failure: A Neural Model for Bilingual Cognitive Reserve

  • A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks

  • Recovering Private Text in Federated Learning of Language Models

  • LAMP: Extracting Text from Gradients with Language Model Priors

  • SeqPATE: Differentially Private Text Generation via Knowledge Distillation

  • Differentially Private Model Compression

  • Federated Learning from Pre-Trained Models: A Contrastive Learning Approach

04

Knowledge and Reasoning


  • Learning to Sample and Aggregate: Few-shot Reasoning over Temporal Knowledge Graph

  • Retaining Knowledge for Learning with Dynamic Definition

  • Shadow Knowledge Distillation: Bridging Offline and Online Knowledge Transfer

  • What Makes a "Good" Data Augmentation in Knowledge Distillation - A Statistical Perspective

  • Learning to Reason with Neural Networks: Generalization, Unseen Data and Boolean Measures

  • Roadblocks for Temporarily Disabling Shortcuts and Learning New Knowledge

  • PALBERT: Teaching ALBERT to Ponder

  • Locating and Editing Factual Associations in GPT

  • OTKGE: Multi-modal Knowledge Graph Embeddings via Optimal Transport

  • Large Language Models are Zero-Shot Reasoners

  • STaR: Bootstrapping Reasoning With Reasoning

  • Chain of Thought Prompting Elicits Reasoning in Large Language Models

  • ELASTIC: Numerical Reasoning with Adaptive Symbolic Compiler

  • Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering

  • Inductive Logical Query Answering in Knowledge Graphs

  • Formalizing Coherence and Consistency Applied to Transfer Learning in Neuro-Symbolic Autoencoders

  • CoNSoLe: Convex Neural Symbolic Learning

  • Deep Bidirectional Language-Knowledge Pretraining

  • Neurosymbolic Deep Generative Models for Sequence Data with Relational Constraints

  • Instance-based Learning for Knowledge Base Completion

  • LogiGAN: Learning Logical Reasoning via Adversarial Pre-training

  • Learning robust rule representations for abstract reasoning via internal inferences

  • Solving Quantitative Reasoning Problems with Language Models

  • Towards Better Evaluation for Dynamic Link Prediction

  • Predictive Querying for Autoregressive Neural Sequence Models

  • Semantic Probabilistic Layers for Neuro-Symbolic Learning

  • End-to-end Symbolic Regression with Transformers

  • A Unified Framework for Deep Symbolic Regression

  • ZeroC: A Neuro-Symbolic Model for Zero-shot Concept Recognition and Acquisition at Inference Time


05

Information Extraction


  • Unifying Information Extraction with Latent Adaptive Structure-aware Generative Language Model

  • TweetNERD - End to End Entity Linking Benchmark for Tweets

  • METS-CoV: A Dataset of Medical Entity and Targeted Sentiment on COVID-19 Related Tweets


06

Information Retrieval


  • Transformer Memory as a Differentiable Search Index

  • Autoregressive Search Engines: Generating Substrings as Document Identifiers

  • A Neural Corpus Indexer for Document Retrieval


07

Text Classification


  • CascadeXML: End-to-end Multi-Resolution Learning for Extreme Multi-Label Text Classification

  • Text Classification with Born's Rule

  • Public Wisdom Matters! Discourse-Aware Hyperbolic Fourier Co-Attention for Social Text Classification


08

Text Generation


  • CoNT: Contrastive Neural Text Generation

  • A Character-Level Length Control Algorithm for Non-Autoregressive Sentence Summarization

  • Towards Improving Faithfulness in Abstractive Summarization

  • QUARK: Controllable Text Generation with Reinforced Unlearning

  • Teacher Forcing Recovers Reward Functions for Text Generation

  • Retrieve, Reason, and Refine: Generating Accurate and Faithful Patient Instructions

  • A Contrastive Framework for Neural Text Generation

  • Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation

  • COLD Decoding: Energy-based Constrained Text Generation with Langevin Dynamics

  • Diffusion-LM Improves Controllable Text Generation

  • Factuality Enhanced Language Models for Open-Ended Text Generation

  • Controllable Text Generation with Neurally-Decomposed Oracle

  • InsNet: An Efficient, Flexible, and Performant Insertion-based Text Generation Model

  • Relation-Constrained Decoding for Text Generation

  • EHRSQL: A Practical Text-to-SQL Benchmark for Electronic Health Records

  • TGEA 2.0: A Large-Scale Diagnostically Annotated Dataset with Benchmark Tasks for Text Generation of Pretrained Language Models


09

Machine Translation and Multilinguality


  • Exploring Non-Monotonic Latent Alignments for Non-Autoregressive Machine Translation

  • A new dataset for multilingual keyphrase generation

  • Less-forgetting Multi-lingual Fine-tuning

  • Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing

  • Refining Low-Resource Unsupervised Translation by Language Disentanglement of Multilingual Translation Model

  • OccGen: Selection of Real-world Multilingual Parallel Data Balanced in Gender within Occupations

  • Multilingual Abusive Comment Detection at Scale for Indic Languages

  • The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset

  • Addressing Resource Scarcity across Sign Languages with Multilingual Pretraining and Unified-Vocabulary Datasets


10

Multimodality


  • REVIVE: Regional Visual Representation Matters in Knowledge-Based Visual Question Answering

  • Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning

  • GLIPv2: Unifying Localization and Vision-Language Understanding

  • VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts

  • A Differentiable Semantic Metric Approximation in Probabilistic Embedding for Cross-Modal Retrieval

  • Egocentric Video-Language Pretraining

  • Flamingo: a Visual Language Model for Few-Shot Learning

  • Language Conditioned Spatial Relation Reasoning for 3D Object Grounding

  • Multi-Granularity Cross-modal Alignment for Generalized Medical Visual Representation Learning

  • Deep Multi-Modal Structural Equations For Causal Effect Estimation With Unstructured Proxies

  • OmniVL: One Foundation Model for Image-Language and Video-Language Tasks

  • Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models

  • Visual Clues: Bridging Vision and Language Foundations for Image Paragraph Captioning

  • TVLT: Textless Vision-Language Transformer

  • Divert More Attention to Vision-Language Tracking

  • CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers

  • Text-Adaptive Multiple Visual Prototype Matching for Video-Text Retrieval

  • BMU-MoCo: Bidirectional Momentum Update For Continual Video-Language Modeling

  • Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations

  • What is Where by Looking: Weakly-Supervised Open-World Phrase-Grounding without Text Inputs

  • Self-Supervised Multi-Granularity Map Learning for Vision-and-Language Navigation

  • UniCLIP: Unified Framework for Contrastive Language-Image Pre-training

  • Contrastive Language-Image Pre-Training with Knowledge Graphs

  • PyramidCLIP: Hierarchical Feature Alignment for Vision-language Model Pretraining

  • Enhancing and Scaling Cross-Modality Alignment for Contrastive Multimodal Pre-Training via Gradient Harmonization

  • Mutual Information Divergence: A Unified Metric for Multimodal Generative Models

  • Transferring Pre-trained Multimodal Representations with Cross-modal Similarity Matching

  • MACK: Multimodal Aligned Conceptual Knowledge for Unpaired Image-text Matching

  • HUMANISE: Language-conditioned Human Motion Generation in 3D Scenes

  • CyCLIP: Cyclic Contrastive Language-Image Pretraining

  • S-Prompts Learning with Pre-trained Transformers: An Occam’s Razor for Domain Incremental Learning

  • Delving into OOD Detection with Vision-Language Representations

  • Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding

  • Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners

  • DetCLIP: Dictionary-Enriched Visual-Concept Paralleled Pre-training for Open-world Detection

  • Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts

  • Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone

  • CoupAlign: Coupling Word-Pixel with Sentence-Mask Alignments for Referring Image Segmentation

  • Relational Language-Image Pre-training for Human-Object Interaction Detection

  • Fine-Grained Semantically Aligned Vision-Language Pre-Training

  • Cross-Linked Unified Embedding for cross-modality representation learning

  • Quality Not Quantity: On the Interaction between Dataset Design and Robustness of CLIP

  • Kernel Multimodal Continuous Attention

  • Paraphrasing Is All You Need for Novel Object Captioning

  • Long-Form Video-Language Pre-Training with Multimodal Temporal Contrastive Learning

  • CLIPDraw: Exploring Text-to-Drawing Synthesis through Language-Image Encoders

  • One Model to Edit Them All: Free-Form Text-Driven Image Manipulation with Semantic Modulations

  • LGDN: Language-Guided Denoising Network for Video-Language Modeling

  • Zero-Shot Video Question Answering via Frozen Bidirectional Language Models

  • WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models

  • VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation

  • ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models

  • LAION-5B: An open large-scale dataset for training next generation image-text models

  • Towards Video Text Visual Question Answering: Benchmark and Baseline

  • TaiSu: A 166M Large-scale High-Quality Dataset for Chinese Vision-Language Pre-training

  • Wukong: A 100 Million Large-scale Chinese Cross-modal Pre-training Benchmark

  • Understanding Aesthetics with Language: A Photo Critique Dataset for Aesthetic Assessment

  • Multi-modal Robustness Analysis Against Language and Visual Perturbations

  • CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks

  • OrdinalCLIP: Learning Rank Prompts for Language-Guided Ordinal Regression


11

Special Tasks


1. Code

  • CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning

  • Fault-Aware Neural Code Rankers

  • NS3: Neuro-symbolic Semantic Code Search

  • Pyramid Attention For Source Code Summarization

2. Mathematics

  • HyperTree Proof Search for Neural Theorem Proving

  • NaturalProver: Grounded Mathematical Proof Generation with Language Models

  • Autoformalization with Large Language Models

  • Thor: Wielding Hammers to Integrate Language Models and Automated Theorem Provers

3. Others

  • Measuring and Reducing Model Update Regression in Structured Prediction for NLP

  • Learning to Follow Instructions in Text-Based Games

  • WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents

  • LISA: Learning Interpretable Skill Abstractions from Language

  • Inherently Explainable Reinforcement Learning in Natural Language

  • Using natural language and program abstractions to instill human inductive biases in machines

  • Semantic Exploration from Language Abstractions and Pretrained Representations

  • Pre-Trained Language Models for Interactive Decision-Making

  • Knowledge-Aware Bayesian Deep Topic Model

  • Improving Intrinsic Exploration with Language Abstractions

  • Improving Policy Learning via Language Dynamics Distillation

  • Meta-Complementing the Semantics of Short Texts in Neural Topic Models

  • Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset

  • BigBio: A Framework for Data-Centric Biomedical Natural Language Processing
