Language-Model (16)
- [Paper Review] DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
- [Paper Review] Qwen 2.5
- [Paper Review] Llama 3
- [Paper Review] Llama 2: Open Foundation and Fine-Tuned Chat Models
- [Paper Review] QLoRA: Efficient Finetuning of Quantized LLMs
- [Paper Review] Alpaca: A Strong, Replicable Instruction-Following Model
- [Paper Review] LLaMA: Open and Efficient Foundation Language Models
- [Paper Review] LoRA: Low-Rank Adaptation of Large Language Models
- [Paper Review] GPT-3: Language Models are Few-Shot Learners
- Comparing the BERT, ELMo, and GPT-2 Models
- [Paper Review] GPT-2: Language Models are Unsupervised Multitask Learners
- [Paper Review] ELMo: Deep contextualized word representations
- [Paper Review] Attention is All You Need
- RNNs in Machine Translation
- Language Models: The Evolution from n-grams to RNNs
- [Paper Review] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding