
Scaling Relationship on Learning Mathematical Reasoning with Large Language Models

In-Context Learning (ICL)

Finding 1: Pre-training loss

Pre-training loss is approximately negatively linearly correlated with SFT and ICL accuracy within a given interval, which makes it a better performance indicator than pre-trained model size or pre-trained token count.
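Read as a formula, this is a hedged sketch: within the studied interval, accuracy is modeled as an affine function of pre-training loss, where $\alpha$ and $\beta$ are hypothetical fitted constants (not values from the paper) and $\alpha < 0$ so that lower loss means higher accuracy:

$$\mathrm{Acc}_{\mathrm{SFT/ICL}} \;\approx\; \alpha \cdot L_{\mathrm{pretrain}} + \beta, \qquad \alpha < 0$$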

The core idea for improving LLMs' mathematical reasoning ability is to aggregate diverse sampled reasoning paths, either during fine-tuning or at inference time, as sketched below.
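A minimal sketch of this idea, assuming a hypothetical `generate` callable that samples one chain-of-thought answer for a question; `extract_final_answer` is a simplified helper, not the paper's code:

```python
import re
from typing import Callable, List

def extract_final_answer(path: str) -> str:
    """Take the last number in the generated text as the final answer (simplified)."""
    nums = re.findall(r"-?\d+(?:\.\d+)?", path)
    return nums[-1] if nums else ""

def sample_reasoning_paths(
    generate: Callable[[str], str],   # hypothetical: question -> one sampled chain of thought
    question: str,
    gold_answer: str,
    k: int = 100,
) -> List[str]:
    """Sample k reasoning paths and keep only those whose final answer is correct
    (rejection sampling); the kept paths can be aggregated into an SFT/RFT dataset
    or majority-voted at inference time."""
    kept = []
    for _ in range(k):
        path = generate(question)
        if extract_final_answer(path) == gold_answer:
            kept.append(path)
    return kept
```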

With k held fixed, after filtering by the "select reasoning paths" algorithm, a smaller model can produce more distinct reasoning paths than a larger one, because larger models tend to overfit.
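The "select reasoning paths" filter can be read as deduplicating the correct samples by the list of equations they contain, so only distinct solutions are counted. A minimal sketch under that assumption (the equation regex and helper names are illustrative, not the paper's implementation):

```python
import re
from typing import Dict, List, Tuple

def equation_signature(path: str) -> Tuple[str, ...]:
    """Represent a reasoning path by the equations it writes out, e.g. '3 + 4 = 7'."""
    return tuple(re.findall(r"[\d\.\+\-\*/\(\)\s]+=\s*-?\d+(?:\.\d+)?", path))

def select_distinct_paths(correct_paths: List[str]) -> List[str]:
    """Keep one path per distinct equation list; the number of survivors is the
    count of distinct reasoning paths the model found for this question."""
    seen: Dict[Tuple[str, ...], str] = {}
    for p in correct_paths:
        sig = equation_signature(p)
        if sig not in seen:
            seen[sig] = p
    return list(seen.values())
```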
