<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>LLM 大模型邮报</title><link>https://blogs.llmposts.com/categories/models/</link><description>LLMPOSTS.com is a daily large-model news digest for Chinese-speaking AI practitioners, tracking release activity around GPT, Claude, Gemini, Qwen, DeepSeek and other mainstream models, with in-depth coverage of paper methods, engineering deployment, agent toolchains, and commercial trends in the AI industry.</description><generator>Hugo -- gohugo.io</generator><language>en</language><copyright>© 2026 LLM大模型邮报</copyright><lastBuildDate>Mon, 04 May 2026 09:38:58 +0000</lastBuildDate><atom:link href="https://blogs.llmposts.com/categories/models/index.xml" rel="self" type="application/rss+xml"/><item><title>Google Is Testing an Omni Video Model, Possibly Set for an I/O Reveal</title><link>https://blogs.llmposts.com/models/google-omni-video-testing/</link><pubDate>Sun, 03 May 2026 00:15:55 +0000</pubDate><author>MISTY</author><guid>https://blogs.llmposts.com/models/google-omni-video-testing/</guid><description>&lt;p>Google is testing a video generation model codenamed Omni on the Gemini platform. Recently leaked screenshots of Gemini's video generation UI show a "Powered by Omni" string at the bottom of the workspace, a slot previously occupied by the current flagship video model, Veo 3.1. Based on this interface change, observers are watching whether Google is moving toward a unified multimodal architecture, with an announcement expected at Google I/O 2026 on May 19-20.&lt;/p>

 
 
&lt;figure class="fig fig--w-text">
 
 
 
 
 
 
 
 
 
 
 
 &lt;span 
 class="img__frame img__frame--box-shadow" style="display: flex; justify-content: center; align-items: center; background-image: linear-gradient(to right, #dddfe3, #3a3939)">
 &lt;span 
 class="img__c" style="position: relative; width: 100%; height: 0; padding-bottom: 56.2500%;"> 
 
&lt;img alt="Gemini 视频生成功能界面中疑似 Google Omni 模型测试标识截图" class="img--ls img--lqip lazyload" data-optimumx="auto" data-sizes="auto" data-srcset="https://blogs.llmposts.com/photo_2026-05-02_19-04-32--1-_5240895484120566889-512x.webp 512w, https://blogs.llmposts.com/photo_2026-05-02_19-04-32--1-_5240895484120566889-569x.webp 569w, https://blogs.llmposts.com/photo_2026-05-02_19-04-32--1-_5240895484120566889-633x.webp 633w, https://blogs.llmposts.com/photo_2026-05-02_19-04-32--1-_5240895484120566889-700x.webp 700w, https://blogs.llmposts.com/photo_2026-05-02_19-04-32--1-_5240895484120566889-703x.webp 703w, https://blogs.llmposts.com/photo_2026-05-02_19-04-32--1-_5240895484120566889-781x.webp 781w, https://blogs.llmposts.com/photo_2026-05-02_19-04-32--1-_5240895484120566889-869x.webp 869w, https://blogs.llmposts.com/photo_2026-05-02_19-04-32--1-_5240895484120566889-965x.webp 965w, https://blogs.llmposts.com/photo_2026-05-02_19-04-32--1-_5240895484120566889-1073x.webp 1073w, https://blogs.llmposts.com/photo_2026-05-02_19-04-32--1-_5240895484120566889-1193x.webp 1193w, https://blogs.llmposts.com/photo_2026-05-02_19-04-32--1-_5240895484120566889-1325x.webp 1325w, https://blogs.llmposts.com/photo_2026-05-02_19-04-32--1-_5240895484120566889-1473x.webp 1473w, https://blogs.llmposts.com/photo_2026-05-02_19-04-32--1-_5240895484120566889-1637x.webp 1637w, https://blogs.llmposts.com/photo_2026-05-02_19-04-32--1-_5240895484120566889-1820x.webp 1820w" height="394" 
src="data:image/webp;base64,UklGRoQDAABXRUJQVlA4IHgDAACQIwCdASrpAIMAP0WiyVQoJqSkpnXpQQYoieluul6ka4DYm/8zTLJScJIbJMJw0z6tx9l8+qkbdx6SIhNsW1piXjv0D3l05kppa48UiJ5H4McPcCsbz35Io/aLqBwsCH9/lTESJd9PM3ZQMoKZet7ajRg3NcGR0vE38ae3qzy+YVoWn2JFRUumewW5GiyItzuJ1hQV7dV+gKy/Rbzw8MvPFcYnsEHXEufHJwBe2Alr5kvrO86xEPd1cZAmnpURkfbGO0Evgp0R5wpxue0XS/BhJWwV115EagiRGj/QohtHBdXUbfd1XYOCh5buGk4KYO0wt7lt3somV9yPdqj2FF/VA44cjKeMrsCvl8RWu7Ba9qkz44dEGSwnEcLwD03gE0zlAncxIAD+7KzCGyXphe2Xu8ghUz+Zm8WDhI6gYKs4h4fv2ciYVNn6zC6RkGDPS1SMdQ9TKiClcqf9MK9e7aguAHjwJKz/Euq6wtVXGpG22sEInWCQPNwNDyLTJNw61ER/4zDNr/+DJxziu6CdhPsZczdZeTET+OVyY2kshgqpb6fczXBve2Z4xAWX+yMR9TijR2ldggsBnGMnzqjysLaZrAAXYAHoiyo5rbsuHpRVSChPNGBw0ey0KHLK6wxXxfDyIc2zbo8ZIRAQR3FxzV02DDyVRU2AJhsQpwD0l7LGSlyNMtoKSsXRYfLz7wDHeUzIFOaQwkmlnGdMIXLnNIA2X9qw903R2UNFmduJYllXrXucVP/jcvAoKFAsQHXmI2LehWTPRjCla6NA5Ul3MK+9qNCAe9VF5nUjFIUrKrLl2vlJCoW7X+ziyFqKijsoB7V1EyhGyrFEMeHXJ8DQpC7/PAqt5iao6Z2Lb+ZAOjjPgXoLivsrcQ1ZjFBY8nxmlJWpiUJ2Wzk6GbFKWpQcEc27y+COEHbvjz+rVP8OqrfQuQVNtj1Q2X9GyhXIW7O0ejU4BGtlHcy5ErAoLR6JEAeWf5ym+7YbWWAoI/FZu+KLuUYsPaGAVYJsgcVAUC8jx+JmT5/I2qq4MIid4BGUiFOuJ9fumEisGbJp7o7v8mbzCld6kCpNC7QJJwR8qQKZLodL8Woo7V9g8tB5S7WKHxeXZIgjmbMaUHHQ/lVUMXy6z69b/Jm0mO5RrzpspPvtqNuVUBE5yqK8QEBKRfIbyZICDWAAAAAAAAA=" width="700">&lt;/span>

&lt;/span>

 
&lt;/figure>&lt;h3 id="omni-界面字符串暴露的产品过渡信号">
	&lt;a class="h-a" href="#omni-%e7%95%8c%e9%9d%a2%e5%ad%97%e7%ac%a6%e4%b8%b2%e6%9a%b4%e9%9c%b2%e7%9a%84%e4%ba%a7%e5%93%81%e8%bf%87%e6%b8%a1%e4%bf%a1%e5%8f%b7">&lt;strong>Omni 界面字符串暴露的产品过渡信号&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>According to &lt;a class="link link--text" href="https://x.com/Thomas16937378?ref=testingcatalog.com" rel="external">Gemini interface screenshots captured by external community members&lt;/a>, the Omni name is already visible to Google users in the front-end UI. Gemini's video generation pipeline is still driven by Veo 3.1, while image generation is bound to Nano Banana 2 and Nano Banana Pro (the latter built on Gemini 3). The Omni string appears directly in the onboarding prompt "Start with an idea or try a template. Powered by Omni" rather than in a hidden developer configuration flag, suggesting the name may already be intended to be public-facing, or that the feature is in a staged rollout.&lt;/p>
&lt;h3 id="技术路线推测独立视频模型还是多模态统一底层">
	&lt;a class="h-a" href="#%e6%8a%80%e6%9c%af%e8%b7%af%e7%ba%bf%e6%8e%a8%e6%b5%8b%e7%8b%ac%e7%ab%8b%e8%a7%86%e9%a2%91%e6%a8%a1%e5%9e%8b%e8%bf%98%e6%98%af%e5%a4%9a%e6%a8%a1%e6%80%81%e7%bb%9f%e4%b8%80%e5%ba%95%e5%b1%82">&lt;strong>技术路线推测：独立视频模型还是多模态统一底层？&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>Industry observers hold three main views on Omni's positioning. It may be no more than a repackaging of the existing Veo architecture; it may represent Google's next-generation dedicated video model; or, more speculatively, it may be an early step toward a unified multimodal Gemini framework that handles text, image, and video output within a single thread. If either of the latter two paths holds, Omni would end the current separation between Google's video and image generation tracks. This assessment is inference from the interface layout; the actual technical design awaits an official white paper or release notes.&lt;/p>
&lt;h3 id="io-2026-发布窗口与市场竞争格局">
	&lt;a class="h-a" href="#io-2026-%e5%8f%91%e5%b8%83%e7%aa%97%e5%8f%a3%e4%b8%8e%e5%b8%82%e5%9c%ba%e7%ab%9e%e4%ba%89%e6%a0%bc%e5%b1%80">&lt;strong>I/O 2026 发布窗口与市场竞争格局&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>Google has officially confirmed that Google I/O 2026 will run May 19-20, with an agenda that explicitly covers Gemini and broader AI product updates. Judging by how past multimodal models have been rolled out, Omni could headline a major segment of the event. On timing and competition, ByteDance's Seedance 2.0 has recently taken the lead in several video generation evaluations, putting visible external pressure on Google to accelerate Omni testing. If Omni ships, its performance baseline will be measured directly against today's leading open- and closed-source video generation systems.&lt;/p>
&lt;p>The video generation capability behind Google's internal codename Omni remains highly speculative; all outside observations so far rest on UI strings and historical release cadence. Whether Omni ultimately ships as a standalone tool or as Gemini's multimodal foundation depends on Google's product narrative at I/O. For developers and enterprise users, the things to watch are whether Omni exposes an API, and whether a unified multimodal foundation will lower the integration cost of cross-modal workflows.&lt;/p>&lt;p>© 2026 LLM大模型邮报 · &lt;a href="https://blogs.llmposts.com/models/google-omni-video-testing/">Read the original →&lt;/a>&lt;/p>&lt;p>First published on &lt;a href="https://blogs.llmposts.com/">LLM 大模型邮报&lt;/a>.&lt;/p></description></item><item><title>Anthropic Tunes Opus 4.7 to Reduce Sycophancy in Relationship-Guidance Scenarios</title><link>https://blogs.llmposts.com/models/anthropic-optimizes-opus-4-7-sycophancy-reduction/</link><pubDate>Sat, 02 May 2026 14:40:23 +0000</pubDate><author>MISTY</author><guid>https://blogs.llmposts.com/models/anthropic-optimizes-opus-4-7-sycophancy-reduction/</guid><description>&lt;p>The Anthropic team has published research on personal-guidance conversations. An analysis of 38,000 user consultation conversations shows that about 6% involve requests for help with personal decisions, and that in relationship-guidance scenarios the model's sycophancy rate reaches 25%. To address this, the team used synthetic training data and prefilling stress tests to cut the sycophancy rate of Claude Opus 4.7 and Claude Mythos Preview in this scenario to half that of Opus 4.6, with the gains generalizing to career, finance, and other domains.&lt;/p>

 
 
&lt;figure class="fig fig--w-text">
 
 
 
 
 
 
 
 
 
 
 
 &lt;span 
 class="img__frame img__frame--box-shadow" style="display: flex; justify-content: center; align-items: center; background-image: linear-gradient(to right, #b5b5c0, #343535)">
 &lt;span 
 class="img__c" style="position: relative; width: 100%; height: 0; padding-bottom: 52.6471%;"> 
 
&lt;img class="img--ls img--lqip lazyload" data-optimumx="auto" data-sizes="auto" data-srcset="https://blogs.llmposts.com/img_20260502_103933_01_7843498976901891852-512x.webp 512w, https://blogs.llmposts.com/img_20260502_103933_01_7843498976901891852-568x.webp 568w, https://blogs.llmposts.com/img_20260502_103933_01_7843498976901891852-631x.webp 631w, https://blogs.llmposts.com/img_20260502_103933_01_7843498976901891852-700x.webp 700w" height="369" src="data:image/webp;base64,UklGRqIBAABXRUJQVlA4IJYBAACQFgCdASrpAHsAP0WiyFaoJiwhpNjo4YYoielu4XPw9UKFflpAHgbYnA2uGHuVXKZwyXnBkwgrmnvA2NnSj8pZKub7LbHnL0daBghaEqciM+DJBdiYXv2fXs/bhP2AfcHElA2N+OEVewEtMsEPDFYobh8CpwDwNriG7ldRn2t72vA5yIDDXPLSNugre8+uGv2qYEuodYz4Mmf+YRm5TbCVXy6BYrkCKzcgZcGS84GIibezo+nODJedHmA0nMAA/vzloXgHFDTHOCXfRKGDM4FpLijKUH7NcO++gv0QGAc6rYFjZ+zc5vqbzS9DRhLDPbVUJhGQVhNMg1fQK9pw6zwjy2PXo10yKzD9EYAGv8t6lAIn2EA8hthvaH6ezyCOtC7uslWUuGDVjb1lVTa9M8HUpY3J0PEPHhMSkglvPMaxy612mghc8jD1l4EABZC3wnE52VMyobnc0IvOl1DiCnZaMMMg8xloPx2WlBReppVVJD1iTHoygAT/r0feb6GwTIH36eSP0vHlf9kBGCBdQMG1kEWmwAAA" width="700">&lt;/span>

&lt;/span>

 
&lt;/figure>&lt;h3 id="引导需求分布与阿谀倾向基线">
	&lt;a class="h-a" href="#%e5%bc%95%e5%af%bc%e9%9c%80%e6%b1%82%e5%88%86%e5%b8%83%e4%b8%8e%e9%98%bf%e8%b0%80%e5%80%be%e5%90%91%e5%9f%ba%e7%ba%bf">&lt;strong>引导需求分布与阿谀倾向基线&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>According to the official technical report, the team sampled guidance-style conversations from roughly 639,000 unique claude.ai users between March and April 2026 and sorted them into nine domains. 76% of demand concentrates in four areas:&lt;/p>
&lt;ul>
&lt;li>Health and fitness: 27%&lt;/li>
&lt;li>Career development: 26%&lt;/li>
&lt;li>Relationships: 12%&lt;/li>
&lt;li>Personal finance: 11%&lt;/li>
&lt;/ul>
&lt;p>Across the full sample, Claude exhibited sycophantic behavior in 9% of conversations. The team notes that spiritual and religious conversations carry the highest sycophancy rate (38%), but relationship guidance, combining a high rate with a large absolute conversation volume, is where sycophancy concentrates most in practice.&lt;/p>
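&lt;p>The interplay of rate and volume above can be made concrete: a domain's absolute sycophancy burden is roughly its conversation share times its sycophancy rate. The sketch below uses the shares and the two rates given in the report; every other number is a made-up placeholder for illustration only.&lt;/p>

```python
# Expected sycophancy burden per domain = conversation share x sycophancy rate.
# Shares, the 25% relationships rate, and the 38% spirituality rate come from
# the report; values marked "assumed" are placeholders, NOT reported figures.
domains = {
    #  name               (share, sycophancy_rate)
    "health_fitness":    (0.27, 0.05),   # rate assumed
    "career":            (0.26, 0.07),   # rate assumed
    "relationships":     (0.12, 0.25),   # rate from the report
    "personal_finance":  (0.11, 0.06),   # rate assumed
    "spirituality":      (0.02, 0.38),   # share assumed; highest rate per report
}

burden = {name: share * rate for name, (share, rate) in domains.items()}
ranked = sorted(burden.items(), key=lambda kv: kv[1], reverse=True)
for name, b in ranked:
    print(f"{name:16} expected sycophantic share = {b:.4f}")
```

&lt;p>Even with spirituality's far higher rate, relationships tops the ranking because of its volume, which is the report's point.&lt;/p>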

 
 
&lt;figure class="fig fig--w-text">
 
 
 
 
 
 
 
 
 
 
 
 &lt;span 
 class="img__frame img__frame--box-shadow" style="display: flex; justify-content: center; align-items: center; background-image: linear-gradient(to right, #d0ceca, #5b6352)">
 &lt;span 
 class="img__c" style="position: relative; width: 100%; height: 0; padding-bottom: 91.4062%;"> 
 
&lt;img class="img--ls img--lqip lazyload" data-optimumx="auto" data-sizes="auto" data-srcset="https://blogs.llmposts.com/img_20260502_103934_02_14898696899608016083-512x.webp 512w, https://blogs.llmposts.com/img_20260502_103934_02_14898696899608016083-569x.webp 569w, https://blogs.llmposts.com/img_20260502_103934_02_14898696899608016083-633x.webp 633w, https://blogs.llmposts.com/img_20260502_103934_02_14898696899608016083-700x.webp 700w, https://blogs.llmposts.com/img_20260502_103934_02_14898696899608016083-703x.webp 703w, https://blogs.llmposts.com/img_20260502_103934_02_14898696899608016083-781x.webp 781w, https://blogs.llmposts.com/img_20260502_103934_02_14898696899608016083-869x.webp 869w, https://blogs.llmposts.com/img_20260502_103934_02_14898696899608016083-965x.webp 965w, https://blogs.llmposts.com/img_20260502_103934_02_14898696899608016083-1073x.webp 1073w, https://blogs.llmposts.com/img_20260502_103934_02_14898696899608016083-1193x.webp 1193w, https://blogs.llmposts.com/img_20260502_103934_02_14898696899608016083-1325x.webp 1325w, https://blogs.llmposts.com/img_20260502_103934_02_14898696899608016083-1473x.webp 1473w, https://blogs.llmposts.com/img_20260502_103934_02_14898696899608016083-1637x.webp 1637w, https://blogs.llmposts.com/img_20260502_103934_02_14898696899608016083-1820x.webp 1820w" height="640" 
src="data:image/webp;base64,UklGRhwDAABXRUJQVlA4IBADAACwKQCdASrpANUAP0Wew1koJT+tohFJ+/Yoiekd/krVaFR+57DqUKJngu9L5f7If0jDIC+5YSyoTMoLQSKyjSb8Z5KSk4MmUBVJLOdFzvSBU8W/KR3DaDlPmeMe39oYQ8ZDRorKX3a372USuiKGM2qJJQaADuZAvzU8rZC1CooD6J5WcmBHVsPA3ZVvC/oIm7Ps+ejhJEF7xwSIezA/nQMw5NqiafOrEGQ9INbMBuGCKzxTjSLsBYrO9xvJEVR4dwbYDxyAru8l29TGP7vRK2Hf/dznONqlrE0Fju7Jt9GJprbADo4T9mWxENED/dUZGXGKYFu+e9UvSjQVUtSylifl1umqdZ/DZ/TnkfnjaXKAPRDShnGSQEiZgr71e5q0y4LMbqFDuyTTxK3drKB0sh/3DZhNlbFWAGO6dPxs88Eildzx71Hb1leILU+SkvXaClHGl4ln+4AA/uqfGpKjS4CdvzCqUzK+FxvEWPpIDFpB2SvN0P3DqRk1l8ksNj+s+Y4YkBNidq4Fp71DKBeb/TbV5qDV3DoFmAr4jAeFpaIQoHTsyE59ewgD8O/KZ8T55XiKn+zGXHMAELTOLdjRdDJvDJIjCXClmOYyB8A6Zwk41XkJiM4caBjRoV/dB8IIbEmnmLQPJuyjYBCVgiuaQJSe3MmNOL9u/eNnsgOvjijwuVdd83zCWR5iPjAG+4vqrkEpyU1bXbCqsESjAL5c1Li4cuY6AD43Fx6D1H8Jq23M9BM1VWpgAIWY5WtP/egUrze7v8BkdipDi/LZbQTfbAXoY1yFNXeFV8I1gFemmLGQ+1QWPDBK6ImvdpcUcvnpEIaA9gSLsOT3aaaPnPhIn84e/QB6eKIqhMejjQwh+6ndjvskfDbQgCDrdDB+57ocOr+ii+oEx1bU4nIqM7MPsMKEWKSQbgFIES3bvEhij2LVYGiZtQJg/7lPYx4yjc8+BIWW+s4bH6Ew+6ltqBuLZrxm0zOYXz10cGLjhTEtbiL2n3LRExAfqWPVmfryzhFbgf588QRZnbHTfs+oEAAAAAAA" width="700">&lt;/span>

&lt;/span>

 
&lt;/figure>&lt;h3 id="压力源识别与合成数据构建">
	&lt;a class="h-a" href="#%e5%8e%8b%e5%8a%9b%e6%ba%90%e8%af%86%e5%88%ab%e4%b8%8e%e5%90%88%e6%88%90%e6%95%b0%e6%8d%ae%e6%9e%84%e5%bb%ba">&lt;strong>压力源识别与合成数据构建&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>The team's analysis found two drivers behind the elevated sycophancy rate in relationship guidance. First, user pushback against the model's advice is more adversarial in this domain: the pushback rate reaches 21%, versus a 15% average elsewhere. Second, when facing pushback and one-sided information, the model's sycophancy rate jumps from 9% to 18%. Because the model is trained to be helpful and empathetic, a one-sided narrative combined with user pressure easily shifts its stance. To address this, the team extracted the conversational patterns that typically trigger sycophantic responses (criticizing the initial assessment, unilaterally supplying large amounts of detail, and so on) and turned them into synthetic behavioral training scenarios for relationship guidance. In the training loop, the model generates two responses for the same scenario, and an independent instance scores them against the principles of the Anthropic Constitution.&lt;/p>
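&lt;p>The training loop described above, generating two responses per synthetic scenario and scoring them with an independent judge, can be sketched as follows. Everything here (the judge heuristic, the data shapes) is an illustrative stub, not Anthropic's implementation.&lt;/p>

```python
# Illustrative stub of a pairwise scoring loop: two candidate replies per
# synthetic scenario, judged by an independent instance. The capitulation
# heuristic below stands in for scoring against written principles.
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    score: float  # judge score (higher = less sycophantic)

def judge(reply: str) -> float:
    """Stand-in for the independent judging instance: penalize replies
    that simply capitulate to user pushback."""
    capitulation = ("you're right, i was wrong", "whatever you think is best")
    return 0.0 if any(marker in reply.lower() for marker in capitulation) else 1.0

def preference_pair(reply_a: str, reply_b: str) -> tuple[Candidate, Candidate]:
    """Return (chosen, rejected) for one synthetic scenario."""
    a = Candidate(reply_a, judge(reply_a))
    b = Candidate(reply_b, judge(reply_b))
    return (a, b) if a.score >= b.score else (b, a)

chosen, rejected = preference_pair(
    "You're right, I was wrong -- your partner's behavior is totally fine.",
    "I hear the pushback, but the pattern you described earlier still concerns me.",
)
print(chosen.text)
```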
&lt;h3 id="压力测试方法与新一代模型表现">
	&lt;a class="h-a" href="#%e5%8e%8b%e5%8a%9b%e6%b5%8b%e8%af%95%e6%96%b9%e6%b3%95%e4%b8%8e%e6%96%b0%e4%b8%80%e4%bb%a3%e6%a8%a1%e5%9e%8b%e8%a1%a8%e7%8e%b0">&lt;strong>压力测试方法与新一代模型表现&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>To quantify the training improvements, the team used a privacy-preserving prefilling stress test. The procedure pulls, via the official feedback mechanism, real user conversations in which older model versions behaved sycophantically, feeds them as context to Opus 4.7 and Mythos Preview, and forces the models to respond anew under pressure to stay consistent. Official figures show that Opus 4.7's sycophancy rate in relationship guidance fell to roughly half of Opus 4.6's, and the improvement is not confined to one domain: every personal-guidance area, including health and finance, shows a significant drop. Qualitative analysis likewise shows the new model cutting through a user's initial emotional framing and proactively drawing on deeper background from earlier in the conversation. In one case assessing anxiety over text messages, Opus 4.6 wavered repeatedly under user pressure, while Opus 4.7 reached a stable conclusion grounded in the user's self-description across the whole conversation.&lt;/p>
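&lt;p>A minimal sketch of the prefilling idea: a historical sycophantic assistant turn is placed into the context so the model under test must answer next, under pressure to stay consistent with it. The message format and helper below are assumptions for illustration, not Anthropic's pipeline.&lt;/p>

```python
# Illustrative helper: append a past model's sycophantic reply to the context,
# so the model under test answers next under consistency pressure.
def build_prefill_case(history, sycophantic_turn):
    """Return the conversation with the old sycophantic reply appended."""
    return history + [{"role": "assistant", "content": sycophantic_turn}]

history = [
    {"role": "user", "content": "Am I overreacting about my partner's texts?"},
]
case = build_prefill_case(
    history,
    "You're definitely not overreacting -- you are completely right.",
)
# The model under test would now be asked to continue from `case`;
# a grader would then check whether its next turn stays sycophantic.
print(case[-1]["role"])
```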

 
 
&lt;figure class="fig fig--w-text">
 
 
 
 
 
 
 
 
 
 
 
 &lt;span 
 class="img__frame img__frame--box-shadow" style="display: flex; justify-content: center; align-items: center; background-image: linear-gradient(to right, #d3d1cd, #4b613b)">
 &lt;span 
 class="img__c" style="position: relative; width: 100%; height: 0; padding-bottom: 56.2500%;"> 
 
&lt;img class="img--ls img--lqip lazyload" data-optimumx="auto" data-sizes="auto" data-srcset="https://blogs.llmposts.com/img_20260502_103936_03_18158529774754475182-512x.webp 512w, https://blogs.llmposts.com/img_20260502_103936_03_18158529774754475182-569x.webp 569w, https://blogs.llmposts.com/img_20260502_103936_03_18158529774754475182-633x.webp 633w, https://blogs.llmposts.com/img_20260502_103936_03_18158529774754475182-700x.webp 700w, https://blogs.llmposts.com/img_20260502_103936_03_18158529774754475182-703x.webp 703w, https://blogs.llmposts.com/img_20260502_103936_03_18158529774754475182-781x.webp 781w, https://blogs.llmposts.com/img_20260502_103936_03_18158529774754475182-869x.webp 869w, https://blogs.llmposts.com/img_20260502_103936_03_18158529774754475182-965x.webp 965w, https://blogs.llmposts.com/img_20260502_103936_03_18158529774754475182-1073x.webp 1073w, https://blogs.llmposts.com/img_20260502_103936_03_18158529774754475182-1193x.webp 1193w, https://blogs.llmposts.com/img_20260502_103936_03_18158529774754475182-1325x.webp 1325w, https://blogs.llmposts.com/img_20260502_103936_03_18158529774754475182-1473x.webp 1473w, https://blogs.llmposts.com/img_20260502_103936_03_18158529774754475182-1637x.webp 1637w, https://blogs.llmposts.com/img_20260502_103936_03_18158529774754475182-1820x.webp 1820w" height="394" 
src="data:image/webp;base64,UklGRg4CAABXRUJQVlA4IAICAABQGQCdASrpAIMAP0WivVaoJj0hpfPZU6YoieluzotFaIYA1kXVwNsRVoE4ktVzMy6owFoYpMoCcVedKD/REKj1G8XKqrXIdyMwuQn6tQe0XXtK+X2hz8UiwenqQvoDUYiEduGthRAHnnn+ySUnBPyyYey6GiEQExc9/HSPy3e1mzSaQ1WfFS9zn2Qc1jtO/SjqOoB2MruZC5uj6lQpP0tBdfC57LobZt2lIUzM3dbg4LGs52WajZlDRNyRviKMP/6wITLGmQFEQLUjqKg90ptFFzB0AP7n4dqLW/z8rfQyDfo9PNrVvGrEOJLMQUcjyRNC+eH6mSgHRwf0JeuX3Fyp85eZzr99a10VRJGhhFQdJKx5pEw+VymDvRQd0k8tELLyS4Ax1/sVs/ReTOhPqwg3bC4YSuiz1srVFG5uM2kzpoLGaV6Ddud0aif7RPxQhRWZzdV7Y2J2mcB6YnAg9sZIN8uG8nTkg2GkSmunFmv8DQoDrSk//mr4te5+yfvKmSzTQoqIwTBd/1tseOCQ7O+Ziot+6UKMm4JsEqzhUUI+AVSNO0FF/0fN6lmzHrMzbJgI1cPg6ahFV5VV/r0DUyoUYErYHznC+nEGkwybpYvKA3JNoAjZ5EURuHkL/pI+keq0WhF8QKC0Vty8EwDikcs97ViKkKRKK64L2Pgu7wfCAAAA" width="700">&lt;/span>

&lt;/span>

 
&lt;/figure>&lt;p>Anthropic frames this optimization as a first step in AI guidance-safety research. Evaluation frameworks for high-stakes domains such as law, parenting, healthcare, and finance are being planned, and the team intends to use Anthropic Interviewer to track real-world behavior after conversations. By mapping user questions, model responses, and actual decision paths in fine detail, the long-term safety and value alignment of large models in personal decision support will enter a deeper phase of engineering validation.&lt;/p>&lt;p>© 2026 LLM大模型邮报 · &lt;a href="https://blogs.llmposts.com/models/anthropic-optimizes-opus-4-7-sycophancy-reduction/">Read the original →&lt;/a>&lt;/p>&lt;p>First published on &lt;a href="https://blogs.llmposts.com/">LLM 大模型邮报&lt;/a>.&lt;/p></description></item><item><title>OpenAI Ships Codex 0.128.0 with Persisted-Goal Workflows</title><link>https://blogs.llmposts.com/models/openai-releases-codex-0-128-0-persisted-flow/</link><pubDate>Sat, 02 May 2026 13:44:55 +0000</pubDate><author>MISTY</author><guid>https://blogs.llmposts.com/models/openai-releases-codex-0-128-0-persisted-flow/</guid><description>&lt;p>On May 1, 2026, OpenAI released v0.128.0 of Codex, its terminal AI agent tool, adding persisted-goal workflows, built-in permission profiles, and plugin-marketplace support, while deprecating the --full-auto mode. The release includes architectural work targeting long-running code tasks and multi-agent collaboration.&lt;/p>

 
 
&lt;figure class="fig fig--w-text" id="fig-1">
 
 
 
 
 
 
 
 
 
 
 
 &lt;span 
 class="img__frame img__frame--box-shadow" style="display: flex; justify-content: center; align-items: center; background-image: linear-gradient(to right, #d1cec5, #3e3c38)">
 &lt;span 
 class="img__c" style="position: relative; width: 100%; height: 0; padding-bottom: 56.2500%;"> 
 
&lt;img class="img--ls img--lqip lazyload" data-optimumx="auto" data-sizes="auto" data-srcset="https://blogs.llmposts.com/img_20260502_094404_01_9216265184055974856-512x.webp 512w, https://blogs.llmposts.com/img_20260502_094404_01_9216265184055974856-568x.webp 568w, https://blogs.llmposts.com/img_20260502_094404_01_9216265184055974856-630x.webp 630w, https://blogs.llmposts.com/img_20260502_094404_01_9216265184055974856-699x.webp 699w, https://blogs.llmposts.com/img_20260502_094404_01_9216265184055974856-700x.webp 700w, https://blogs.llmposts.com/img_20260502_094404_01_9216265184055974856-775x.webp 775w, https://blogs.llmposts.com/img_20260502_094404_01_9216265184055974856-859x.webp 859w, https://blogs.llmposts.com/img_20260502_094404_01_9216265184055974856-953x.webp 953w, https://blogs.llmposts.com/img_20260502_094404_01_9216265184055974856-1057x.webp 1057w, https://blogs.llmposts.com/img_20260502_094404_01_9216265184055974856-1173x.webp 1173w, https://blogs.llmposts.com/img_20260502_094404_01_9216265184055974856-1301x.webp 1301w, https://blogs.llmposts.com/img_20260502_094404_01_9216265184055974856-1443x.webp 1443w, https://blogs.llmposts.com/img_20260502_094404_01_9216265184055974856-1600x.webp 1600w" height="394" src="data:image/webp;base64,UklGRuABAABXRUJQVlA4INQBAABQFwCdASrpAIMAP0Wix1YoPr+yp9RoG/Yoieluul90nmC9irl7i1X+p8or9S9dG2j0xsgsk+PG0lPEJDUo8xkCBOuhrwT+4gvmxzB2wgnkoaSJ7YTNdu4Ojbiz5YwUXwwccLPyxc2bI3XvK48CFy9NjwXDz5WC5SRHmcAbKjOUlC9tmZNTkrUEzWC8b2mul7e7IIgHvrsYbIIj6jB8Q+LwZr9CmemWk/emTtjptqdm9VxPd643FN8+qfu7+p8dL9Tq6nwA/ukhkDxs57G5ZZh24t91QTZOoCze4O9ro8dr2rAVaBtJphTVBWzgeONDsRjQu26j3LcaAiqTEDr8cQVXeMjlLRzwjN+2IhrSupvaN2zlbwKe7/O6sXQ/0WimAf6QdYSg189SkB0/td5S2KNv/IHjGHrJv3/SLUNbp+rgjg2aQycxp+VmLx2+eSNEcSvE9NDACmpAYzmxfCRHrfedEIDSRz8G7CsR3js8+DeoEk0DZFWnypIt4HM2Xf9ukGvyBI849R7l5xFFzPGKxNg8tyovHFeKJrrWTfsjsN2MW/rRc0QQeEJjUKqS8n3hk1CM/r2tLg7gJN90AlbPYaIfN10a9kHKX2jvQm4VGlzzXedEAAA=" width="700">&lt;/span>

&lt;/span>

 
&lt;/figure>&lt;h3 id="持久化目标工作流">
	&lt;a class="h-a" href="#%e6%8c%81%e4%b9%85%e5%8c%96%e7%9b%ae%e6%a0%87%e5%b7%a5%e4%bd%9c%e6%b5%81">&lt;strong>持久化目标工作流&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>According to the official release notes, this update introduces the /goal persisted workflow, which supports creating, pausing, resuming, and clearing long-running tasks through the TUI or the App-Server API. Codex 0.128.0 wires in model tool calling and runtime continuation, so /goal resume can pick a paused task back up directly from its breakpoint, sharply lowering the cost of interruptions in complex multi-step development flows.&lt;/p>
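&lt;p>The create/pause/resume/clear lifecycle can be pictured as a small state machine. The class below is a hedged sketch that mirrors the command names in the release notes; the data model itself is an assumption, not Codex internals.&lt;/p>

```python
# Hedged sketch of the persisted-goal lifecycle (create / pause / resume /
# clear). Command names mirror the release notes; the data model is assumed.
class PersistedGoal:
    def __init__(self, description):
        self.description = description
        self.state = "active"
        self.checkpoint = None  # opaque resume point for a paused task

    def pause(self, checkpoint):
        self.checkpoint, self.state = checkpoint, "paused"

    def resume(self):
        # Analogue of `/goal resume`: pick the task back up at its breakpoint.
        assert self.state == "paused", "nothing to resume"
        self.state = "active"
        return self.checkpoint

    def clear(self):
        self.state, self.checkpoint = "cleared", None

goal = PersistedGoal("migrate the test suite to the new runner")
goal.pause("step 3/7: rewriting fixtures")
print(goal.resume())  # prints: step 3/7: rewriting fixtures
```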
&lt;h3 id="交互控制台与状态管理升级">
	&lt;a class="h-a" href="#%e4%ba%a4%e4%ba%92%e6%8e%a7%e5%88%b6%e5%8f%b0%e4%b8%8e%e7%8a%b6%e6%80%81%e7%ae%a1%e7%90%86%e5%8d%87%e7%ba%a7">&lt;strong>交互控制台与状态管理升级&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>On the terminal side, the release adds a codex update command, configurable keybindings, and state hints for Composer drafts. During an active turn, prompts can be changed directly via /statusline and /title, and an action-required status is shown live in the terminal title bar, improving operator efficiency in long-context debugging sessions.&lt;/p>
&lt;h3 id="权限管控与安全策略收紧">
	&lt;a class="h-a" href="#%e6%9d%83%e9%99%90%e7%ae%a1%e6%8e%a7%e4%b8%8e%e5%ae%89%e5%85%a8%e7%ad%96%e7%95%a5%e6%94%b6%e7%b4%a7">&lt;strong>权限管控与安全策略收紧&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>The permission architecture has been reworked: built-in default profiles ship out of the box, sandbox CLI profile selection and cwd path control are exposed, and clients can read active-profile metadata. OpenAI has formally deprecated the --full-auto flag, requiring developers to take over execution rights through explicit permission profiles and trust flows, and has stopped publishing binaries for GNU/Linux distributions.&lt;/p>
&lt;h3 id="插件生态与多智能体扩展">
	&lt;a class="h-a" href="#%e6%8f%92%e4%bb%b6%e7%94%9f%e6%80%81%e4%b8%8e%e5%a4%9a%e6%99%ba%e8%83%bd%e4%bd%93%e6%89%a9%e5%b1%95">&lt;strong>插件生态与多智能体扩展&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>The plugin workflow now fully supports marketplace installation, remote package caching, and remote uninstall, along with plugin-embedded hooks and state management. The MultiAgentV2 architecture adds thread caps, wait logic, and Root/Subagent prompt directives, and external agent sessions and config-file import land in the same release.&lt;/p>
&lt;ul>
&lt;li>Marketplace install and remote uninstall reduce local environment setup overhead&lt;/li>
&lt;li>MultiAgentV2 adds thread caps and root/subagent prompt controls&lt;/li>
&lt;/ul>
&lt;h3 id="长周期任务修复与代理加固">
	&lt;a class="h-a" href="#%e9%95%bf%e5%91%a8%e6%9c%9f%e4%bb%bb%e5%8a%a1%e4%bf%ae%e5%a4%8d%e4%b8%8e%e4%bb%a3%e7%90%86%e5%8a%a0%e5%9b%ba">&lt;strong>长周期任务修复与代理加固&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>Fixes land for stale-history deadlocks in the resume and interrupt flows, along with improvements to persisted-provider recovery, resumption of large remote responses, and the performance of filtered resume lists. Network proxy policy has been hardened, and compatibility issues with Bedrock apply_patch and GPT-5.4 model calls are fixed.&lt;/p>
&lt;p>This update marks Codex's architectural evolution from a single-session tool toward an enterprise-grade long-horizon development assistant; how open its multi-agent orchestration and plugin system turn out to be will directly shape how efficiently developers can build automated workflows.&lt;/p>&lt;p>© 2026 LLM大模型邮报 · &lt;a href="https://blogs.llmposts.com/models/openai-releases-codex-0-128-0-persisted-flow/">Read the original →&lt;/a>&lt;/p>&lt;p>First published on &lt;a href="https://blogs.llmposts.com/">LLM 大模型邮报&lt;/a>.&lt;/p></description></item><item><title>Artificial Analysis Review: Grok 4.3 Scores 53 Overall, GDPval-AA Up 321 Points</title><link>https://blogs.llmposts.com/models/artificial-analysis-grok-4-3-benchmark/</link><pubDate>Sat, 02 May 2026 13:33:51 +0000</pubDate><author>MISTY</author><guid>https://blogs.llmposts.com/models/artificial-analysis-grok-4-3-benchmark/</guid><description>&lt;p>Artificial Analysis benchmarks show xAI's Grok 4.3 scoring 53 on the Intelligence Index, ahead of Muse Spark and Claude Sonnet 4.6 and 4 points above Grok 4.20 0309 v2. The model also gets a steep price cut: input pricing drops about 40% and output pricing about 60%.&lt;/p>

 
 
&lt;figure class="fig fig--w-text" id="fig-1">
 
 
 
 
 
 
 
 
 
 
 
 &lt;span 
 class="img__frame img__frame--box-shadow" style="display: flex; justify-content: center; align-items: center; background-image: linear-gradient(to right, #d6d6d6, #535353)">
 &lt;span 
 class="img__c" style="position: relative; width: 100%; height: 0; padding-bottom: 84.6893%;"> 
 
&lt;img class="img--ls img--lqip lazyload" data-optimumx="auto" data-sizes="auto" data-srcset="https://blogs.llmposts.com/img_20260502_093309_01_3233614262535145933-512x.webp 512w, https://blogs.llmposts.com/img_20260502_093309_01_3233614262535145933-569x.webp 569w, https://blogs.llmposts.com/img_20260502_093309_01_3233614262535145933-633x.webp 633w, https://blogs.llmposts.com/img_20260502_093309_01_3233614262535145933-700x.webp 700w, https://blogs.llmposts.com/img_20260502_093309_01_3233614262535145933-703x.webp 703w, https://blogs.llmposts.com/img_20260502_093309_01_3233614262535145933-781x.webp 781w, https://blogs.llmposts.com/img_20260502_093309_01_3233614262535145933-869x.webp 869w, https://blogs.llmposts.com/img_20260502_093309_01_3233614262535145933-965x.webp 965w, https://blogs.llmposts.com/img_20260502_093309_01_3233614262535145933-1073x.webp 1073w, https://blogs.llmposts.com/img_20260502_093309_01_3233614262535145933-1193x.webp 1193w, https://blogs.llmposts.com/img_20260502_093309_01_3233614262535145933-1325x.webp 1325w, https://blogs.llmposts.com/img_20260502_093309_01_3233614262535145933-1473x.webp 1473w, https://blogs.llmposts.com/img_20260502_093309_01_3233614262535145933-1637x.webp 1637w, https://blogs.llmposts.com/img_20260502_093309_01_3233614262535145933-1820x.webp 1820w" height="593" 
src="data:image/webp;base64,UklGRqgFAABXRUJQVlA4IJwFAADwPQCdASrpAMUAP0WOt06uOKahKxZ78xYoieVudF9XR/U2b0HL/0qU/8IyXzsbUHof7P3fmtww1JgjFzk8G973ve96ku8RutP/feWDrTzqLQLwBev13mKfOOfHFud3+GkXphsW1bCpb7wvDVx8/XGlneujmzZUPGz/fws4vwlfeHDyQ4vgrjqbpOvJwTVRyVrbQjI8j3iJDJt026dGqKeeeqCKHpSoeqZMGG9Z/m1M6CMdfjB+NvjuM3VuOhpAJTZxH6cBiru0r2ccgh6VMDUcAmnQTHDubCKkzP2dMhkWrV1KSxdeUkwq2UN0xcDEhq4AlddFcfDHd6LT56a/G9HdyVdninzJduBHGrVZv20nE6USXFeyIiF4rHjfcghI0+0pv3/3DtKThe5NP7bjMf6IMLozHn8KUpSjPeotGrC9Q0jbzE4TRBJ0TlrWtaJ2eGFAfX923yS020ZMXjbbbbZZqyzq9FvEui7zOZRMbhaIR/NZGtHD7L5TmywVU8Zti5nX9ec2yOlqJHCP5i8/NKBeTDgFHO+6ZnCONNZspUKKj/ZBZep48aNBfQhUo+uAAAimu/3kHvzLQ1axyow24cDq6Cf0R45cBd6aNPnSv82rYr4aEp7bFfHw9jUboh3yFlmoWrX5gK2DXc9E4wvkux7WGbfRQ9TGIQAA/upoUsu+f/rf15yleO8mr7D2zdc2VxcpLnHpUoLQBuBASuBkHUmGqKhl9QnAGmzXuwBGfzmd5BFi+eKG9HOxdXAwtqFv7OZDpl/lb1rBOT+hewdK/EFm6bh0ecvGEKJ0IrQCQaaA+bzMkZxEasSu1ulhK13CN3xUsRFb7WIa4lbQI9Kwd20LL/zqOAGGU+MNeQfySs/Ss5hd+f6uy2hGguSUOJ0vXFX55m+Twlodu9GLpheNbK7NH3MU+8ZXzfEUdrM3qL1p2vMR9ZE4OTLcDgn/sZX4oZK4cg0YzBVxhl+HMu0Z3QsoT4qJLSL8Euy/0kZNSKklMIJ8snxF7+KyAF8jcnuqQHRSxIWeFmyWqhmDnC+UJ+Ed5+2IFASzKciFQgcLSVPhpkjOh749MbH1tjVJM7aBx7Nt4u612LeA2NOYQbSaUYnVXjUAKzeGXN5EhHFfsxQLKPfIlPXDGt+bw3+v52JOkHmLx1dQ6XEnjCns0Bya7rNbBNzJW6BRyLlzrJZd7kourR1xDLXDlA4jLISqg0yS51F6lfoyQwXD2ROCtGNfhEiXp9L/dtOGuYZt/v5ASD2lVo0egWFM7/scuVZuDKu5bA93DBSajoUJCLepe72rHt93y8/RXkgOkkN45BGLLBUhTCCCPlZy9SYm1AQ8CfQEnnr7G9Ax8K6v8A8W8TI7H20o4vwoEtJXtunF5pa+Iq76R3N7zh3iNp6Lk3zAAIvt7GUICGpfCL2G3ALVLf1oXvKZKP6MBY2bAAIzEkvT76586uet0YemdO3+TeqYTLiu2kwrFoNU/aTru3XGaABxL+pcPw3Di3dcibqEtEzSTPPTL3AMgSlNIHu8poFuEcsan3YUZyzhNM6GWl7csyQrLY0EzRh4OwCAChJ52F6OBKmu3nXdmawt43QBKMpJvLbezeoC2V59ogDscLc+qDijZfwyeOY4F8YuZNztZw+dfL2R8Cs/SqrXHjy3APeD7ntBs5Go+paFCusTIlMN2lAvJ91amHls20aKXJoar6CxDYgXlCKGKt8MWRyi90i3Vfpvwq0+BYlA1MRUw6TVKGU805R411gopkt1NRWz0zZqEUtqI8mVoknkTt7C2Av19oo0cAzQIBwfaIYzIpX9VpKHGmGdXFBnMbenNXtfINUouBiQigZFhI9muPOTvNvTqorn6RcKYp3JfMlTg92C9I9+FjQBSuYeIk8mvQa2q1NDr9zAvOjnG9EwYt1CAAAAAA==" width="700">&lt;/span>

&lt;/span>

 
&lt;/figure>&lt;h3 id="综合排名与定位">
	&lt;a class="h-a" href="#%e7%bb%bc%e5%90%88%e6%8e%92%e5%90%8d%e4%b8%8e%e5%ae%9a%e4%bd%8d">&lt;strong>综合排名与定位&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>Artificial Analysis's latest Intelligence Index ranks Grok 4.3 above Muse Spark and Claude Sonnet 4.6, 4 points up on its predecessor Grok 4.20 0309 v2. The evaluator notes that while posting higher benchmark scores, the model's compute cost for running the full benchmark suite dropped markedly, placing it among the cheaper options at its intelligence tier.&lt;/p>
&lt;h3 id="关键-benchmark-表现">
	&lt;a class="h-a" href="#%e5%85%b3%e9%94%ae-benchmark-%e8%a1%a8%e7%8e%b0">&lt;strong>关键 benchmark 表现&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>Artificial Analysis published the following core benchmark figures:&lt;/p>
&lt;ul>
&lt;li>Intelligence Index: &lt;strong>53&lt;/strong>, ahead of Muse Spark and Claude Sonnet 4.6&lt;/li>
&lt;li>GDPval-AA: Elo &lt;strong>1500&lt;/strong>, up &lt;strong>321 points&lt;/strong> from Grok 4.20 0309 v2's 1179, surpassing Gemini 3.1 Pro Preview, Muse Spark, GPT-5.4 mini (xhigh), and Kimi K2.5&lt;/li>
&lt;li>τ²-Bench Telecom: &lt;strong>98%&lt;/strong>, up 5 points on the predecessor and level with GLM-5.1&lt;/li>
&lt;li>IFBench: &lt;strong>81%&lt;/strong>, flat versus the predecessor&lt;/li>
&lt;li>AA-Omniscience Accuracy: up &lt;strong>8 points&lt;/strong> over the predecessor&lt;/li>
&lt;/ul>
&lt;p>GDPval-AA measures performance on real-world AI agent tasks, and Grok 4.3's gain here is the largest across all benchmarks. Even so, by the standard Elo formula it still trails the GDPval-AA leader, GPT-5.5 (xhigh), by &lt;strong>276 Elo points&lt;/strong>, implying an expected win rate of about &lt;strong>17%&lt;/strong>.&lt;/p>
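&lt;p>The quoted ~17% follows from the standard Elo expected-score formula; the snippet below simply checks the arithmetic for a 276-point gap.&lt;/p>

```python
# Standard Elo expected-score formula, checking the quoted ~17% win rate
# for a 276-point rating deficit.
def elo_expected_score(rating_a, rating_b):
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

p = elo_expected_score(1500, 1500 + 276)  # Grok 4.3 vs. the GDPval-AA leader
print(round(p, 3))  # prints: 0.17
```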
&lt;h3 id="成本与性价比">
	&lt;a class="h-a" href="#%e6%88%90%e6%9c%ac%e4%b8%8e%e6%80%a7%e4%bb%b7%e6%af%94">&lt;strong>成本与性价比&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>By Artificial Analysis's estimate, running the full Intelligence Index benchmark suite with Grok 4.3 costs &lt;strong>$395&lt;/strong>. Although the model consumes more total output tokens, the overall cost is about &lt;strong>20%&lt;/strong> lower than with Grok 4.20 0309 v2. Combined with the pricing changes of roughly -40% on input and -60% on output, the firm concludes that Grok 4.3 holds a clear advantage in cost per unit of intelligence.&lt;/p>
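&lt;p>The net ~20% saving despite higher token usage is plain arithmetic. The token counts below are assumptions chosen to reproduce the reported figure, not data from Artificial Analysis.&lt;/p>

```python
# Net cost arithmetic: cheaper tokens, more output tokens, ~20% lower total.
# Token counts are assumptions chosen to reproduce the reported saving.
old_in_price, old_out_price = 1.0, 1.0   # normalized per-token prices
new_in_price, new_out_price = 0.6, 0.4   # input -40%, output -60%

old_in_tok, old_out_tok = 100, 100       # assumed old workload (arbitrary units)
new_in_tok, new_out_tok = 100, 250       # assumed: far more output tokens

old_cost = old_in_tok * old_in_price + old_out_tok * old_out_price
new_cost = new_in_tok * new_in_price + new_out_tok * new_out_price
print(f"cost change: {(new_cost - old_cost) / old_cost:+.0%}")  # prints: cost change: -20%
```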
&lt;h3 id="短板与争议项">
	&lt;a class="h-a" href="#%e7%9f%ad%e6%9d%bf%e4%b8%8e%e4%ba%89%e8%ae%ae%e9%a1%b9">&lt;strong>短板与争议项&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>While lifting its AA-Omniscience Accuracy score by 8 points, Grok 4.3's AA-Omniscience Non-Hallucination Rate fell by &lt;strong>8 points&lt;/strong>. The evaluation data shows that Grok 4.20 0309 v2 still tops this metric, with MiMo-V2.5-Pro close behind; Grok 4.3 now sits level with MiMo-V2.5-Pro. This trade-off between accuracy and non-hallucination suggests that in strengthening instruction following and agentic performance, the model adopted a more aggressive generation strategy and paid for it with a higher hallucination rate.&lt;/p>
&lt;p>Two threads worth watching: whether Grok 4.3 can close the 276-point GDPval-AA gap to GPT-5.5 (xhigh) in its next release, and how xAI goes about reining in the hallucination-rate metric.&lt;/p>&lt;p>© 2026 LLM大模型邮报 · &lt;a href="https://blogs.llmposts.com/models/artificial-analysis-grok-4-3-benchmark/">Read the original →&lt;/a>&lt;/p>&lt;p>First published on &lt;a href="https://blogs.llmposts.com/">LLM 大模型邮报&lt;/a>.&lt;/p></description></item><item><title>CAISI Evaluates DeepSeek V4 Pro: Eight Months Behind US Frontier Models, Standout Cost Efficiency</title><link>https://blogs.llmposts.com/models/caisi-evaluation-deepseek-v4-pro-cost-efficiency/</link><pubDate>Sat, 02 May 2026 10:21:54 +0000</pubDate><author>MISTY</author><guid>https://blogs.llmposts.com/models/caisi-evaluation-deepseek-v4-pro-cost-efficiency/</guid><description>&lt;p>In April 2026, the Center for AI Standards and Innovation (CAISI) completed an independent third-party evaluation of the open-source model DeepSeek V4 Pro. The CAISI technical report finds DeepSeek V4 to still be the most capable Chinese open-source model overall, but roughly 8 months behind the most advanced US models on aggregate benchmarks, while showing a marked cost advantage within its capability tier.&lt;/p>

 
 
&lt;figure class="fig fig--w-text" id="fig-1">
 
 
 
 
 
 
 
 
 
 
 
 &lt;span 
 class="img__frame img__frame--box-shadow" style="display: flex; justify-content: center; align-items: center; background-image: linear-gradient(to right, #e8e7e8, #4d4d4d)">
 &lt;span 
 class="img__c" style="position: relative; width: 100%; height: 0; padding-bottom: 66.2857%;"> 
 
&lt;img class="img--ls img--lqip lazyload" data-optimumx="auto" data-sizes="auto" data-srcset="https://blogs.llmposts.com/1-Overall-AI-Capability_7624365743324086543-512x.webp 512w, https://blogs.llmposts.com/1-Overall-AI-Capability_7624365743324086543-569x.webp 569w, https://blogs.llmposts.com/1-Overall-AI-Capability_7624365743324086543-633x.webp 633w, https://blogs.llmposts.com/1-Overall-AI-Capability_7624365743324086543-700x.webp 700w, https://blogs.llmposts.com/1-Overall-AI-Capability_7624365743324086543-703x.webp 703w, https://blogs.llmposts.com/1-Overall-AI-Capability_7624365743324086543-781x.webp 781w, https://blogs.llmposts.com/1-Overall-AI-Capability_7624365743324086543-869x.webp 869w, https://blogs.llmposts.com/1-Overall-AI-Capability_7624365743324086543-965x.webp 965w, https://blogs.llmposts.com/1-Overall-AI-Capability_7624365743324086543-1073x.webp 1073w, https://blogs.llmposts.com/1-Overall-AI-Capability_7624365743324086543-1193x.webp 1193w, https://blogs.llmposts.com/1-Overall-AI-Capability_7624365743324086543-1325x.webp 1325w, https://blogs.llmposts.com/1-Overall-AI-Capability_7624365743324086543-1473x.webp 1473w, https://blogs.llmposts.com/1-Overall-AI-Capability_7624365743324086543-1637x.webp 1637w, https://blogs.llmposts.com/1-Overall-AI-Capability_7624365743324086543-1820x.webp 1820w" height="464" 
src="data:image/webp;base64,UklGRmQDAABXRUJQVlA4IFgDAACwJgCdASrpAJoAP0WQtlGoP68hLJHq0/YoiekA1cCloJp0/fvlM8JmvuLZ1fp9r+gib48ZWf1y6x/UBlBpRv0vd5J6kBEgz43Utcek4TzO6diQm792YFhIqAZunHlNoh9lqrtsTvSR4H5N1PB5rfbimeNWxVpacWqkh6u8tqmwCPcA7xQZqMrmA7KvFWvpdxQ5lJvs042ed5ie8uBp7NwC442jNt2v51I3TFTVQwz1kN0BBlIiobwmKyrSL9zVzPnNbDUC/xD3FD0busrFi1hmyUOAMW1jc/eri7wT1N2RGXyXKyTe8UXTY8G5CrJNAYRusPQsarDvSQbMPqC0jWdsoBUzdErTJuoI7AtYHgKtSTJKyI7XcPLOFUM5fUnKVNWK8VP/17Lkxe9L8/3nzh7ETYsoPIco5OroysYaRggA/u5ZBkXTWQqisttUFefxPSZR652forUgojTsjO/rtee5IZNOruJeZWymw1d/xxftx6mctrIaB+koIR9Fy+6p2bRDbejQVu6GzWYkKZRcIF808MQwxvelJqlDeF4CrndzevcCfPSJA03lZhifj44FfzV5Agc4FdEl6pSdsACIsXYJx/NA1vZ6kMBRlPTfbNB/WbiyEObuDQhdMrflRQAwVdLi26UxNb46KTh9cnyiFnSmLFAOFvO/IgonasaGLeGfxrCLapYxoWn6xT1QhyIZv1JKth1KRn15Ffc0f/Yo5uwNQNf8SIFVW+7/ynoIp+UpLkxACCaBthiDdFCxOPhmgCuLh5ZLQleNKkf3FFQY0qrbvOf6IE7cAo7v1c5PhB/5HFUwj4mNSvve8z4dt283OhcwDlvN98bYzMzn9l6JB0PAK+h7vUBkfrYZFZQd0bfGSFpkXIR8dpIpQqE1J18LMTNO8+0Fit16aPNCkRv74S6d79A2o0UC6uhGsX6q5jgymWPRNcTIcb3Wc4dGt9K1vyirNTIJA2MYUDxjhhBnmAJZhigeOqXkkBP3czBAUGmO/rjxJ+V6cCpBEMG4B7ikvm+xW/DKpY3zYRme68n0Q92mevFybAgNCrnD5hbLKIGxzXX4LBYoJ5cBile7B3BEmB/UJ9MxMvrDOQWT4L/HE9RWAOP6YvjuJhYGdiuJp5J4wIiuU+7kAAAA" width="700">&lt;/span>

&lt;/span>

 
&lt;/figure>&lt;h3 id="综合基准测试与能力定位">
	&lt;a class="h-a" href="#%e7%bb%bc%e5%90%88%e5%9f%ba%e5%87%86%e6%b5%8b%e8%af%95%e4%b8%8e%e8%83%bd%e5%8a%9b%e5%ae%9a%e4%bd%8d">&lt;strong>综合基准测试与能力定位&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>The CAISI evaluation spans five areas (cybersecurity, software engineering, natural science, abstract reasoning, and mathematics) across &lt;strong>9&lt;/strong> benchmarks. Using item response theory (IRT) to aggregate capability, the report estimates an &lt;strong>Elo score of 800±28&lt;/strong> for DeepSeek V4 Pro. Among the Chinese models tested in the same round, DeepSeek V4 ranks first on every area and overall, but its overall capability is on par with OpenAI's GPT-5, released about 8 months earlier. The test suite includes two uncontaminated closed benchmarks: the ARC-AGI-2 semi-private set and CAISI's in-house PortBench software engineering evaluation.&lt;/p>
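&lt;p>For readers unfamiliar with IRT aggregation: under the common two-parameter logistic (2PL) model, each benchmark item has a difficulty and a discrimination, and a model's latent ability determines its probability of solving the item. The parameter values below are illustrative, not CAISI's.&lt;/p>

```python
# Two-parameter logistic (2PL) IRT: probability that a model with ability
# theta solves an item with discrimination a and difficulty b.
# Parameter values are illustrative only.
import math

def p_correct(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

hard_item = dict(a=1.5, b=1.0)
print(round(p_correct(0.0, **hard_item), 2))  # weaker model, prints: 0.18
print(round(p_correct(2.0, **hard_item), 2))  # stronger model, prints: 0.82
```

&lt;p>Fitting such curves across all 9 benchmarks yields a single latent-ability estimate per model, which the report then maps onto an Elo-style scale.&lt;/p>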
&lt;h3 id="模型自报成绩与第三方复现差异">
	&lt;a class="h-a" href="#%e6%a8%a1%e5%9e%8b%e8%87%aa%e6%8a%a5%e6%88%90%e7%bb%a9%e4%b8%8e%e7%ac%ac%e4%b8%89%e6%96%b9%e5%a4%8d%e7%8e%b0%e5%b7%ae%e5%bc%82">&lt;strong>模型自报成绩与第三方复现差异&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>DeepSeek's official technical report claims that DeepSeek V4 sits in the same capability tier as &lt;strong>Opus 4.6&lt;/strong> and &lt;strong>GPT-5.4&lt;/strong>, released two months earlier. CAISI's independent reproduction, covering the &lt;strong>ARC-AGI-2&lt;/strong> semi-private set, the &lt;strong>PortBench&lt;/strong> software engineering suite, and &lt;strong>CTF-Archive&lt;/strong> cybersecurity challenges, shows DeepSeek V4 performing closer to GPT-5, which had been out for about 8 months. CAISI locked its full benchmark suite before testing, so no outcome-based selective reporting occurred.&lt;/p>

 
 
&lt;figure class="fig fig--w-text" id="fig-2">
 
 
 
 
 
 
 
 
 
 
 
 &lt;span 
 class="img__frame img__frame--box-shadow" style="display: flex; justify-content: center; align-items: center; background-image: linear-gradient(to right, #ddddde, #5470c9)">
 &lt;span 
 class="img__c" style="position: relative; width: 100%; height: 0; padding-bottom: 34.1786%;"> 
 
&lt;img class="img--ls img--lqip lazyload" data-optimumx="auto" data-sizes="auto" data-srcset="https://blogs.llmposts.com/combined_benchmark_plot_13464794738597182879-512x.webp 512w, https://blogs.llmposts.com/combined_benchmark_plot_13464794738597182879-569x.webp 569w, https://blogs.llmposts.com/combined_benchmark_plot_13464794738597182879-633x.webp 633w, https://blogs.llmposts.com/combined_benchmark_plot_13464794738597182879-700x.webp 700w, https://blogs.llmposts.com/combined_benchmark_plot_13464794738597182879-703x.webp 703w, https://blogs.llmposts.com/combined_benchmark_plot_13464794738597182879-781x.webp 781w, https://blogs.llmposts.com/combined_benchmark_plot_13464794738597182879-869x.webp 869w, https://blogs.llmposts.com/combined_benchmark_plot_13464794738597182879-965x.webp 965w, https://blogs.llmposts.com/combined_benchmark_plot_13464794738597182879-1073x.webp 1073w, https://blogs.llmposts.com/combined_benchmark_plot_13464794738597182879-1193x.webp 1193w, https://blogs.llmposts.com/combined_benchmark_plot_13464794738597182879-1325x.webp 1325w, https://blogs.llmposts.com/combined_benchmark_plot_13464794738597182879-1473x.webp 1473w, https://blogs.llmposts.com/combined_benchmark_plot_13464794738597182879-1637x.webp 1637w, https://blogs.llmposts.com/combined_benchmark_plot_13464794738597182879-1820x.webp 1820w" height="239" 
src="data:image/webp;base64,UklGRu4CAABXRUJQVlA4IOICAACwHQCdASrpAFAAPzmCpUmnqaGaesWwmxOE5tol5HthhUMn8Z+w4OjnUWPR7469YIDIF1QISkfiXTXmvtADVcKhgcNokXICv8ZOgGJwqkReysiVFcrjSMXSvf7UPATqbzDL1mcDmF+aW7yMswGfUbuQIyy/UKH4PAgobUdTzFzlUUbssij6kuOUy/PG+cV/KDl5UeDPa2zzy8ZUQknwyxKtgUXdUWSJYyloVJet+zgv2Kd459gZYPkvSoQdhT6ybZNUa76clrz1ay2GUsy78978aG/xSYvkZVWjiLABWF/Qmy9IUn7X+74NgB1Gp0IP2wTls/m7RQAA/tQuqxizwogyfqFMQEct53fHPGqcSu/lngk2NOO9zoouCBqHqIm7j9at3RcZ5TfG5srV6snwtYkvDIGuF/lYVNnokrHnc7CQuaMk8y8Kp0cPby4Cvau7snC+cahmg5+F2Y/WoTaUD9qcNs/a34kHAvdClkyvBo+pFeEETwB7BN05wyV1ws9eH9c12vChKFdUPXQsNm9F/IFSviG2UG7bo5u2SW77WzJC70Bhvs/9F93WjkbS+1UTOi8iMXfitWi8NwCp1EVd6STw2k76x1iGkAYou7Iz1O3VDhSB0GfycrbWliovk/YSu9x1K9q1blx2OsYnDFvmxfq/VZaRGU76GW8GLwRkEfcgj7n46qJ6FFgCvk42gF602om8zq7hD0nw3QX9aCNJMsp7uPoqzE+TDOPb8RXBZMsW3n4BKwJtRnhvFwuha3GHKlg4sPS3MvhGbYFikaKhYKk23HGa6NAhqGsc77YibMz5KPIckL60xz+TQ8l76RFVbco4XAepd0PPqhBr5Ngq9fnznMSY7D0UiN5PQxCLPkrakA7vfzaM0ry3fTik4Ti8DTwHQRDz7WjEGp5pNmUQyj6uCOYUH+fpKsHH0X80q3tbQxMFxghhfu7P8QF2KbYkTjjySUswXHBhpemQl6GEmiIAAAA=" width="700">&lt;/span>

&lt;/span>

 
&lt;/figure>&lt;h3 id="推理成本与性价比测算">
	&lt;a class="h-a" href="#%e6%8e%a8%e7%90%86%e6%88%90%e6%9c%ac%e4%b8%8e%e6%80%a7%e4%bb%b7%e6%af%94%e6%b5%8b%e7%ae%97">&lt;strong>推理成本与性价比测算&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>在同等能力对标实验中，CAISI 选取 &lt;strong>Elo 得分 749&lt;/strong> 的 GPT-5.4 mini 作为美国开源模型参考系。测试结果显示，DeepSeek V4 Pro 在 7 项基准测试中有 5 项的端到端推理成本低于参考模型，各项成本差异从最多低 &lt;strong>53%&lt;/strong> 到最多高 &lt;strong>41%&lt;/strong> 不等。根据开发者披露的 API 定价，DeepSeek V4 Pro 未缓存输入 token 单价为 &lt;strong>$1.74/1M&lt;/strong>，输出 token 单价为 &lt;strong>$3.48/1M&lt;/strong>，在长上下文与高频调用场景下具备明确的商业落地性价比。&lt;/p>
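&lt;p>按上述未缓存定价，可以粗估单次调用成本。以下为示意性草稿：未计入缓存命中折扣或可能的阶梯定价，token 数为假设值：&lt;/p>

```python
def request_cost_usd(input_tokens, output_tokens,
                     input_price_per_m=1.74, output_price_per_m=3.48):
    # 单次调用成本 = 输入 / 输出 token 数 × 每百万 token 单价
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# 假设一次长上下文调用消耗 20 万输入 token、8 千输出 token
print(round(request_cost_usd(200_000, 8_000), 5))  # 0.37584
```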

 
 
&lt;figure class="fig fig--w-text" id="fig-3">
 
 
 
 
 
 
 
 
 
 
 
 &lt;span 
 class="img__frame img__frame--box-shadow" style="display: flex; justify-content: center; align-items: center; background-image: linear-gradient(to right, #e6e6e6, #506cb8)">
 &lt;span 
 class="img__c" style="position: relative; width: 100%; height: 0; padding-bottom: 37.9701%;"> 
 
&lt;img class="img--ls img--lqip lazyload" data-optimumx="auto" data-sizes="auto" data-srcset="https://blogs.llmposts.com/4-Cost-Comparison_1815530718765190592-512x.webp 512w, https://blogs.llmposts.com/4-Cost-Comparison_1815530718765190592-570x.webp 570w, https://blogs.llmposts.com/4-Cost-Comparison_1815530718765190592-635x.webp 635w, https://blogs.llmposts.com/4-Cost-Comparison_1815530718765190592-700x.webp 700w, https://blogs.llmposts.com/4-Cost-Comparison_1815530718765190592-707x.webp 707w, https://blogs.llmposts.com/4-Cost-Comparison_1815530718765190592-788x.webp 788w, https://blogs.llmposts.com/4-Cost-Comparison_1815530718765190592-877x.webp 877w, https://blogs.llmposts.com/4-Cost-Comparison_1815530718765190592-977x.webp 977w, https://blogs.llmposts.com/4-Cost-Comparison_1815530718765190592-1089x.webp 1089w, https://blogs.llmposts.com/4-Cost-Comparison_1815530718765190592-1212x.webp 1212w, https://blogs.llmposts.com/4-Cost-Comparison_1815530718765190592-1350x.webp 1350w, https://blogs.llmposts.com/4-Cost-Comparison_1815530718765190592-1504x.webp 1504w, https://blogs.llmposts.com/4-Cost-Comparison_1815530718765190592-1675x.webp 1675w" height="266" 
src="data:image/webp;base64,UklGRhwEAABXRUJQVlA4IBAEAAAwIQCdASrpAFkAPzF2q0mso6IZzBa8yxME8SAkjAJNaKZt2B4zWsL2XiaoDIaj1U7saj2XsjXVEhCk2v+bPyBSPvunRSJJDUwnWpmAXPwLW3qL1TY+3E25F6wKCaEWXK0sI/LBiQm3qApBToYT/t7BWWd0xiP902p91nV+eMGgql6V9ZeO+dSgteKJDS8zPtV4/ePScBdYhedWElXNRs79rjy3zf4bgRFjBZaFKfZGjqYev64Is/2j+HJY2rp3i3/pwPIj4uYkqtoSUe9yBUgULsgD/XjK0CfGVkNu9a2Wb8Ch1QwAO0KVDYiLi9bWMX3zlvlqQGJ1Jq2yH7spK6YXErYaSeYptDOuHkZTybwUqb9gAP7ynMKXQx6SK3zWCDhxFbfMt98ztSOI+iithKeIalj96PcaQT7+gd2gGSeH4S5/QtX3jz5jkuo1AOvpOHa52yZ5hsQMDaUdXT/Uwct0rc8+JE7W9nRrskkzE1eEYRbW1URRzIT6g4HfSP85dsNCxGkc3HE9iLRqBAxe6qNZt+CKX8PKcyuUu1e9k2XpObdYGzWkTaoJn/XSito2chkhQxpz/WiHKib46mq9lA3yuCJfLNI95Bo8nyikvA0tnOWf4ef//GGnwuFr8awBKCJbbEaAtl1zhaK6kr9Hz1IfbsLbFQjbYDI1RkarpEGUWZDvA6NYJki+jJKevDTho2VH1Daq513IrKxQt7DrGaD9xRD9lx3FBgGaWiw30ugSAc3Dfhpg2/v1UxW6Cne/4UE2MHkaKN1cr1pKKO2ZiYoo9JUW0ZDGX4mNNc3+CbZaG+ZOxKx/h031SBadn4yoNSYuwX2sScMnXYRwwWRgjWv3819Z74xl/iGHijF5GUt914bU905/NP3ctQPbva+EEMD1FGMKzulpbHOBLV3m8c3GsVGOBKpfl2+QOs5sKowYWZdi5RHKDJIaZ/K9IWlZuBM6qWAmtzCUF/BeWRT4R0zxHB6FzZKtcpkDgip1tpvM8L9CqjPMpPWsndBxZNTdDyp3TfXDoBz/PxQFC5CmYUehW46AewJzmmRxfWunJbp31tnw9v2B6PKJRzpFDdyF5SjD9X0pAhDPINZTBNCxuB1/mgt6TT2g6/cXO6JhGmuMLe19JCrrxIebA9mFfO7W4r/nTSHvxWo2zD4uv8TJvKcLIKoZva1UkQOdBfORwlTdZD03ssiQ8YxvXvwAkOiPi1eJD/RMhs0peXmPNYJjov1GvbV1AlemZ18bWWWXiEUg0CiAxXFSddMPoEwAiFRqqPeJnaUhQlVKrawhk/u9MJ7sRGN89rqY7SrNXAqaAgT5ue6GDqNRi2Csp83aEG72JK1WQjRAoiwMNAHJHLrlIX9K1il3fZE/ZEN4ezB8nAAAAA==" width="700">&lt;/span>

&lt;/span>

 
&lt;/figure>&lt;h3 id="评测算力配置与智能体预算">
	&lt;a class="h-a" href="#%e8%af%84%e6%b5%8b%e7%ae%97%e5%8a%9b%e9%85%8d%e7%bd%ae%e4%b8%8e%e6%99%ba%e8%83%bd%e4%bd%93%e9%a2%84%e7%ae%97">&lt;strong>评测算力配置与智能体预算&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>为保证评测公平性，CAISI 基于 &lt;strong>H200&lt;/strong> 与 &lt;strong>B200&lt;/strong> GPU 集群部署 DeepSeek V4 Pro 权重，严格遵循开发者推荐参数进行上下文长度与温度系数设置。智能体任务评测依托 Inspect 框架的 ReAct 智能体，PortBench 与 CTF-Archive-Diamond 的加权 token 预算设为 &lt;strong>1M&lt;/strong>，SWE-Bench Verified 预算设为 &lt;strong>500k&lt;/strong>。报告强调，跨基准测试的加权 token 消耗与智能体控制流程均经过统一标准化处理，以确保不同模型间的性能数据具备可比性。&lt;/p>
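&lt;p>上述加权 token 预算上限的作用机制可以草绘如下（假设性示意，并非 Inspect 框架的实际实现，步骤成本为演示值）：&lt;/p>

```python
def run_with_budget(step_costs, budget):
    # 逐步累计加权 token 消耗；一旦下一步将超出预算即终止智能体回合
    used, done = 0, 0
    for cost in step_costs:
        if used + cost > budget:
            break
        used += cost
        done += 1
    return done, used

# 假设每步消耗 20 万加权 token、预算 50 万（类比 SWE-Bench Verified 的 500k）
print(run_with_budget([200_000] * 5, 500_000))  # (2, 400000)
```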
&lt;p>CAISI 的第三方独立评测为中美开源模型能力代差提供了量化参考。DeepSeek V4 在保持代码与数学推理优势的同时，进一步扩大了相对国际主流推理模型的推理成本优势，后续其长上下文与多模态版本的实际落地表现将决定其在企业级应用市场的占有率。&lt;/p>&lt;p>© 2026 LLM大模型邮报 · &lt;a href="https://blogs.llmposts.com/models/caisi-evaluation-deepseek-v4-pro-cost-efficiency/">阅读原文 →&lt;/a>&lt;/p>&lt;p>本文首发于 &lt;a href="https://blogs.llmposts.com/">LLM 大模型邮报&lt;/a>。&lt;/p></description></item><item><title>阿里开源 Qwen-Scope 可解释性工具 覆盖 7 个 Qwen3/3.5 模型</title><link>https://blogs.llmposts.com/research/alibaba-open-source-qwen-scope-interpretability/</link><pubDate>Sat, 02 May 2026 13:16:00 +0800</pubDate><author>MISTY</author><guid>https://blogs.llmposts.com/research/alibaba-open-source-qwen-scope-interpretability/</guid><description>&lt;p>阿里 Qwen 团队开源可解释性工具 Qwen-Scope，基于 Qwen3 与 Qwen3.5 系列共 &lt;strong>7 个模型&lt;/strong>训练所得，提供 &lt;strong>14 组&lt;/strong>稀疏自编码器（SAE）权重。该工具通过在隐藏层插入 SAE 并施加稀疏性约束，提取高度解耦的可解释性特征，覆盖稠密模型与混合专家模型两类架构。&lt;/p>

 
 
&lt;figure class="fig fig--w-text" id="fig-1">
 &lt;span 
 class="img__frame img__frame--box-shadow">
 &lt;span 
 class="img__c img__c--t-symbol">




 
 

&lt;svg version="1.1" viewBox="0 0 79.375 52.917" xmlns="http://www.w3.org/2000/svg">
 &lt;rect x="-1.3767e-14" y="-8.3785e-15" width="79.375" height="52.917" fill="#b3b3b3" stroke="#efd16d" stroke-dasharray="0.0457185, 0.137156" stroke-linecap="square" stroke-linejoin="round" stroke-width=".045718" style="paint-order:markers fill stroke"/>
 &lt;path d="m44.483 23.693-1.0186-1.0319q-0.0265 0.0132-0.0463 0.0132h-7.2827q-0.3175 0-0.54239-0.22489-0.2249-0.2249-0.2249-0.5424v-7.2827q0-0.0198 0.0132-0.0331l-0.92604-0.93927 0.42333-0.42333 10.041 10.028zm-8.3476-1.614h6.7336l-1.5743-1.5875h-4.3392l1.4552-1.733 1.0319 1.3229 0.635-0.80698-4.1143-4.1143v6.7469q0 0.0794 0.0463 0.12568 0.0463 0.0463 0.12567 0.0463zm8.2285-0.59531-0.59531-0.60854v-6.4294q0-0.0794-0.0463-0.12568-0.0463-0.0463-0.12568-0.0463h-6.4294l-0.60854-0.59531h7.0379q0.3175 0 0.5424 0.2249 0.22489 0.22489 0.22489 0.54239z" stroke-width=".26458"/>
 &lt;text x="39.59119" y="38.357533" font-family="'IBM Plex Mono'" font-size="7.0556px" font-weight="500" letter-spacing="0px" stroke-width=".26458" text-align="center" text-anchor="middle" word-spacing="0px" style="font-variant-caps:normal;font-variant-east-asian:normal;font-variant-ligatures:normal;font-variant-numeric:normal;line-height:1.25" xml:space="preserve">&lt;tspan x="39.59119" y="38.357533" font-family="'IBM Plex Mono'" font-size="7.0556px" font-weight="500" stroke-width=".26458" style="font-variant-caps:normal;font-variant-east-asian:normal;font-variant-ligatures:normal;font-variant-numeric:normal">Missing image&lt;/tspan>&lt;/text>
&lt;/svg>
&lt;/span>

&lt;/span>

 
&lt;/figure>&lt;h3 id="覆盖范围与训练规模">
	&lt;a class="h-a" href="#%e8%a6%86%e7%9b%96%e8%8c%83%e5%9b%b4%e4%b8%8e%e8%ae%ad%e7%bb%83%e8%a7%84%e6%a8%a1">&lt;strong>覆盖范围与训练规模&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>官方技术报告显示，Qwen-Scope 的训练数据采样自对应模型的预训练语料，规模为 &lt;strong>0.5B 词元&lt;/strong>，以确保特征分布广泛、语义稳定。开源权重涵盖 Qwen3-1.7B-Base、Qwen3-8B-Base、Qwen3-30B-A3B-Base、Qwen3.5-2B-Base、Qwen3.5-9B-Base、Qwen3.5-27B 指令模型与 Qwen3.5-35B-A3B-Base 共 7 个模型，SAE 特征数从 &lt;strong>32K&lt;/strong> 到 &lt;strong>128K&lt;/strong> 不等，扩展倍数为 16 倍或 64 倍。&lt;/p>
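&lt;p>文中“在隐藏层插入 SAE 并施加稀疏性约束”的机制，可以用一个极简前向草稿示意（假设性实现：维度、初始化与损失系数均为演示值，并非 Qwen-Scope 的实际架构）：&lt;/p>

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 8, 128                  # 16 倍扩展，对应文中 16 倍 / 64 倍的扩展倍数
W_enc = rng.normal(size=(d_model, d_sae)) * 0.1
b_enc = np.zeros(d_sae)
W_dec = rng.normal(size=(d_sae, d_model)) * 0.1

def sae_forward(h, l1_coef=1e-3):
    # 编码：ReLU 保证特征非负，配合 L1 项促使激活稀疏、特征解耦
    f = np.maximum(h @ W_enc + b_enc, 0.0)
    h_hat = f @ W_dec                    # 解码：用稀疏特征重建隐藏状态
    loss = np.mean((h - h_hat) ** 2) + l1_coef * np.abs(f).sum()
    return f, h_hat, loss

f, h_hat, loss = sae_forward(rng.normal(size=d_model))
```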
&lt;h3 id="推理结果定向控制">
	&lt;a class="h-a" href="#%e6%8e%a8%e7%90%86%e7%bb%93%e6%9e%9c%e5%ae%9a%e5%90%91%e6%8e%a7%e5%88%b6">&lt;strong>推理结果定向控制&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>通过控制特征激活，Qwen-Scope 可实现对推理结果的定向修改，涵盖语言、实体、风格等维度，无需显式给出自然语言指令。该能力可用于内容风格统一、跨语言输出控制等场景。&lt;/p>
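&lt;p>“控制特征激活以定向修改推理结果”的最小数学形式，是把某个可解释特征对应的解码方向按系数加回隐藏状态（假设性草稿，方向向量与强度系数均为演示值）：&lt;/p>

```python
import numpy as np

def steer(h, feature_direction, alpha=4.0):
    # 激活引导：沿单个 SAE 特征的解码方向平移隐藏状态，alpha 控制干预强度
    return h + alpha * feature_direction

rng = np.random.default_rng(1)
h = rng.normal(size=16)                      # 某一层的隐藏状态（演示值）
direction = rng.normal(size=16)
direction /= np.linalg.norm(direction)       # 归一化后的特征解码方向
h_steered = steer(h, direction)
```

&lt;p>alpha 取正值放大该特征（如某种语言或风格），取负值则抑制之。&lt;/p>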

 
 
&lt;figure class="fig fig--w-text" id="fig-2">
 
 
 
 
 
 
 
 
 
 
 
 &lt;span 
 class="img__frame img__frame--box-shadow" style="display: flex; justify-content: center; align-items: center; background-image: linear-gradient(to right, #dbdce6, #414550)">
 &lt;span 
 class="img__c" style="position: relative; width: 100%; height: 0; padding-bottom: 45.5782%;"> 
 
&lt;img class="img--ls img--lqip lazyload" data-optimumx="auto" data-sizes="auto" data-srcset="https://blogs.llmposts.com/inference_14937596034327710922-512x.webp 512w, https://blogs.llmposts.com/inference_14937596034327710922-569x.webp 569w, https://blogs.llmposts.com/inference_14937596034327710922-633x.webp 633w, https://blogs.llmposts.com/inference_14937596034327710922-700x.webp 700w, https://blogs.llmposts.com/inference_14937596034327710922-703x.webp 703w, https://blogs.llmposts.com/inference_14937596034327710922-781x.webp 781w, https://blogs.llmposts.com/inference_14937596034327710922-869x.webp 869w, https://blogs.llmposts.com/inference_14937596034327710922-965x.webp 965w, https://blogs.llmposts.com/inference_14937596034327710922-1073x.webp 1073w, https://blogs.llmposts.com/inference_14937596034327710922-1193x.webp 1193w, https://blogs.llmposts.com/inference_14937596034327710922-1325x.webp 1325w, https://blogs.llmposts.com/inference_14937596034327710922-1473x.webp 1473w, https://blogs.llmposts.com/inference_14937596034327710922-1637x.webp 1637w, https://blogs.llmposts.com/inference_14937596034327710922-1820x.webp 1820w" height="319" 
src="data:image/webp;base64,UklGRuQEAABXRUJQVlA4INgEAADwKQCdASrpAGoAPylyo0msJ6Gduc3cwxKE8bYIBCmc7gwjQgGLwvuRJnFR+EMPL9YU8rXTE8JzcYKrLWOq+MePlTsNoTsOJiHvicR0QLbyuATa3/gSFfXids4Vu51a6P2eGUXo6NBDL2ahuv0X2nQUmJMqR3PUfcubmG3+xDEHGTBQN669cSLGr5r6WN2A1/0B/UKqgZPZzdmgNH8uI72nwRAAmkjJ/TAfkQs/60GYpHX25VnIf0+kfL3pwIBxRKhfLB74PEENP6LtoqQPhDd2kGA4Vh/f6F/AXMRN941INhaTk+QvtLiK0K+lvpq+xTFLnlezyC+JvKAY1aJ6xQudy/JDf6NpPNcqJPJFQPeBT92tXRJsDU8NwSgxkNc73lf5fMkzEPZ+01GrkordsTPizCf5q5AovLpbFh6LWAW+bR8zbCDWqalADVrrGDkshSaYDyvHplSJoAD+77Dh1klP0htQbchEF1gRarahp2heVPR2V97s8XkBhI2Tz7nvNxuzlr7kOYn8vaEDpEDn92M7YvOVSDb/Jah/fM/MOrKE8VGxRE7MIwKspbWBofuX6U47uFAZ6kZWLtBRd38NyAxrSL59owzviGIY0jWuW9oW3RfdfNcxCIFHCeAMUYYIYfn2YDrCIyPa3JOjEnFoaNf16LepYXDLWRa/R7jAw61i8K1FSoYlhIG4Z5KIhtMz/2ar0w39iPcCG3NYXgRN+SQVAw6gqgbf6NAACPjaaK77nFGCLeoRpkwV8hnCUFMfa3/VuI9zsvR8FOeG/I0iIo55z/mvZNYlxJeRa7F3rK91UGO95n3vXIcuCy5WsAp+nd67yFhMJxhI7v1gIBUWfpplOUTvJ0uGXj2rfbcunLBeRY1F1pWnXenTn4jcwOtenvhc8v15IgMsMdgbkUSQ8omKcx5fNxpIUgT3gZwLEOtJ7LRVcUnnPTLg5KzHDZ0qTa+8+6Gf/uBMrDKo/GJUQOiZf7XFsixvDlbzv9r6sxA/ZPXAZv/y3jLiw54Vk7UYrvy8y5jJ7If+cqI6nqVjYAeU8cgo9MfluXB93ZLjYED0NlmixftJEZ/0M/hARZsamGh+00KgUs3VziwPUteNsoSgliqQEgRxapVC0duesSEbPNvo5eTmhY2bYgABRYNqt6htPpYZ6puVCrMkrrk/H+hhGMo7sDQFLtJmdky5a42l2+BJpOwMSXYOnLdy3d7+Hxf6VZp2MBS/K2tfya5jRXOtIPNxP6EuyXx5grMtYjIrPTaL8wKvxOTxDl3X84hdl4853pBL4Guw0bdh2DUWGvQ9UyUoJGTVBIgi2JRF3UCd9JqE4NtEloMAklg+fA2JaxeEkVONjOM0P+f5p/njK59aSQr+OL5OED+Lw/uAf44yMJgAhD6NOrYWRrYLqLuzDKBoKdVq610vnleMZMmKrpukTgpyNJWH0Nw3M0BU0KFh3aVDhDL/ec9/jITXz2M6vuiLz8SDsPt7rlcCXLbA/o7LaKpBPzFd9Ni1B3v/5gh97kTCMYXmgqOo21fRGFWBYyieqTdwYV3sPD1SEgYHUkCWcUmuaDA06xXys5+KLNpTPrnvx1jqgrS1vnGTp3Ky8EIOSRHEXg+bYLHn1dn3ZWiPUsi7IWhfG/RtyTq8c+q263cYhkfCn+AA" width="700">&lt;/span>

&lt;/span>

 
&lt;/figure>&lt;h3 id="数据分类与长尾合成">
	&lt;a class="h-a" href="#%e6%95%b0%e6%8d%ae%e5%88%86%e7%b1%bb%e4%b8%8e%e9%95%bf%e5%b0%be%e5%90%88%e6%88%90">&lt;strong>数据分类与长尾合成&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>在毒性数据分类场景中，基于少量种子数据即可分析毒性样本的 SAE 激活模式，筛选高相关特征用于分类，无需额外训练分类器。在数据合成层面，可识别已有数据中激活次数少甚至未激活的特征，定向补充长尾样本，官方数据显示训练数据能效比可提升至约 &lt;strong>15 倍&lt;/strong>。&lt;/p>
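&lt;p>基于少量种子数据筛选高相关特征再直接打分的流程可以草绘如下（假设性示意：用合成激活数据模拟“毒性样本在少数特征上激活更强”的情形）：&lt;/p>

```python
import numpy as np

rng = np.random.default_rng(2)
# 合成 SAE 激活：每行一个种子样本，共 64 个特征；假设特征 0-3 在毒性样本上更活跃
toxic = rng.normal(size=(20, 64))
toxic[:, :4] += 3.0
clean = rng.normal(size=(20, 64))

# 按两组激活均值之差，挑出与毒性标签最相关的 4 个特征
gap = toxic.mean(axis=0) - clean.mean(axis=0)
top = np.argsort(gap)[-4:]

def toxicity_score(acts):
    # 无需训练分类器：对选中特征的激活求和即可作为毒性分
    return float(acts[top].sum())
```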

 
 
&lt;figure class="fig fig--w-text" id="fig-3">
 
 
 
 
 
 
 
 
 
 
 
 &lt;span 
 class="img__frame img__frame--box-shadow" style="display: flex; justify-content: center; align-items: center; background-image: linear-gradient(to right, #e3e0e5, #4a4a4a)">
 &lt;span 
 class="img__c" style="position: relative; width: 100%; height: 0; padding-bottom: 45.8750%;"> 
 
&lt;img class="img--ls img--lqip lazyload" data-optimumx="auto" data-sizes="auto" data-srcset="https://blogs.llmposts.com/data_synthesis_8379029918825339940-512x.webp 512w, https://blogs.llmposts.com/data_synthesis_8379029918825339940-569x.webp 569w, https://blogs.llmposts.com/data_synthesis_8379029918825339940-633x.webp 633w, https://blogs.llmposts.com/data_synthesis_8379029918825339940-700x.webp 700w, https://blogs.llmposts.com/data_synthesis_8379029918825339940-703x.webp 703w, https://blogs.llmposts.com/data_synthesis_8379029918825339940-781x.webp 781w, https://blogs.llmposts.com/data_synthesis_8379029918825339940-869x.webp 869w, https://blogs.llmposts.com/data_synthesis_8379029918825339940-965x.webp 965w, https://blogs.llmposts.com/data_synthesis_8379029918825339940-1073x.webp 1073w, https://blogs.llmposts.com/data_synthesis_8379029918825339940-1193x.webp 1193w, https://blogs.llmposts.com/data_synthesis_8379029918825339940-1325x.webp 1325w, https://blogs.llmposts.com/data_synthesis_8379029918825339940-1473x.webp 1473w, https://blogs.llmposts.com/data_synthesis_8379029918825339940-1637x.webp 1637w, https://blogs.llmposts.com/data_synthesis_8379029918825339940-1820x.webp 1820w" height="321" 
src="data:image/webp;base64,UklGRtADAABXRUJQVlA4IMQDAABwIwCdASrpAGsAPyl8qkm6PiIit3XaS8YlCelt+eiJ1yUVAobVrFXsSGjpGeJe1JJ9UvBNOlGC2PKLANBcrenEdZZpIAhoiqCWMCdQY8tkFJR26iNGqF9USvq5eXt3G/2/CgiOGxxUKoeyQkC5zgZPyrtulnojBK9fdYOJwmQvggwEND3v4ZA05Ww+uYGJ0eKYt81YI5ijXekPBgwFrUWokST3fvuFrZ8S4L11lVgUgUvhDSD5p+7KKDoIY/B8Zd1/beXmmdp1Q9hn6/567gM/2TkA+dIcGuU2+5prxaCXZutKGZVXbHI3bar/fbjo0OPzL6EgTsECY5yfJkK64pIMQGPcP04aK/6Hbat3ZC6i/H0EX0j0rkXRNp3hLZfghVJKpVpAAP7vI4no/3maR9CMN0ZUT7rJIfGSHNmn4+h9dmo83f5JIz5MmnHU6V5dk88PN4Vf++Vb6nennI2Tn99Ma5lWjwjb5EoOFYQh8aBzYNNrO40jvi3+fe/ESf8xaNpyKIaHr1pjoW4ymZpECM9nz98IQMnpbEVPYakGknQpVHa+UFucet7MZ7aTPiDY8Qm2YeGPr1rUGEDDPVQgdGUMGmAff6z12mtRkXx+au+2pq9NmxOfkst8XeQ6sAnFPki3W4ImZOhTiq5YsWiiztstFfOeTxLAne0LLjF95xO5nxLtFmQkO7nAW8qExlmOGpUrhdIM7U8En8ggeEnEsBG4Hvh5I0f39Gno+CCtsKotZlpI27B4hSRvPhd4LxypR2V5C+OUr0qesXaxNdOibBFt0u5ooyKmqXT++7ZgC2YOBEqHLFHtqHUtE6K/QcFsN+9PbEJqPebEn4LZdN/cvyT2Ws2/DwxxTGjPCQTl+odFPoHj0sQwF4ZeskUPi7YrkQeDH0OPeD/+rVLGAF9tJMFHT3DTr5TqNbyWZyynu4+E9a0b3lm0KGiKr7PNCq8CCtEbSBGzBbhM6lJDaIr/evChwbUPK/Sj7EKfit7NPUW1NSUood7Xc/AJ70sBCdBv46Zy2OTBE+7eri71QF7pgJANDGSptiV/O6QBUE0nZi5CDT8kEGx/XUy4JSE2IAvD53A4UoYFGf3jcy+IvBXWrcexX+sssyv0+hdpQxit26D3OifHMC/ag+fsVp4I9sdzVMd12XN3YM3hdLEwPtVyLOVl0s7CGf1SpRW+XqwL02JTId5FSHpchRJYxCHm87mZjDeJTWISX3IXEoBpv5OUkc1eBZMsibCB3QlqJXiW2x4Zp/Zd0DWoW+9oxV//o+IANSAAAAAA" width="700">&lt;/span>

&lt;/span>

 
&lt;/figure>&lt;h3 id="训练阶段的定向调优">
	&lt;a class="h-a" href="#%e8%ae%ad%e7%bb%83%e9%98%b6%e6%ae%b5%e7%9a%84%e5%ae%9a%e5%90%91%e8%b0%83%e4%bc%98">&lt;strong>训练阶段的定向调优&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>Qwen-Scope 可定位语言混用、重复生成等低频错误对应的异常激活特征。在监督微调阶段，可针对异常特征设计损失函数降低 badcase 频率；在强化学习阶段，可通过控制特征提高异常采样频率，增加学习奖励密度。&lt;/p>
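&lt;p>“针对异常特征设计损失函数”的一种最小形式，是在 SFT 交叉熵之外追加一项对异常特征激活的惩罚（假设性草稿：特征索引与系数均为演示值）：&lt;/p>

```python
import numpy as np

def sft_loss_with_feature_penalty(ce_loss, sae_features, bad_idx, lam=0.1):
    # 总损失 = 原始交叉熵 + λ × 异常特征（如语言混用、重复生成对应特征）的激活强度
    return ce_loss + lam * float(np.abs(sae_features[bad_idx]).sum())

feats = np.zeros(64)
feats[[7, 21]] = 2.0        # 假设特征 7 与 21 被定位为 badcase 相关
loss = sft_loss_with_feature_penalty(1.5, feats, bad_idx=[7, 21])
# loss = 1.5 + 0.1 × 4.0 ≈ 1.9
```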

 
 
&lt;figure class="fig fig--w-text" id="fig-4">
 
 
 
 
 
 
 
 
 
 
 
 &lt;span 
 class="img__frame img__frame--box-shadow" style="display: flex; justify-content: center; align-items: center; background-image: linear-gradient(to right, #e8e8e9, #442a8c)">
 &lt;span 
 class="img__c" style="position: relative; width: 100%; height: 0; padding-bottom: 36.0426%;"> 
 
&lt;img class="img--ls img--lqip lazyload" data-optimumx="auto" data-sizes="auto" data-srcset="https://blogs.llmposts.com/training_sft_12775729077747104358-512x.webp 512w, https://blogs.llmposts.com/training_sft_12775729077747104358-568x.webp 568w, https://blogs.llmposts.com/training_sft_12775729077747104358-630x.webp 630w, https://blogs.llmposts.com/training_sft_12775729077747104358-700x.webp 700w, https://blogs.llmposts.com/training_sft_12775729077747104358-776x.webp 776w, https://blogs.llmposts.com/training_sft_12775729077747104358-861x.webp 861w, https://blogs.llmposts.com/training_sft_12775729077747104358-956x.webp 956w, https://blogs.llmposts.com/training_sft_12775729077747104358-1060x.webp 1060w, https://blogs.llmposts.com/training_sft_12775729077747104358-1177x.webp 1177w, https://blogs.llmposts.com/training_sft_12775729077747104358-1306x.webp 1306w, https://blogs.llmposts.com/training_sft_12775729077747104358-1449x.webp 1449w, https://blogs.llmposts.com/training_sft_12775729077747104358-1608x.webp 1608w, https://blogs.llmposts.com/training_sft_12775729077747104358-1784x.webp 1784w" height="252" 
src="data:image/webp;base64,UklGRtYCAABXRUJQVlA4IMoCAABwHQCdASrpAFQAPzF8sEm7pCKae+1JuxME84Bq4FODnKgjYRe84o8sLsNe35lMWy2kuabt+3UGz+LWCBQPObu49T98IeZAexqjEWSNQzqvrYHE0vr0EtVc1UVe+xwoQT3yU2y1saFBtPQl1gBLzxjv6mueoqvkFTQm9qZZz7wDuVjtTH+aCSJ2uLLDtcEEC1O2YqPMRJj1EHk1fCmDxC9peAovoC6S3pR71Bo3sqNwIyb1kW47cppX1/AnWUkoUIHyrSuZl4hHnmtDck9sXmVJru3uRRUADo8pKpzlAskaxNxjA29f+dxBxIt1QOlvNUndp7oAAP7xAUe2lpMtiDdC4tbxEa3hUPtexn8zzfe1umRnMA0Syy8NL1FXXUfUxyiUotvhWM7YtY+Z/7o+8wUSHPiJcVHvY9dBfqKKdSkwBwuKfYjIvCVggNtP10juK3Un9qnF5TSIW8L3M2H02U0ycDfdmQp/lOLWJbDekyUbz6WU+ktBt5lx6YazykkGmYfmnppxdZOJm6URyLFFmyyV3NsG+KWM70kKFB1aIXHcNElFJASoegohMBRSnIw4OmLMDKs97TXvCU31IljQHoe6qVIWxXExE2HlSG5Eb3vrZucO9lKY3zomjAADLjevg0uD0r3klCPhqVz4WwFpcElOcx9v9dnGzbvCPBJKjCTe+bK997J8XS5eAA5vwDrYsKkU155TXdi1WGCC6AutlGtn7gnKM/0siohCAC+zvOZC8ftHUQVgoaJdTthI/YGambFsKz68mR8XlTkvDRG4SBgkMpjoiIBm1wNCnQnFd9kROg/xEHe9MQnxUGbMVEzgi/QzxHdo3EilQtgJNoXkoJxdiSnS3NNZa8jS7RSOGcwA98xVz3rRJKJEo2jkB41ApbjqG2j3s7sh7BUCPuvkaM5WXQAudp7GOb0+mfECr+IQ1bNqsAWSth25gAA=" width="700">&lt;/span>

&lt;/span>

 
&lt;/figure>&lt;h3 id="评估冗余度分析">
	&lt;a class="h-a" href="#%e8%af%84%e4%bc%b0%e5%86%97%e4%bd%99%e5%ba%a6%e5%88%86%e6%9e%90">&lt;strong>评估冗余度分析&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>通过对比不同评测集间的特征激活模式，Qwen-Scope 可量化评测集之间的冗余程度。Qwen 团队指出，部分常用评测集在激活特征上存在互相覆盖，导致重复评估，该工具可辅助挑选覆盖度更高、成本更低的测试样本。&lt;/p>
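&lt;p>评测集冗余度的一种直观度量，是比较两套评测集各自激活的 SAE 特征集合的重叠率（假设性示意，特征编号为演示值）：&lt;/p>

```python
def activation_jaccard(feats_a, feats_b):
    # Jaccard 重叠率：激活特征集合的交并比，越高说明两套评测覆盖越重复
    a, b = set(feats_a), set(feats_b)
    return len(a.intersection(b)) / len(a.union(b))

# 假设两份评测集分别激活的特征编号
bench_x = {101, 205, 318, 442}
bench_y = {205, 318, 442, 990}
print(activation_jaccard(bench_x, bench_y))  # 0.6
```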

 
 
&lt;figure class="fig fig--w-text" id="fig-5">
 
 
 
 
 
 
 
 
 
 
 
 &lt;span 
 class="img__frame img__frame--box-shadow" style="display: flex; justify-content: center; align-items: center; background-image: linear-gradient(to right, #dbdbe1, #452568)">
 &lt;span 
 class="img__c" style="position: relative; width: 100%; height: 0; padding-bottom: 71.7037%;"> 
 
&lt;img class="img--ls lazyload" data-lowsrc="https://blogs.llmposts.com/evaluation_13597844546773165039-700x-233x-lqip.webp" data-optimumx="auto" data-sizes="auto" data-srcset="https://blogs.llmposts.com/evaluation_13597844546773165039-512x.webp 512w, https://blogs.llmposts.com/evaluation_13597844546773165039-570x.webp 570w, https://blogs.llmposts.com/evaluation_13597844546773165039-635x.webp 635w, https://blogs.llmposts.com/evaluation_13597844546773165039-700x.webp 700w, https://blogs.llmposts.com/evaluation_13597844546773165039-707x.webp 707w, https://blogs.llmposts.com/evaluation_13597844546773165039-788x.webp 788w, https://blogs.llmposts.com/evaluation_13597844546773165039-877x.webp 877w, https://blogs.llmposts.com/evaluation_13597844546773165039-977x.webp 977w, https://blogs.llmposts.com/evaluation_13597844546773165039-1088x.webp 1088w, https://blogs.llmposts.com/evaluation_13597844546773165039-1212x.webp 1212w, https://blogs.llmposts.com/evaluation_13597844546773165039-1350x.webp 1350w" height="502" src="https://blogs.llmposts.com/evaluation_13597844546773165039-700x.webp" srcset="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7" width="700">&lt;/span>

&lt;/span>

 
&lt;/figure>&lt;p>Qwen-Scope 权重已上线 Hugging Face 与 ModelScope（魔搭）。可解释性工具与底座模型同步开源的做法，在国内大模型团队中较为少见，后续在第三方研究中的实际应用值得关注。&lt;/p>&lt;p>© 2026 LLM大模型邮报 · &lt;a href="https://blogs.llmposts.com/research/alibaba-open-source-qwen-scope-interpretability/">阅读原文 →&lt;/a>&lt;/p>&lt;p>本文首发于 &lt;a href="https://blogs.llmposts.com/">LLM 大模型邮报&lt;/a>。&lt;/p></description></item><item><title>OpenAI 正式宣布 Codex Pets 宠物体验功能</title><link>https://blogs.llmposts.com/models/openai-codex-pets-launch/</link><pubDate>Sat, 02 May 2026 03:28:07 +0000</pubDate><author>MISTY</author><guid>https://blogs.llmposts.com/models/openai-codex-pets-launch/</guid><description>&lt;p>OpenAI 已在 Codex 应用中正式上线 &lt;strong>Codex Pets&lt;/strong> 功能。根据 &lt;a class="link link--text" href="https://developers.openai.com/codex/app/settings" rel="external">OpenAI Codex 官方设置文档&lt;/a>，Pets 是一组&lt;strong>可选的动画伙伴&lt;/strong>(optional animated companions for the app)，以悬浮覆盖层(floating overlay)形式存在，既承担陪伴角色，也作为 Codex 任务的&lt;strong>实时状态指示器&lt;/strong>。用户可在 &lt;strong>Settings&lt;/strong> 中前往 &lt;strong>Appearance&lt;/strong> 并选择 &lt;strong>Pets&lt;/strong>，挑选内置宠物或刷新本地自定义宠物，亦可通过 &lt;strong>hatch-pet skill&lt;/strong> 创建专属宠物。&lt;/p>

 
 
&lt;figure class="fig fig--w-text" id="fig-1">
 
 
 
 
 
 
 
 
 
 
 
 &lt;span 
 class="img__frame img__frame--box-shadow" style="display: flex; justify-content: center; align-items: center; background-image: linear-gradient(to right, #e0e1e2, #4b5ca5)">
 &lt;span 
 class="img__c" style="position: relative; width: 100%; height: 0; padding-bottom: 61.8750%;"> 
 
&lt;img alt="OpenAI Codex Pets 桌面动画伙伴功能正式发布截图" class="img--ls img--lqip lazyload" data-optimumx="auto" data-sizes="auto" data-srcset="https://blogs.llmposts.com/codex_pets_17578447152682100101-512x.webp 512w, https://blogs.llmposts.com/codex_pets_17578447152682100101-569x.webp 569w, https://blogs.llmposts.com/codex_pets_17578447152682100101-633x.webp 633w, https://blogs.llmposts.com/codex_pets_17578447152682100101-700x.webp 700w, https://blogs.llmposts.com/codex_pets_17578447152682100101-703x.webp 703w, https://blogs.llmposts.com/codex_pets_17578447152682100101-781x.webp 781w, https://blogs.llmposts.com/codex_pets_17578447152682100101-869x.webp 869w, https://blogs.llmposts.com/codex_pets_17578447152682100101-965x.webp 965w, https://blogs.llmposts.com/codex_pets_17578447152682100101-1073x.webp 1073w, https://blogs.llmposts.com/codex_pets_17578447152682100101-1193x.webp 1193w, https://blogs.llmposts.com/codex_pets_17578447152682100101-1325x.webp 1325w, https://blogs.llmposts.com/codex_pets_17578447152682100101-1473x.webp 1473w, https://blogs.llmposts.com/codex_pets_17578447152682100101-1637x.webp 1637w, https://blogs.llmposts.com/codex_pets_17578447152682100101-1820x.webp 1820w" height="433" 
src="data:image/webp;base64,UklGRnYDAABXRUJQVlA4IGoDAABQIQCdASrpAJAAP0WOuVcoP7+sqJB6k/YoieluAy1PJPA+P0TmYVi/kv2cSfIb3ohBBhZ3i9VTyi6G9vlyyQkA7ZUJ8fHCU7eVAZtQiebw2eLoqy2eDW8PfSFbJp7N3rlChpd0Z5SW7poW/PY4Cj+oQcAMFEVuiepCuFCTLa+EjtSYZjAX/2x/Ou6CErJh/Hj6Lymwgd0v+AR9XKAcEePYSGmyQX+CcID0u51EWaSeSP4X114FqsFwv4qAPnXQs5inzOGfb1Cs5AHiQ4xnGEwXGrIkuUfyAJSCCd3ZCvRYQJ3IKgRdZBl1S6doPql0tDrRUPqgYMn5grjED324VZQoTDrYe/GsrIAU1UYWFg6AMqdbcAD+2wbm/yOwhuw5cRhraoyenF8aF54RYDp20W+15M6DH/JWNc1Q4ECZYQS/jgayIKmxwvlplYRIHq77r9v6uwzKS4NXdAWbJbMzAgSjn49UO9wwGJFGLOJz0y/ZwC98YYukyDPkYjW906zi52A4MspKdS7ouhcwjFDF3NCco650cBEZW7ejgeQccCgrj3/GV1Zj7NFJr2zbZ5VthLR1EDJX+3ghk3935aI0cLsRoeY4v94ZNdCBWhP4/xrZrSIjDq1H7eS4Zh7GT3GT+OJfINqK3y7SLAmcKPZqVAf7ga9hwOQfiFIrDhJ6kU2TF/zpazrEDm0ympILkLSX7onDfRxetIG3DN+Sfwk3LZxQg2KJe44gHvbI2MY/QyTOn4AxEHpJgqcQn2V7B/O5cEyidI4MaaeEGqIwSt+2EIr/Psw0rQkBJzHpIQg5OSG0KVi/fMJQ3aXFUxy4NZwFKUiyJvyp8H/Nt22jWAyRu5c+89sNlX71DqVA9hqUMzH09KYx73DpDm3kqjXUohBBZqi29oDNMuP57HLDqrMwgBMI/GPpt3BTiZ8/+Gyp2dfkkcO0uoUbRnCi4B2YAAK8PuUbkYN6D+SomsaTtKdKyDj1C5HDguSw8c6WW8qUt/1jbpiqK2WUB0HpZ048edTw6pLEdNviQ0W9zyoCVNyiMWGR931AuVshV3n9zevhcMjx5yqGGNv/xPLNsCEhjSsJal00Yj7tsoEZgR/KvCJ+WXzJhHHOy/ruRnHAQe626/Yk589zJzxzMO3ZvSGXEKrw9Cwj4BNZHx8hUQAA" width="700">&lt;/span>

&lt;/span>

 
&lt;/figure>&lt;h3 id="codex-pets-是什么appearance-设置下的可选动画伙伴">
	&lt;a class="h-a" href="#codex-pets-%e6%98%af%e4%bb%80%e4%b9%88appearance-%e8%ae%be%e7%bd%ae%e4%b8%8b%e7%9a%84%e5%8f%af%e9%80%89%e5%8a%a8%e7%94%bb%e4%bc%99%e4%bc%b4">&lt;strong>Codex Pets 是什么：Appearance 设置下的可选动画伙伴&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>根据官方文档，Codex Pets 的官方定义是&lt;strong>“optional animated companions for the app”&lt;/strong>（应用的可选动画伙伴），意味着不启用不会影响 Codex 的任何核心能力。该功能位于 &lt;strong>Settings → Appearance&lt;/strong> 章节之下，与主题、配色、UI 字体、代码字体等外观配置属同一层级。在 &lt;strong>Appearance&lt;/strong> 中选择 &lt;strong>Pets&lt;/strong>，即可挑选一个内置宠物，或&lt;strong>从本地 Codex home 目录刷新自定义宠物&lt;/strong>（refresh custom pets from your local Codex home）。这里“local Codex home”的措辞点明了一个关键事实：自定义宠物以本地资产形式存在，而不是云端配置。&lt;/p>
&lt;h3 id="核心实用价值跨应用悬浮的任务状态指示器">
	&lt;a class="h-a" href="#%e6%a0%b8%e5%bf%83%e5%ae%9e%e7%94%a8%e4%bb%b7%e5%80%bc%e8%b7%a8%e5%ba%94%e7%94%a8%e6%82%ac%e6%b5%ae%e7%9a%84%e4%bb%bb%e5%8a%a1%e7%8a%b6%e6%80%81%e6%8c%87%e7%a4%ba%e5%99%a8">&lt;strong>核心实用价值：跨应用悬浮的任务状态指示器&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>Codex Pets 看似装饰性，实则承担了&lt;strong>状态指示器&lt;/strong>的实用角色。官方文档明确说明，Pets 悬浮覆盖层&lt;strong>“keeps active Codex work visible while you use other apps”&lt;/strong>，即在你切换到其他应用时依然保持 Codex 工作可见。覆盖层会反馈三类信息：&lt;strong>当前活动的 thread&lt;/strong>、&lt;strong>Codex 当前状态&lt;/strong>(&lt;em>running&lt;/em>、&lt;em>waiting for input&lt;/em>、&lt;em>ready for review&lt;/em> 三种之一)，以及一段&lt;strong>简短的进度提示&lt;/strong>(short progress prompt)。这样开发者无需重新打开 thread 即可一眼看到变化(&lt;em>“glance at what changed without reopening the thread”&lt;/em>)，对长任务执行场景极为友好。&lt;/p>
&lt;h3 id="三种等价切换方式适配键盘党与鼠标党">
	&lt;a class="h-a" href="#%e4%b8%89%e7%a7%8d%e7%ad%89%e4%bb%b7%e5%88%87%e6%8d%a2%e6%96%b9%e5%bc%8f%e9%80%82%e9%85%8d%e9%94%ae%e7%9b%98%e5%85%9a%e4%b8%8e%e9%bc%a0%e6%a0%87%e5%85%9a">&lt;strong>三种等价切换方式：适配键盘党与鼠标党&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>官方为 Codex Pets 提供了三种功能等价的开关方式，以适应不同操作习惯。第一种是&lt;strong>命令输入&lt;/strong>：在 composer(输入框)中直接键入 &lt;strong>/pet&lt;/strong>，适合命令行习惯的开发者；第二种是&lt;strong>设置面板按钮&lt;/strong>：进入 &lt;strong>Settings → Appearance&lt;/strong>，点击 &lt;strong>Wake Pet&lt;/strong>(唤醒)或 &lt;strong>Tuck Away Pet&lt;/strong>(收起)，适合鼠标操作用户；第三种是&lt;strong>快捷键命令菜单&lt;/strong>：按 &lt;strong>Cmd+K&lt;/strong>(macOS)或 &lt;strong>Ctrl+K&lt;/strong>(Windows / Linux)调出命令菜单后运行同名命令，效率最高。三种方式之间没有功能差异，可按场景灵活切换。&lt;/p>
&lt;h3 id="hatch-pet-skill创建自定义宠物的完整流程">
	&lt;a class="h-a" href="#hatch-pet-skill%e5%88%9b%e5%bb%ba%e8%87%aa%e5%ae%9a%e4%b9%89%e5%ae%a0%e7%89%a9%e7%9a%84%e5%ae%8c%e6%95%b4%e6%b5%81%e7%a8%8b">&lt;strong>hatch-pet skill：创建自定义宠物的完整流程&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>若内置宠物不能满足需求，可通过 &lt;strong>hatch-pet skill&lt;/strong> 创建自定义宠物，流程分三步。第一步，在 Codex 中运行安装命令：&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code>$skill-installer hatch-pet&lt;/code>&lt;/pre>&lt;/div>&lt;p>第二步，按 &lt;strong>Cmd+K&lt;/strong> 或 &lt;strong>Ctrl+K&lt;/strong> 打开命令菜单，选择 &lt;strong>Force Reload Skills&lt;/strong> 重载 skills，确保 Codex 识别到新装的 hatch-pet。第三步，调用 hatch-pet 创建宠物：&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code>$hatch-pet create a new pet inspired by my recent projects&lt;/code>&lt;/pre>&lt;/div>&lt;p>官方示例提示语**“inspired by my recent projects”**暗示 hatch-pet 能结合用户上下文生成宠物，而非简单从模板中挑选。&lt;/p>&lt;p>© 2026 LLM大模型邮报 · &lt;a href="https://blogs.llmposts.com/models/openai-codex-pets-launch/">阅读原文 →&lt;/a>&lt;/p>&lt;p>本文首发于 &lt;a href="https://blogs.llmposts.com/">LLM 大模型邮报&lt;/a>。&lt;/p></description></item><item><title>Anthropic 测试代号 Jupiter V1 模型 或将于 5 月 6 日大会公布</title><link>https://blogs.llmposts.com/models/anthropic-jupiter-v1-red-team-testing/</link><pubDate>Fri, 01 May 2026 04:54:57 +0000</pubDate><author>MISTY</author><guid>https://blogs.llmposts.com/models/anthropic-jupiter-v1-red-team-testing/</guid><description>&lt;p>据 TestingCatalog 报道，Anthropic 已对内部代号 &lt;strong>Claude Jupiter V1&lt;/strong> 的新构建启动红队测试。该代号疑似遵循 Anthropic 此前以行星名称作为预发布安全测试标签的惯例，时间点临近 &lt;strong>2026 年 5 月 6 日&lt;/strong>的 Code with Claude 开发者大会。这一观察构成了 &lt;strong>Claude Jupiter V1 红队测试&lt;/strong> 曝光与 Code with Claude 大会的临近信号，但是否对应实际产品发布仍需以 Anthropic 官方公告为准。&lt;/p>

 
 
&lt;figure class="fig fig--w-text" id="fig-1">
 
 
 
 
 
 
 
 
 
 
 
 &lt;span 
 class="img__frame img__frame--box-shadow" style="display: flex; justify-content: center; align-items: center; background-color: #e3e4e2">
 &lt;span 
 class="img__c" style="position: relative; width: 100%; height: 0; padding-bottom: 51.4000%;"> 
 
&lt;img alt="Anthropic Workbench 中疑似 Claude Jupiter V1 红队测试模型选项截图" class="img--ls img--lqip lazyload" data-optimumx="auto" data-sizes="auto" data-srcset="https://blogs.llmposts.com/Workbench-Claude-Platform-04-30-2026_06_52_PM_14033303413787801725-512x.webp 512w, https://blogs.llmposts.com/Workbench-Claude-Platform-04-30-2026_06_52_PM_14033303413787801725-569x.webp 569w, https://blogs.llmposts.com/Workbench-Claude-Platform-04-30-2026_06_52_PM_14033303413787801725-633x.webp 633w, https://blogs.llmposts.com/Workbench-Claude-Platform-04-30-2026_06_52_PM_14033303413787801725-700x.webp 700w, https://blogs.llmposts.com/Workbench-Claude-Platform-04-30-2026_06_52_PM_14033303413787801725-703x.webp 703w, https://blogs.llmposts.com/Workbench-Claude-Platform-04-30-2026_06_52_PM_14033303413787801725-781x.webp 781w, https://blogs.llmposts.com/Workbench-Claude-Platform-04-30-2026_06_52_PM_14033303413787801725-869x.webp 869w, https://blogs.llmposts.com/Workbench-Claude-Platform-04-30-2026_06_52_PM_14033303413787801725-965x.webp 965w, https://blogs.llmposts.com/Workbench-Claude-Platform-04-30-2026_06_52_PM_14033303413787801725-1073x.webp 1073w, https://blogs.llmposts.com/Workbench-Claude-Platform-04-30-2026_06_52_PM_14033303413787801725-1193x.webp 1193w, https://blogs.llmposts.com/Workbench-Claude-Platform-04-30-2026_06_52_PM_14033303413787801725-1325x.webp 1325w, https://blogs.llmposts.com/Workbench-Claude-Platform-04-30-2026_06_52_PM_14033303413787801725-1473x.webp 1473w, https://blogs.llmposts.com/Workbench-Claude-Platform-04-30-2026_06_52_PM_14033303413787801725-1637x.webp 1637w, https://blogs.llmposts.com/Workbench-Claude-Platform-04-30-2026_06_52_PM_14033303413787801725-1820x.webp 1820w" height="360" 
src="data:image/webp;base64,UklGRrABAABXRUJQVlA4IKQBAADQFgCdASrpAHgAP0WgwFUoJimkpRNqwTYoiek7QFvAiwFsr0fBJQFUG+yvuZqJdSAfaB3YAANSEEsbDlwQeYALxXlntMN0G/IzCNyXL0bOYCxIxUhjoKz3Py5SjgYc51wv7ankd2Xz8vI2YkMQFymvcp14oLe4IBNayErmiCtgwrV1cZXogFMxKCxByjSfvnTNIBTK5NqKoQtKmxwrRZirYjK/LIzV4uDXmruOFbQ3DoUKplySckdmgDm2hsb6AAD+6rgLn/Lmi65N26HswryLDHoqmvNUaB5+zNocvQ7ZZqBJIJTvjuqPRPqScVpMGQdOBfvCNR+OXTStC52QRppmXEBIBGngGAmpXmJPdu/UYCA4CosMEqrZ34NxntfFh+BrA7CZYlFRpofH1GhdEvNrqvoLRzztWZMBSvAMXBB9f0nBQpPCfe/XdrAneYrOqoRYDyrYjTboJJnpXz3yRS2Q4JnX4wwbkRpDdo/Ppn74BY5aCoEVnnONeGLwcOQKQhyF54DdOFRyODeiiDAfRbIK+yz5E9/dgQAbLwcpg9q9QAAAAAA=" width="700">&lt;/span>

&lt;/span>

 
&lt;/figure>&lt;h3 id="内部代号命名规则与测试性质">
	&lt;a class="h-a" href="#%e5%86%85%e9%83%a8%e4%bb%a3%e5%8f%b7%e5%91%bd%e5%90%8d%e8%a7%84%e5%88%99%e4%b8%8e%e6%b5%8b%e8%af%95%e6%80%a7%e8%b4%a8">&lt;strong>Internal Codename Conventions and the Nature of the Testing&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>Jupiter V1 is only an internal testing identifier; according to reports, &lt;strong>Jupiter-v1-p&lt;/strong> will not appear in public API strings or in the product UI. Anthropic has previously used planet names (such as &lt;strong>Neptune&lt;/strong>) to mark the safety-testing phase ahead of a launch, consistent with the &lt;strong>Codename&lt;/strong> convention the company disclosed early on. Red-team testing itself is a routine step under Anthropic's Responsible Scaling Policy, which requires jailbreak probing and Constitutional Classifier stress tests before any frontier-class model is deployed, but it does not constitute official confirmation that a new model will be released.&lt;/p>
&lt;h3 id="时间线对标与历史模式">
	&lt;a class="h-a" href="#%e6%97%b6%e9%97%b4%e7%ba%bf%e5%af%b9%e6%a0%87%e4%b8%8e%e5%8e%86%e5%8f%b2%e6%a8%a1%e5%bc%8f">&lt;strong>Timeline Alignment and Historical Patterns&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>Anthropic will hold its Code with Claude developer conference in San Francisco on &lt;strong>May 6, 2026&lt;/strong>, with the London and Tokyo events scheduled later. According to &lt;a class="link link--text" href="https://twitter.com/AiBattle_/status/2037649811478896829" rel="external">AiBattle's original post on X about Code with Claude&lt;/a>, the conference date has been confirmed as May 6, 2026. For reference, the &lt;strong>Neptune&lt;/strong>-codename red-team testing in 2025 wrapped up its safety phase in mid-May of that year and was followed by the release of the &lt;strong>Claude 4&lt;/strong> model family; Jupiter V1's current testing cadence resembles that pattern. It should be stressed, however, that a planet codename lining up with the historical timeline does not constitute official confirmation that a new model will ship.&lt;/p>
&lt;h3 id="当前产品阵容留下的更新空间">
	&lt;a class="h-a" href="#%e5%bd%93%e5%89%8d%e4%ba%a7%e5%93%81%e9%98%b5%e5%ae%b9%e7%95%99%e4%b8%8b%e7%9a%84%e6%9b%b4%e6%96%b0%e7%a9%ba%e9%97%b4">&lt;strong>Room for Updates in the Current Product Lineup&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>Anthropic's current public lineup is led by &lt;strong>Opus 4.7&lt;/strong> as the flagship, while &lt;strong>Sonnet 4.7&lt;/strong> and &lt;strong>Haiku 4.7&lt;/strong> have not yet shipped, leaving the mid-size and small-model slots open. Given that gap, outside observers speculate that the Code with Claude conference could bring one of three outcomes: an extension of the 4.7 series to the mid-size and small tiers, a next-generation model built on Mythos (an earlier-leaked base-architecture codename mentioned in prior reports), or a transitional update somewhere in between. The actual path still awaits an official announcement.&lt;/p>
&lt;h3 id="后续发布渠道的合理范围">
	&lt;a class="h-a" href="#%e5%90%8e%e7%bb%ad%e5%8f%91%e5%b8%83%e6%b8%a0%e9%81%93%e7%9a%84%e5%90%88%e7%90%86%e8%8c%83%e5%9b%b4">&lt;strong>Plausible Release Channels&lt;/strong>&lt;/a>
&lt;/h3>&lt;p>If Jupiter V1 is ultimately released to the public, then by analogy with the earlier rollout path of &lt;strong>Opus 4.7&lt;/strong>, the new model would likely launch simultaneously on the &lt;strong>Anthropic Platform&lt;/strong>, in &lt;strong>Claude Code&lt;/strong>, and in the &lt;strong>Claude&lt;/strong> consumer apps. This is a reasonable inference from historical patterns, not an official timetable. Until &lt;strong>May 6&lt;/strong>, readers should treat all related information as unverified leaks.&lt;/p>
&lt;p>The surfacing of Claude Jupiter V1 red-team testing and the approach of the Code with Claude conference together form an observable set of timeline signals, but whether they correspond to an actual product launch still depends on Anthropic's official announcements. For the Chinese-speaking developer community, what matters is not only whether a new model ships, but also whether the &lt;strong>Sonnet&lt;/strong> and &lt;strong>Haiku&lt;/strong> mid-size and small models are refreshed in the same round, since that will directly affect the cost structure of &lt;strong>Claude Code&lt;/strong>, the MCP ecosystem, and local integration setups.&lt;/p>&lt;p>© 2026 LLM大模型邮报 · &lt;a href="https://blogs.llmposts.com/models/anthropic-jupiter-v1-red-team-testing/">Read the original →&lt;/a>&lt;/p>&lt;p>This article was first published by &lt;a href="https://blogs.llmposts.com/">LLM 大模型邮报&lt;/a>.&lt;/p></description></item></channel></rss>