
Tech Twitter Highlights - February 7, 2026


Today in tech: With Anthropic's Claude Opus 4.6 delivering a marked jump in agentic capability, the software landscape is undergoing a renaissance, and Claude-authored code now accounts for 4% of all public GitHub commits. The wave is propelled by staggering capital expenditure from the tech giants and a semiconductor industry on track to break the $1 trillion mark in 2026. As startups push into AI-driven sandbox games and agentic alternatives to SaaS, regulatory scrutiny is tightening in parallel: the EU has charged TikTok over its addictive design features, seeking a balance between rapid innovation and necessary digital oversight.


1. bibryam (Group Score: 101.0 | Individual: 51.9)

Cluster: 4 tweets | Engagement: 9412 (Avg: 508) | Type: Tech

RT @claudeai: Introducing Claude Opus 4.6. Our smartest model got an upgrade.

Opus 4.6 plans more carefully, sustains agentic tasks for l…

See 3 related tweets

  • @bcherny: RT @claudeai: Announcing Built with Opus 4.6: a Claude Code virtual hackathon.

Join the Claude Code...

  • @bcherny: RT @AnthropicAI: New Engineering blog: We tasked Opus 4.6 using agent teams to build a C compiler. T...
  • @rohanpaul_ai: RT @rohanpaul_ai: Anthropic just released Claude Opus 4.6, with major gains across coding, knowledge...

2. rohanpaul_ai (Group Score: 76.1 | Individual: 32.7)

Cluster: 3 tweets | Engagement: 39 (Avg: 62) | Type: Tech

The chip industry is on track to cross $1T in annual revenue in 2026, pushed by AI data center spending and chips spreading into more products and infrastructure.

Semiconductor Industry Association puts 2025 global chip sales at $791.7B and expects about a 26% jump in 2026, which gets the market to the $1T range.

The old growth pattern leaned heavily on phones and PCs and swung with consumer upgrade cycles, so even strong years rarely lifted every chip category at once.

AI changes the mix because training and running modern LLMs needs lots of “advanced computing” chips and lots of memory chips, and both rise together when new data centers are built.

In 2025, SIA says advanced computing chips grew 39.9% to $301.9B and memory grew 34.8% to $223.1B, which is the core of the current surge.

For builders, the constraint is less “can GPUs exist” and more packaging, power delivery, and memory supply keeping pace with demand spikes.
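As a quick sanity check on the SIA figures above (back-of-envelope arithmetic only, not an official projection):

```python
# Back-of-envelope check: does a ~26% jump from $791.7B reach the $1T range?
sales_2025 = 791.7   # SIA estimate of 2025 global chip sales, in $B
growth_2026 = 0.26   # expected 2026 growth

sales_2026 = sales_2025 * (1 + growth_2026)
print(f"Implied 2026 sales: ${sales_2026:.1f}B")  # ~$997.5B, i.e. the $1T range
```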


reuters.com/business/global-chip-sales-expected-hit-1-trillion-this-year-industry-group-says-2026-02-06/

See 2 related tweets

  • @business: The semiconductor industry will reach $1 trillion in revenue this year for the first time ever, fuel...

  • @rohanpaul_ai: Big tech is gearing up to spend huge on AI in 2026.

  • Amazon leading at $200B,

  • Google $180B,

  • ...


3. aakashgupta (Group Score: 66.8 | Individual: 33.8)

Cluster: 2 tweets | Engagement: 150 (Avg: 344) | Type: Tech

Apple spent $12.7 billion on capex in fiscal 2025. Alphabet just guided $92 billion. Amazon raised to $125 billion. Meta is projecting $115 to $135 billion. Microsoft burned $37.5 billion in a single quarter, and the stock dropped 12% in its worst day in six years.

Add those up. The Big Four AI spenders are collectively committing over $500 billion in 2026 capex. The market looked at that number and panicked. The S&P software index lost $830 billion in market value in six days. Jefferies traders started calling it the "SaaSpocalypse." Apollo cut its lending exposure to software companies nearly in half.
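Summing the guidance quoted above lands right around that number (with one labeled assumption: Microsoft's annual figure is its $37.5B quarter run-rated over four quarters):

```python
# Rough 2026 capex totals for the Big Four figures quoted above, in $B.
# Microsoft's annual number is an assumption: $37.5B/quarter run-rated x4.
msft_annual = 37.5 * 4                  # 150.0
low = 92 + 125 + 115 + msft_annual      # Meta's low-end guidance
high = 92 + 125 + 135 + msft_annual     # Meta's high-end guidance
print(f"Combined 2026 capex: ${low:.0f}B-${high:.0f}B")  # ~$482B-$502B
```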

And while all of this was happening, Apple posted $143.8 billion in Q1 revenue, beat estimates by $3 billion, grew EPS 19% year over year, and returned $32 billion to shareholders. iPhone 17 demand was so strong Tim Cook said it was "off the chart" and supply was constrained on multiple models.

The stock is up 6% this month while the Nasdaq is down over 3%. Apple just passed Alphabet to reclaim the #2 market cap spot at $4 trillion.

This tells you the market has quietly flipped its valuation framework. For two years, AI capex was rewarded. Spend more, stock goes up. Now investors are doing the math on whether $500 billion in annual infrastructure spending actually converts to proportional revenue. The answer increasingly looks like “not yet,” and the stocks are repricing accordingly.

Apple’s move is the mirror image. Hedge funds like Thiel Macro started rotating from Nvidia into Apple in late 2025, treating it as a defensive cash flow play. The logic: Apple gets to ride AI adoption through its 2.2 billion active devices without burning $100 billion a year on data centers. It buys compute from partners, runs a hybrid cloud model, and keeps gross margins fat.

Tim Cook’s actual strategy is to let everyone else build the infrastructure, then distribute AI through the install base at near-zero marginal cost. The market spent 18 months calling this “boring.” Now it calls it “the only Mag Seven stock not lighting cash on fire.”

The funniest part of this meme is the punchline. “Still no strategy” is the strategy. And the stock price is the receipt.

See 1 related tweet

  • @aakashgupta: Amazon just committed to spending $548 million per day on capex in 2026.

The math is wild. $200 bil...
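The per-day figure in that related tweet is simple division of the ~$200B annual commitment (a quick check, assuming spend is spread evenly across 365 days):

```python
# $200B of annual capex spread evenly over the year.
annual_capex = 200e9
per_day = annual_capex / 365
print(f"~${per_day / 1e6:.0f}M per day")  # ~$548M, matching the tweet
```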


4. bindureddy (Group Score: 60.3 | Individual: 39.9)

Cluster: 2 tweets | Engagement: 807 (Avg: 303) | Type: Tech

🚨 CANCEL YOUR SaaS SUBSCRIPTIONS

We will soon be launching agentic templates where you can install your favorite SaaS products in one click!

An AI agent will spin up and vibe code a complex software system

Initial launch targets

  • CRM for sales and leads
  • contract management
  • campaign management
  • bug tracker
  • internal wiki

The best part - you can customize these apps any which way ❤️

See 1 related tweet

  • @abacusai: RT @bindureddy: 🚨 CANCEL YOUR SaaS SUBSCRIPTIONS

We will soon be launching agentic templates wher...


5. jerryjliu0 (Group Score: 58.0 | Individual: 30.6)

Cluster: 2 tweets | Engagement: 15 (Avg: 44) | Type: Tech

Extracting structured outputs with LLMs is easy. But doing large-scale extraction with precise citations and bounding boxes back to the source documents is way harder.

With our latest release in LlamaExtract, we extract citation bounding boxes along with every single key and value within a document.

You can see this in the UI. Hover over any k:v pair and you’ll be able to see the corresponding highlights in the source doc.

If you’re a human reviewing a million docs (resumes, IDs, invoices, claims, contracts), this will help you 5x your ability to verify values and make sure things are correct.

Check out these new extraction upgrades in LlamaCloud: https://t.co/XYZmx5TFz8
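For a sense of what "citations with bounding boxes on every key and value" implies for downstream code, here is a hypothetical result shape; the field names and structure are illustrative assumptions, not LlamaExtract's actual schema:

```python
# Hypothetical extraction result with per-field citation bounding boxes.
# All field names here are illustrative, not LlamaExtract's real schema.
result = {
    "fields": {
        "invoice_total": {
            "value": "1,250.00",
            "citation": {
                "page": 2,
                "bbox": (72.0, 540.5, 180.0, 556.0),  # (x0, y0, x1, y1)
            },
        },
    },
}

# A review UI can use the bbox to highlight the cited region on hover:
field = result["fields"]["invoice_total"]
print(f"{field['value']} (page {field['citation']['page']})")
```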

See 1 related tweet

  • @llama_index: LlamaExtract citations just got an upgrade: we now show you exactly where extracted data comes from ...

6. awnihannun (Group Score: 56.0 | Individual: 56.0)

Cluster: 1 tweet | Engagement: 2253 (Avg: 89) | Type: Tech

Introducing Analogue 3D - Prototype Limited Editions.

Available in highly limited quantities. On sale Feb 9th 8am PST. Shipping in 24-48hrs.

Five colors were officially prototyped for the original Nintendo 64. Unreleased. Until now. They were real. They were manufactured. They were never released.

Analogue 3D: Prototype Editions finally release these lost colors as limited editions. History, finished. Decades later.

Learn more on the Analogue website.


7. tdinh_me (Group Score: 49.2 | Individual: 49.2)

Cluster: 1 tweet | Engagement: 956 (Avg: 181) | Type: Tech

How to start becoming a solo entrepreneur:

Build small side projects, then finish and polish them (aim for under 2 weeks each), keeping them well-scoped, functional small products.

Then move on to creating more small products, gradually increasing complexity and adding payment options so people can pay if they want to. Still, each should be finished with a proper website, checkout flow, etc.

These products don't have to be extremely useful or solve a painful problem, they're just for training your "building muscle". Naturally, you'll start creating more useful products later on.

Now learn at least some marketing channels that suit your skills and interests: SEO, cold email, paid ads, influencers, etc.

Then practice one week building, one week marketing, and repeat.

If everything goes well, you'll unlock a profitable business in about 1-2 years.


8. aakashgupta (Group Score: 48.5 | Individual: 48.5)

Cluster: 1 tweet | Engagement: 1559 (Avg: 344) | Type: Tech

Sounds incredible until you read the fine print. The compiler generates less efficient code than GCC with all optimizations disabled. It doesn’t have its own assembler or linker. It can’t produce a 16-bit x86 code generator. And Carlini himself says it has “nearly reached the limits of Opus’s abilities.” New features and bugfixes kept breaking existing functionality.

So what did $20,000 and two weeks actually buy? A compiler that passes 99% of GCC’s torture tests but can’t match the output quality of a tool that’s had 37 years of human engineering. That’s the constraint nobody’s pricing in.

The real story is in the cost curve, not the capability demo. $20,000 for 100,000 lines means $0.20 per line of generated code. A senior compiler engineer costs roughly $150/hour. At maybe 50 polished lines per hour for something this complex, that's $3/line. AI just did it at 15x cheaper, and it will only get cheaper from here.
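The per-line arithmetic above, worked out explicitly (the 50 polished lines/hour human throughput is the thread's assumption, not a measured figure):

```python
# Cost-per-line comparison from the thread.
ai_cost_per_line = 20_000 / 100_000  # $20K for 100K generated lines = $0.20/line
human_cost_per_line = 150 / 50       # $150/hr at ~50 polished lines/hr = $3/line
ratio = round(human_cost_per_line / ai_cost_per_line)
print(ai_cost_per_line, human_cost_per_line, f"{ratio}x cheaper")
```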

But the code isn’t equivalent. The AI version needs a human to finish the assembler, fix the linker, optimize the output, and prevent regressions. Those are the hardest 20% of the problem, and they represent 80% of the engineering value. Anthropic built the demo. Shipping the product still requires humans.

This tells you exactly where we are in the autonomous software timeline. AI can now produce impressive first drafts of complex systems at trivial cost. Turning those drafts into production software still requires the judgment that costs $300K+ per year in compiler engineer salary. The gap between “compiles the Linux kernel” and “replaces GCC” is measured in decades of accumulated engineering wisdom that no model has internalized yet.

The companies that understand this will use agent teams to generate the 80% and hire engineers to finish the 20%. The companies that don’t will ship $20,000 compilers that produce slower code than a free tool from 1987.


9. Reuters (Group Score: 48.3 | Individual: 18.5)

Cluster: 3 tweets | Engagement: 146 (Avg: 84) | Type: Tech

TikTok was charged with breaching EU online content rules over what the bloc's regulator said were its addictive features and was told to change the design of its app https://t.co/trEVbGNLyu https://t.co/ks7MjlLRWu

See 2 related tweets

  • @Reuters: TikTok was charged with breaching EU online content rules over what the bloc's regulator said were i...
  • @ReutersBiz: TikTok was charged with breaching EU online content rules over what the bloc's regulator said were i...

10. ycombinator (Group Score: 48.2 | Individual: 48.2)

Cluster: 1 tweet | Engagement: 911 (Avg: 128) | Type: Tech

Pax Historia is the first AI-powered sandbox game.

Players create worlds, publish them to the community, and play to answer all of their ‘what would have happened if…?’ questions.

Congrats on the launch @Eli_BullockPapa and @Ryzhang22!

https://t.co/VQiYOBcEDR https://t.co/L16Q7bNlpq


11. burkov (Group Score: 48.2 | Individual: 48.2)

Cluster: 1 tweet | Engagement: 3282 (Avg: 312) | Type: Tech

It's funny that Claude says something like "This is a major refactor: the first stage will take 2-3 days, the second stage two days, and the third stage about a week" while we both know all three stages will be generated in the next 30 minutes.


12. SakanaAILabs (Group Score: 47.5 | Individual: 47.5)

Cluster: 1 tweet | Engagement: 653 (Avg: 63) | Type: Tech

【Sakana AI Beta Tester Recruitment 🐟🐠】

We are looking for testers to help us evaluate an AI service currently in development.

It is still an early-stage prototype, but we hope to improve its quality based on your feedback.

▼ Apply here: https://t.co/QFmNGHVTVB

※ Participants will also receive original merchandise as a token of thanks 🐟


13. steipete (Group Score: 46.5 | Individual: 15.9)

Cluster: 3 tweets | Engagement: 888 (Avg: 393) | Type: Tech

RT @dylan522p: 4% of GitHub public commits are being authored by Claude Code right now. At the current trajectory, we believe that Claude C…

See 2 related tweets

  • @rohanpaul_ai: SemiAnalysis data shows Claude Code now accounts for roughly 4% of public GitHub commits, up 2x in a...
  • @kimmonismus: Claude Code will be 20%+ of all daily commits to GitHub by the end of 2026.

2027 is getting even m...


14. gdb (Group Score: 45.8 | Individual: 45.8)

Cluster: 1 tweet | Engagement: 9079 (Avg: 2205) | Type: Tech

Software development is undergoing a renaissance in front of our eyes.

If you haven't used the tools recently, you likely are underestimating what you're missing. Since December, there's been a step function improvement in what tools like Codex can do. Some great engineers at OpenAI yesterday told me that their job has fundamentally changed since December. Prior to then, they could use Codex for unit tests; now it writes essentially all the code and does a great deal of their operations and debugging. Not everyone has yet made that leap, but it's usually because of factors besides the capability of the model.

Every company faces the same opportunity now, and navigating it well — just like with cloud computing or the Internet — requires careful thought. This post shares how OpenAI is currently approaching retooling our teams towards agentic software development. We're still learning and iterating, but here's how we're thinking about it right now:

As a first step, by March 31st, we're aiming that:

  1. For any technical task, the tool of first resort for humans is interacting with an agent rather than using an editor or terminal.
  2. The default way humans utilize agents is explicitly evaluated as safe, but also productive enough that most workflows do not need additional permissions.

In order to get there, here's what we recommended to the team a few weeks ago:

  1. Take the time to try out the tools. The tools do sell themselves — many people have had amazing experiences with 5.2 in Codex, after having churned from codex web a few months ago. But many people are also so busy they haven't had a chance to try Codex yet, or got stuck thinking "is there any way it could do X" rather than just trying.
  • Designate an "agents captain" for your team — the primary person responsible for thinking about how agents can be brought into the team's workflow.
  • Share experiences or questions in a few designated internal channels.
  • Take a day for a company-wide Codex hackathon.
  2. Create skills and AGENTS.md files.
  • Create and maintain an AGENTS.md for any project you work on; update the AGENTS.md whenever the agent does something wrong or struggles with a task.
  • Write skills for anything you get Codex to do, and commit them to the skills directory in a shared repository.
  3. Inventory and make accessible any internal tools.
  • Maintain a list of tools that your team relies on, and make sure someone takes point on making each one agent-accessible (such as via a CLI or MCP server).
  4. Structure codebases to be agent-first. With the models changing so fast, this is still somewhat untrodden ground and will require some exploration.
  • Write tests that are quick to run, and create high-quality interfaces between components.
  5. Say no to slop. Managing AI-generated code at scale is an emerging problem and will require new processes and conventions to keep code quality high.
  • Ensure that some human is accountable for any code that gets merged. As a code reviewer, maintain at least the same bar as you would for human-written code, and make sure the author understands what they're submitting.
  6. Work on basic infra. There's a lot of room for everyone to build basic infrastructure, guided by internal user feedback. The core tools are getting a lot better and more usable, but a lot of infrastructure currently goes around the tools: observability, tracking not just the committed code but the agent trajectories that led to it, and central management of the tools agents are able to use.
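For teams starting from scratch, a minimal AGENTS.md might look like the sketch below. Every project name, command, and convention in it is an illustrative assumption, not an official OpenAI template:

```markdown
# AGENTS.md (illustrative sketch)

## Project overview
Payments service; Python 3.12; entry point `src/main.py`.

## Commands
- Run tests: `pytest -q` (keep under 60s; agents run this before every commit)
- Lint: `ruff check src/`

## Conventions
- Never edit generated files under `src/proto/`.
- When a task fails twice, stop and summarize blockers instead of retrying.

## Known pitfalls
- The sandbox has no network access; mock external API calls in tests.
```

The file is updated whenever the agent stumbles, so it accumulates the project-specific corrections that would otherwise be repeated in every prompt.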

Overall, adopting tools like Codex is not just a technical but also a deep cultural change, with a lot of downstream implications to figure out. We encourage every manager to drive this with their team, and to think through other action items — for example, per item 5 above, what else can prevent a lot of "functionally-correct but poorly-maintainable code" from creeping into codebases.


15. addyosmani (Group Score: 44.3 | Individual: 33.2)

Cluster: 2 tweets | Engagement: 420 (Avg: 280) | Type: Tech

Every team shipping AI-assisted code at scale needs new norms around quality gates, observability, and ownership.

Regardless of which model or toolchain you use, this is one of the most practical frameworks I've seen for adopting agentic development.

See 1 related tweet

  • @addyosmani: Regardless of which model or toolchain you use, this is a practical framework for adopting agentic d...

16. brunoborges (Group Score: 41.5 | Individual: 41.5)

Cluster: 1 tweet | Engagement: 734 (Avg: 43) | Type: Tech

RT @ianmiles: Marc Andreessen: AI coding doesn’t eliminate programmers — it redefines them. The job is no longer typing code line by line,…


17. ibuildthecloud (Group Score: 41.3 | Individual: 41.3)

Cluster: 1 tweet | Engagement: 498 (Avg: 30) | Type: Tech

RT @mitchellh: Wrote up about my personal journey from AI skeptic to someone who finds a lot of value in it daily. My goal is to share a mo…


18. dejavucoder (Group Score: 40.1 | Individual: 40.1)

Cluster: 1 tweet | Engagement: 1152 (Avg: 120) | Type: Tech

RT @esrtweet: If you are a software engineer "experiencing some degree of mental health crisis", now hear this, because I've been coding fo…


19. omarsar0 (Group Score: 40.1 | Individual: 26.8)

Cluster: 2 tweets | Engagement: 203 (Avg: 158) | Type: Tech

NEW research on improving memory for AI Agents.

(bookmark it)

As context windows scale to millions of tokens, the bottleneck shifts from raw capacity to cognitive control. Knowing what you know, knowing what's missing, and knowing when to stop matters more than processing every token.

Longer context windows don't guarantee better reasoning, largely because the default way devs handle ultra-long documents today is still either to expand the context window or to compress everything into a single pass.

But when decisive evidence is sparse and scattered across a million tokens, passive memory strategies silently discard the bridging facts needed for multi-hop reasoning.

This new research introduces InfMem, a bounded-memory agent that applies System-2-style cognitive control to long-document question answering through a structured PRETHINK–RETRIEVE–WRITE protocol.

Instead of passively compressing each segment as it streams through, InfMem actively monitors whether its memory is sufficient to answer the question. Is the current evidence enough? What's missing? Where in the document should I look?

PRETHINK acts as a cognitive controller, deciding whether to stop or retrieve more evidence. When evidence gaps exist, it synthesizes a targeted retrieval query and fetches relevant passages from anywhere in the document, including earlier sections it already passed. WRITE then performs joint compression, integrating retrieved evidence with the current segment into a bounded memory under a fixed budget.
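As a toy illustration, the PRETHINK-RETRIEVE-WRITE loop described above can be sketched as follows. Every function body here is a naive stand-in (string matching, crude truncation), not the paper's learned components:

```python
# Toy sketch of InfMem's PRETHINK-RETRIEVE-WRITE control loop.
# Each function body is a naive placeholder for a learned component.

def prethink(memory: str, question: str):
    """Cognitive controller: stop if memory suffices, else synthesize a query."""
    if question.lower() in memory.lower():
        return "stop", None
    return "retrieve", question  # stand-in for targeted query synthesis

def retrieve(segments: list[str], query: str) -> list[str]:
    """Fetch relevant passages from anywhere in the document, even earlier ones."""
    return [s for s in segments if query.lower() in s.lower()]

def write(memory: str, segment: str, evidence: list[str], budget: int = 200) -> str:
    """Jointly compress current segment + evidence into a bounded memory."""
    merged = " ".join([memory, segment, *evidence]).strip()
    return merged[-budget:]  # crude truncation stands in for learned compression

def answer(segments: list[str], question: str) -> str:
    memory = ""
    for segment in segments:
        action, query = prethink(memory, question)
        if action == "stop":  # adaptive early stopping
            break
        evidence = retrieve(segments, query)
        memory = write(memory, segment, evidence)
    return memory

doc = ["The treaty was signed in 1648.", "Unrelated filler.", "More filler."]
print(answer(doc, "treaty"))
```

The key contrast with passive streaming compression is in `answer`: the controller can stop early once the memory holds enough evidence, and `retrieve` can reach back to sections the stream has already passed.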

The training recipe uses an SFT warmup to teach protocol mechanics through distillation from Qwen3-32B, then reinforcement learning aligns retrieval, writing, and stopping decisions with end-task correctness using outcome-based rewards and early-stop shaping.

On ultra-long QA benchmarks from 32k to 1M tokens, InfMem outperforms MemAgent by +10.17, +11.84, and +8.23 average absolute accuracy points on Qwen3-1.7B, Qwen3-4B, and Qwen2.5-7B, respectively.

A 4B parameter InfMem agent maintains consistent accuracy up to 1M tokens, where standard baselines like YaRN collapse to single-digit performance. Inference latency drops by 3.9x on average (up to 5.1x) via adaptive early stopping.

These gains also transfer to LongBench QA, where InfMem+RL achieves up to +31.38 absolute improvement on individual tasks over the YaRN baseline.

Paper: https://t.co/4wxeCua7a7

Learn to build effective AI agents in our academy: https://t.co/1e8RZKs4uX

See 1 related tweet

  • @dair_ai: RT @omarsar0: NEW research on improving memory for AI Agents.

(bookmark it)

As context windows sca...


20. a16z (Group Score: 39.8 | Individual: 25.0)

Cluster: 2 tweets | Engagement: 926 (Avg: 428) | Type: Tech

ElevenLabs started as a weekend project.

They crossed $330M ARR in 2025 as they build the voice interface of the future.

This is the ElevenLabs story. An a16z Original. https://t.co/rVE1cPyORB

See 1 related tweets

  • @elevenlabsio: RT @a16z: ElevenLabs started as a weekend project.

They crossed $330M ARR in 2025 as they build the...