Tech Twitter Picks - 2026-02-09

Today's tech headlines: With the rise of "vibe coding" and the arrival of autonomous agents like OpenClaw and Claude Code, software engineering is being redefined, and the industry increasingly favors generalists who can apply AI tools across domains. On the technical front, Alibaba's Qwen 3.5 introduces advanced multimodal capabilities, Amazon is spending $11 billion on a massive data center buildout, and Boston Dynamics' Atlas robot has demonstrated remarkable agility. Meanwhile, India has updated its rules for deep-tech startups to spur innovation, and lawsuits over AI training data and copyright continue.


1. burkov (Group Score: 56.6 | Individual: 56.6)

Cluster: 1 tweets | Engagement: 2473 (Avg: 299) | Type: Tech

I didn't want to comment on OpenClaw. Usually, when there's so much noise in the media, it's some ordinary stuff just hyped well.

So I took time to learn how it works thanks to open source.

I was right. OpenClaw is 2% of ordinary stuff and 98% of hype.

To put it very shortly, in case you were wondering, there are two things in it:

  1. You can chat with an LLM via a text messenger. Not anything new.

  2. The LLM can use tools that run on your computer. Not anything new either.

Most of the "magic" mentioned in the media is about its ability to use the browser.

But it's not its ability. It's Playwright's ability.

Playwright is a library made by Microsoft which allows you to programmatically run a browser. It uses a built-in vision model made by Microsoft that converts the browser's screen into a textual description for LLMs.

Again, Microsoft built Playwright for exactly what OpenClaw is using it for.

So, OpenClaw's typical workflow:

  1. The user types in a text messenger "Buy me a flashlight on Amazon."

  2. OpenClaw blindly dispatches this message to an LLM which has access to some tools, including Playwright.

  3. The LLM, trained not by OpenClaw folks, decides that Playwright is the right tool (of course it is) and Amazon is the URL to navigate to.

  4. Playwright, built not by OpenClaw folks, runs the browser, which navigates to Amazon, and returns the textual description of what Amazon's home page looks like.

  5. OpenClaw blindly returns this textual description to the LLM.

  6. The LLM (again without any help from OpenClaw) decides that one should type "flashlight" into the search field and press Search, so it calls the Playwright tool with the search parameters.

  7. OpenClaw calls Playwright because the LLM told it to and types "flashlight" and then presses Search (it's all part of what Playwright does out of the box).

...

At the end of this LLM-controlled scenario, the order is submitted. OpenClaw just listened to what the LLM told it to do via tool calls.
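The loop the thread describes, an agent that blindly shuttles messages between the user and an LLM and executes whatever tool call comes back, fits in a few lines. Everything below is a stub for illustration: the function names, message format, and scripted "LLM" are invented, not OpenClaw's actual code.

```python
# Sketch of the dispatch loop described above. The agent holds no logic of
# its own: it forwards messages to the LLM and executes the tool the LLM
# names, feeding the result back. Both the LLM and the browser tool are
# fakes so the sketch is self-contained.

def browser_tool(action: str, arg: str) -> str:
    """Stand-in for Playwright: pretend to act, return a page description."""
    return f"[page after {action}({arg!r})]"

def fake_llm(messages: list[dict]) -> dict:
    """Stand-in for the hosted LLM: scripted tool calls, then a final answer."""
    steps = sum(1 for m in messages if m["role"] == "tool")
    script = [
        {"tool": ("navigate", "amazon.com")},
        {"tool": ("search", "flashlight")},
        {"answer": "Order submitted."},
    ]
    return script[min(steps, len(script) - 1)]

def dispatch_loop(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]
    while True:
        decision = fake_llm(messages)
        if "answer" in decision:                 # the LLM says we're done
            return decision["answer"]
        action, arg = decision["tool"]           # the LLM picked the tool
        observation = browser_tool(action, arg)  # the agent just executes it
        messages.append({"role": "tool", "content": observation})

print(dispatch_loop("Buy me a flashlight on Amazon."))  # Order submitted.
```

The point of the sketch matches the thread's: all the decisions live in the model and the tool, not in the plumbing between them.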

I tried hard, and I haven't found anything else worth mentioning in the source code. There's also a part that keeps "memories" of past conversations, but it's all basic stuff. These memories are stored in text files, and grep (driven by LLMs trained to use grep, and trained not by OpenClaw folks) is used to search them.
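The memory scheme described here, append notes to a plain text file and search them by keyword, is simple enough to sketch. This is a hypothetical illustration with pure Python standing in for grep; the file layout is invented, not OpenClaw's.

```python
# Sketch of the file-plus-grep memory pattern: memories are lines in a
# text file, and "recall" is a case-insensitive substring search, i.e.
# what `grep -i keyword memories.txt` would do.
import os
import tempfile

def remember(path: str, note: str) -> None:
    with open(path, "a", encoding="utf-8") as f:
        f.write(note + "\n")

def recall(path: str, keyword: str) -> list[str]:
    with open(path, encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f
                if keyword.lower() in line.lower()]

path = os.path.join(tempfile.mkdtemp(), "memories.txt")
remember(path, "User prefers rechargeable flashlights")
remember(path, "User's timezone is CET")
print(recall(path, "flashlight"))  # ['User prefers rechargeable flashlights']
```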

It's a nice hobby project, just like Cursor or Perplexity are nice hobby projects, but there's nothing there to look for, except for the hype and 2% of unoriginal plumbing code.


2. rohanpaul_ai (Group Score: 48.3 | Individual: 48.3)

Cluster: 1 tweets | Engagement: 1587 (Avg: 92) | Type: Tech

Marc Andreessen explains why the future belongs to generalists in the AI era. 🎯

Founders will need skills across 6–8 fields. Deep expertise still matters, but broad knowledge plus AI tools will be more valuable in most areas.

Top CEOs already operate this way https://t.co/VftoYqW0kn


3. TechCrunch (Group Score: 44.0 | Individual: 44.0)

Cluster: 1 tweets | Engagement: 472 (Avg: 49) | Type: Tech

India has changed its startup rules for deep tech https://t.co/dXpVasURM8


4. rohanpaul_ai (Group Score: 41.6 | Individual: 41.6)

Cluster: 1 tweets | Engagement: 1794 (Avg: 92) | Type: Tech

RT @rohanpaul_ai: This is Amazon’s new $11B campus in St. Joseph County, Indiana, for AI data center buildout. Projected at 2.2 GW power d…


5. badlogicgames (Group Score: 41.5 | Individual: 41.5)

Cluster: 1 tweets | Engagement: 896 (Avg: 58) | Type: Tech

RT @Nick_Davidov: Asked Claude Cowork to organize my wife's desktop, it started doing it, asked for permission to delete temp office files, I…


6. gregisenberg (Group Score: 40.3 | Individual: 40.3)

Cluster: 1 tweets | Engagement: 3450 (Avg: 1292) | Type: Tech

you’re vibe coding when you should be vibe marketing (with claude code, openclaw etc)

  1. opus 4.6 / codex 5.3 → ship the product (core features, backend, auth, infra)

  2. claude code → design the playbook (content formats, hook templates, lead magnets, reply rules, tone of voice, weekly experiments)

  3. openclaw → run the playbook 24/7 (ads, post drafts, build free tools, repurpose content, create campaigns, reply to comments + dms, send follow-ups, queue experiments)

  4. dashboards → decide what to double down on (saves, shares, replies, clicks, signups)

most founders stop at step 1.

"wHy tHis DiDn'T woRk"

the money/opportunity is steps 2–4.

media is the most mispriced asset right now.

it isn't code, you know this

relax on the vibe coding for 1 sec

vibe marketing is your friend

it answers the real question nobody wants to ask:

“does anyone notice, remember, or care?”


7. lateinteraction (Group Score: 40.1 | Individual: 40.1)

Cluster: 1 tweets | Engagement: 365 (Avg: 28) | Type: Tech

Nope. My lab is making 3 algorithmic bets. One of them is on recursion, RLMs being step 1. Another one is on the power of late interaction retrieval.

Conventional single-vector retrieval was always a bottleneck, even back in 2019 when we started ColBERT. So if you're wondering whether that's ideal, it isn't. Same for single-step RAG.

But better retrieval is both possible and badly needed. Recursion is fundamentally an inference-time mechanism. You wouldn't want to recursively index a 10B-token corpus on each and every request.
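For readers unfamiliar with late interaction: unlike single-vector retrieval, which compresses a whole document into one embedding before comparison, ColBERT-style scoring keeps one vector per token and sums each query token's best match over the document's tokens (MaxSim). A toy sketch with hand-made 2-d "embeddings" (the vectors are illustrative, not from any real model):

```python
# Toy late-interaction (MaxSim) scoring: for each query token embedding,
# take its maximum dot-product similarity over all document token
# embeddings, then sum those maxima.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def maxsim_score(query_vecs, doc_vecs):
    # The "late interaction" step: token-level comparison happens only
    # at scoring time, never by collapsing the document to one vector.
    return sum(max(dot(q, d) for d in doc_vecs) for q in query_vecs)

query = [[1.0, 0.0], [0.0, 1.0]]   # two query-token embeddings
doc_a = [[0.9, 0.1], [0.1, 0.9]]   # covers both query tokens well
doc_b = [[0.9, 0.1], [0.8, 0.2]]   # covers only the first query token

assert maxsim_score(query, doc_a) > maxsim_score(query, doc_b)
```

A single-vector scheme averaging doc_b's tokens would hide that it never matches the second query token; MaxSim surfaces it.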


8. minchoi (Group Score: 38.6 | Individual: 27.9)

Cluster: 2 tweets | Engagement: 1115 (Avg: 267) | Type: Tech

Boston Dynamics Atlas just did a roundoff backflip.

This is a real robot. Not CGI. Not AI video.

We are so cooked. 🤖 https://t.co/XgEYLB1VQv

See 1 related tweet

  • @minchoi: RT @minchoi: Boston Dynamics Atlas just did a roundoff backflip.

This is a real robot. Not CGI. Not...


9. MLStreetTalk (Group Score: 38.4 | Individual: 38.4)

Cluster: 1 tweets | Engagement: 3428 (Avg: 378) | Type: Tech

RT @esrtweet: If you are a software engineer "experiencing some degree of mental health crisis", now hear this, because I've been coding fo…


10. AlexFinn (Group Score: 38.2 | Individual: 38.2)

Cluster: 1 tweets | Engagement: 3746 (Avg: 1098) | Type: Tech

I'm sick and tired of the people who don't understand why I spent $20,000 on this setup, and plan on spending another $100,000 by the end of the year

IT DOES NOT MATTER THAT LOCAL MODELS AREN'T AS GOOD AS OPUS 4.6

That is not the point. The point is me being able to run a swarm of local AI agents powered by local AI models unlocks a world you can't imagine

A world never discovered by humanity before

Right now, as you read this post, I have multiple local AI models reading thousands of posts on X and Reddit

Hunting for challenges to solve

Those local AI models are feeding hundreds of challenges a day to a manager model

The manager model (Henry) decides what the company (Alex Finn Global Enterprises) will build.

The company is constantly working. Constantly researching. Constantly building. Constantly shipping

If I did this with cloud models I'd be spending $20,000 a month on API calls.

With my set up, it's free. I have an army on my desk. Never resting. Never eating. Never complaining. Always conquering.

Here is your problem: it's not that you don't understand this. You don't want to understand this. You don't want to think this is possible. Your brain doesn't want to believe this is the world we now live in.

It is. And the faster you can accept this and get on board, the faster you can enter the new society.

Otherwise, you will forever be doomed to the permanent underclass.

Make your choice.


11. TDataScience (Group Score: 38.0 | Individual: 22.3)

Cluster: 2 tweets | Engagement: 6 (Avg: 5) | Type: Tech

"With the release of coding agents, working with both frontend and backend code at the same time has become simpler."

@EivindKjos expands on some best practices in his latest article. https://t.co/ur7Pc1mgs7

See 1 related tweet

  • @EivindKjos: RT @TDataScience: "With the release of coding agents, working with both frontend and backend code at...

12. Forbes (Group Score: 37.2 | Individual: 18.9)

Cluster: 2 tweets | Engagement: 12 (Avg: 127) | Type: Tech

Authors loathe AI. That’s evident in the dozens of lawsuits they’ve filed against AI companies for allegedly training their models on millions of copyrighted books without consent or compensation. But it turns out their publishing houses aren’t quite so against it. Some of the largest publishers in the country, including Penguin Random House, Macmillan, Sourcebooks and Wiley are recruiting AI engineers, according to public job listings reviewed by Forbes.

Check out the full story: https://t.co/dRm6pPjLLM 📸: PA Images via Getty Images

See 1 related tweet

  • @Forbes: Why Some Of The Largest Book Publishers Are Hiring AI Engineers

Publishing giants like Penguin Rand...


13. jukan05 (Group Score: 36.3 | Individual: 36.3)

Cluster: 1 tweets | Engagement: 561 (Avg: 305) | Type: Tech

Lately I’ve been building a thesis I call the “Apple Bull” theory.

Vibe coding will boost Apple’s revenue even further.

By lowering the barrier to entry for programming, vibe coding will lead to more apps being submitted to the App Store, which in turn will generate more App Store profit for Apple.


14. TheAhmadOsman (Group Score: 36.3 | Individual: 36.3)

Cluster: 1 tweets | Engagement: 575 (Avg: 195) | Type: Tech

MASSIVE

Qwen 3.5 PR just landed in the Hugging Face Transformers repo

dense + MoE variants, both supporting text + image & video

hybrid attention: default pattern is linear attention on most layers, full attention every 4th layer, with gated DeltaNet under the hood

gated DeltaNet: chunked gated-delta rule, long context without KV cache bloat

Qwen3_5DynamicCache: a unified cache that handles KV + recurrent states together

model variants: 9B dense with 32 layers, hidden 4096 / 16 heads / 4 KV heads

35B A3B MoE: 40 layers, 256 experts, 8 active per token, hidden 2048 / 16 heads / 2 KV heads

MoE router: top-8 routing over 256 experts

  • one shared expert always on

multimodal RoPE: temporal + height + width dimensions, proper video support, not bolted on

vision stack: 27-layer ViT with spatial + temporal merging, images + video on the same backbone

new models added: Qwen3_5 (dense) and Qwen3_5MoE (MoE)

inheritance chain: Qwen3_5 ← Qwen3VL ← Qwen2.5-VL; Qwen3_5MoE ← Qwen3VLMoE ← Qwen3VL

2026 is going to be an absurd year for local LLMs, and we're only in February
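The hybrid attention cadence described in the tweet (linear attention on most layers, full attention on every 4th) amounts to a simple layer map. A sketch; the exact offset is an assumption for illustration, so check the actual Transformers PR for the real configuration:

```python
# Sketch of a hybrid attention layer map: mark every `full_every`-th layer
# as full attention and the rest as linear attention. The 1-indexed cadence
# is an illustrative assumption, not confirmed from the Qwen 3.5 config.

def attention_pattern(num_layers: int, full_every: int = 4) -> list[str]:
    return [
        "full" if (i + 1) % full_every == 0 else "linear"
        for i in range(num_layers)
    ]

# For the 9B dense variant's 32 layers, this yields 8 full-attention layers.
pattern = attention_pattern(8)
print(pattern)  # ['linear', 'linear', 'linear', 'full', ...]
```

The payoff of such a pattern is that only the sparse full-attention layers pay the quadratic cost, while the linear/DeltaNet layers keep long-context memory cheap.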


15. GitHub_Daily (Group Score: 36.0 | Individual: 36.0)

Cluster: 1 tweets | Engagement: 847 (Avg: 133) | Type: Tech

Often you have PDF or ePub ebooks on hand and want to listen to them while commuting or doing chores, but there's no ready-made audiobook version, and the read-aloud tools on the market are either paid or sound robotic.

I just came across QuickPiperAudiobook on GitHub, an open-source tool that turns any text into a natural-sounding audiobook with a single command.

It supports PDF, ePub, Mobi, TXT, and even HTML. At its core it uses the high-quality Piper speech model, which produces very natural speech and supports multiple languages.

GitHub: https://t.co/UDkzwtSMQI

Its biggest strength is that it runs fully offline, keeping your data private, and with ffmpeg it can automatically produce chaptered MP3 files, for an experience close to a native audiobook.

Once you set up the language model for your language (Chinese is supported), it can turn your local document library into a personal audiobook collection, great for anyone who likes listening to books.


16. mark_k (Group Score: 35.3 | Individual: 35.3)

Cluster: 1 tweets | Engagement: 689 (Avg: 445) | Type: Tech

It looks like OpenAI’s first hardware product is getting a name: "Dime."

Recent leaks point to a patent filing by @OpenAI that confirms the name and suggests a reveal is coming soon. But the ambitious, "phone-like" wearable we’ve been hearing about might be on hold.

Instead of a high-end device with smartphone-level internals, reports say OpenAI will launch a simpler, audio-only set of earbuds first. The reason? Sky-high memory costs, ironically driven by the AI boom itself.

Expect the streamlined version to drop sometime in 2026, with the advanced model pushed back until component prices settle.


17. TheAhmadOsman (Group Score: 34.7 | Individual: 34.7)

Cluster: 1 tweets | Engagement: 489 (Avg: 195) | Type: Tech

you are a person who wants to understand llm inference. you read papers: "we use standard techniques". which ones? where is the code? you open vllm: 100k lines of c++ and python, a custom cuda kernel for printing. close tab.

now you have this tweet and mini-sglang: ~5k lines of python with actual production features

four processes: api server, tokenizer, scheduler, detokenizer. they talk over zeromq. simple

the scheduler is the boss: it receives requests, decides prefill or decode, batches them, and sends work to the gpu

prefill: process the prompt. compute heavy, thousands of tokens at once, flash attention does the lifting

decode: generate new tokens one at a time. memory bound, needs the kv cache

the kv cache is the secret sauce: every token remembers the past. without it you recompute everything; with it you just append

memory is finite, so enter the radix cache

two requests: "explain quantum physics" and "explain quantum physics simply". same prefix, so why compute it twice?

a radix tree stores prefixes: the first request builds the cache, the second one reuses it. ~50% faster in practice

chunked prefill: a 128k-token prompt arrives, the gpu says "nope", the scheduler says "relax", splits it into chunks, and processes them sequentially. avoids oom

tensor parallelism: one model, four gpus, each gpu holds a slice, allreduce merges the results. 70b models on consumer rigs

overlap scheduling: the gpu crunches the current batch while the cpu preps the next. two streams, no idle time (nano-flow)

cuda graphs for decode: small batches, high overhead, so record once, replay forever. 2ms → 1.5ms

the codebase is readable: core = data structures, scheduler/ = the brain, engine/ = the muscle, layers/ = building blocks, models/ = llama, qwen

want to add a model? copy the llama file (~200 lines), add the architecture and its tricks, done

want to tweak scheduling? the scheduler file, around line 172: choose your policy, done

want to grok attention? attention/fa.py, the flash-attn integration, comments explain everything

linux (mostly), cuda required, kernels jit-compile, wsl2 if you're on windows. mac users stay mad :P

run it: python -m minisgl --model "qwen/qwen3-0.6b". openai-compatible api, token streaming, it just works

or go big: 70b on 8x rtx 3090s with --tp 8

interactive shell too: --shell to chat in the terminal, /reset clears history

great code that works and teaches you while it runs
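The prefix-reuse idea behind the radix cache can be illustrated with a toy cache that only "computes" the part of a request not covered by an already-seen prefix. A set of tuple prefixes stands in for the real radix tree, and words stand in for token IDs; this is a sketch of the concept, not mini-sglang's implementation:

```python
# Toy prefix cache: process() returns how many tokens actually had to be
# computed, after reusing the longest cached prefix of the request.

class PrefixCache:
    def __init__(self):
        self.known = set()  # prefixes whose KV entries "already exist"

    def process(self, tokens: list[str]) -> int:
        # Find the longest cached prefix (a radix tree does this in one
        # walk; linear scan keeps the sketch short).
        hit = 0
        for i in range(len(tokens), 0, -1):
            if tuple(tokens[:i]) in self.known:
                hit = i
                break
        # Cache every new prefix so later requests can reuse it.
        for i in range(hit + 1, len(tokens) + 1):
            self.known.add(tuple(tokens[:i]))
        return len(tokens) - hit  # only the uncached suffix is computed

cache = PrefixCache()
a = "explain quantum physics".split()
b = "explain quantum physics simply".split()
print(cache.process(a))  # 3  (cold: all tokens computed)
print(cache.process(b))  # 1  (prefix reused, only "simply" computed)
```

This is where the tweet's "~50% faster in practice" comes from: shared prefixes across requests skip their prefill work entirely.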


18. aakashgupta (Group Score: 34.5 | Individual: 34.5)

Cluster: 1 tweets | Engagement: 357 (Avg: 349) | Type: Tech

The PM workflow is getting rebuilt from the protocol layer up and most PMs haven’t noticed.

Linear just added initiatives, milestones, and project updates to their MCP server. Figma shipped MCP. Notion shipped MCP. What’s happening is every tool in the product stack is exposing a write layer to AI agents, and that changes what a PM actually does day to day.

Today a PM spends 30-45 minutes per week writing status updates. They open their project tool, check what shipped, cross-reference the PRD, summarize progress, flag risks. In an MCP-connected workflow, your agent in Cursor or Claude already has the initiative context, the milestone targets, the completed issues. It drafts the update. You review and approve.

That’s one workflow. Now multiply it across every surface: spec writing with Notion MCP, design reviews pulling context from Figma MCP, roadmap updates flowing from Linear MCP. The PM goes from being the person who manually stitches context across 6 tabs to the person who reviews and approves agent-generated artifacts across all of them.

The PMs who understand MCP configurations, who know how to chain tool calls across project management and design and docs, will operate at 3-5x the throughput of PMs who are still copying and pasting between browser tabs. This is the same split that happened when engineers who understood CI/CD pulled away from engineers who deployed manually.

What makes this moment specific: we went from “AI can search your project tool” to “AI can write initiatives, set milestones, post updates, and manage labels.” Read to write. That’s the transition that actually changes job descriptions.

MCP is becoming the connective tissue of product work. The tools are racing to expose their full surface area to agents. The PMs who wire this together first will set the standard for what the role looks like in 18 months.


19. aakashgupta (Group Score: 33.8 | Individual: 33.8)

Cluster: 1 tweets | Engagement: 289 (Avg: 349) | Type: Tech

Every decade, the ratio of market cap to physical assets increases. Apple is worth $4T with factories it doesn't own. Nvidia is worth $4.5T and outsources all fabrication to TSMC. Alphabet is worth $3.9T and doesn't own a single cell tower. The trend line points one direction, and AI accelerates it.

Look at the "atoms" layer where value supposedly reverts. TSMC runs the most advanced semiconductor fabs on Earth, a near-monopoly on cutting-edge chip manufacturing, 60% gross margins, and is worth $1.8T. Nvidia sits on top of those fabs and is worth $4.5T. Apple sits on top and is worth $4T. The orchestration layer commands 2-3x the valuation of the manufacturing layer beneath it.

Go one layer deeper. Foxconn assembles every iPhone, employs 800,000+ people, and runs some of the largest manufacturing operations in human history. Market cap: ~$97B. That's 2.4% of Apple. Foxconn does $213B in annual revenue and is worth less than two quarters of Apple's earnings. The company that literally touches every atom in the most valuable consumer product ever made captures almost none of the value.

AI is intensifying this pattern. Anthropic is valued at $350B and projecting $20-26B in 2026 revenue with zero factories. OpenAI is targeting an $830B valuation on rented compute. These two companies combined could be worth more than TSMC within 12 months. The most asset-light businesses in the history of capitalism are being valued higher than the most sophisticated manufacturing operation ever built.

The historical mean where value accrued to atoms existed because distribution was physical. You needed railroad tracks, retail stores, oil pipelines. The internet dissolved that constraint permanently. AI compounds that dissolution. The platform with the most users generates the most data, which trains the best models, which attracts more users. That flywheel has no physical equivalent.

Every previous technology wave concentrated more value in the orchestration layer and less in the physical layer. The most valuable electricity-era companies were the ones building appliances and media on top of cheap power, not the plants generating it. The most valuable internet-era companies were the ones organizing information and attention, not the ones laying fiber. AI follows the same playbook. The models capture more value than the data centers running them, and that spread is growing every quarter.

The 50-year “blip” framing gets the direction backwards. The gap between asset-light and asset-heavy valuations has widened every single decade since 1970. AI is the strongest tailwind that gap has ever had.


20. aakashgupta (Group Score: 33.7 | Individual: 33.7)

Cluster: 1 tweets | Engagement: 146 (Avg: 349) | Type: Tech

Most people still think “AI in productivity software” means a chatbot sidebar that summarizes your bullet points.

This clip shows Claude Opus 4.6 running inside PowerPoint as a native add-in, reading slide XML, calculating shape dimensions down to the point, and rewriting object definitions across an entire deck in one operation. Anthropic also ships Cowork, which lets you hand Claude a folder of files and walk away while it builds finished presentations and workbooks autonomously. Two different products, same conclusion: AI now operates at the object model level of your production files.

The group that should be paying the closest attention is the PowerPoint-and-Excel class. Consultants, analysts, associates. The people whose first three years on the job are defined by formatting slides and debugging spreadsheet models.

McKinsey’s internal AI platform Lilli now handles over 500,000 prompts per month. 75% of their 43,000 employees use it monthly. Their “One-Click” agent turns a four-question prompt into a formatted client deck in minutes. Kate Smaje, their global tech leader, said it directly: “Do we need armies of business analysts creating PowerPoints? No, the technology could do that.” Junior analysts who used to spend 6-10 hours assembling a first-cut deck now get a draft in minutes. BCG has Deckster. Bain has Sage. Every major firm is building the same thing.

The stock market already priced in what this means. When Anthropic launched Cowork plugins last week, Thomson Reuters dropped 15.8% in a single day, its worst ever. LegalZoom fell 19.7%. The total software selloff hit $285 billion. Investors looked at an AI agent that can autonomously produce professional documents and immediately repriced the entire services layer.

The consulting pyramid was built on a simple trade: hire smart 22-year-olds, bill them at $300/hour to format slides and build models, then train them into senior advisors over a decade. AI just collapsed the bottom of that pyramid. The question for every knowledge worker whose core skill is "I'm good at PowerPoint and Excel" is whether they're building the judgment layer on top, or whether they're competing with a $30/month add-in that works at 3am without complaining.