
Tech Twitter Highlights - 2026-03-22


Tech Daily Briefing for March 22, 2026

Today's top tech conversations are led by @minchoi, whose post "It's happening... they are bui..." garnered the highest engagement. Key themes trending across the top stories include Claude, OpenAI, model releases, and product strategy. The community is actively discussing recent developments in AI, engineering practices, and startup strategies.


1. minchoi (Group Score: 169.3 | Individual: 58.0)

Cluster: 4 tweets | Engagement: 2773 (Avg: 328) | Type: Tech

It's happening... they are building ClaudeBot 💀 https://t.co/wjxClU2CPU

QT @noahzweben: You can now schedule recurring cloud-based tasks on Claude Code.

Set a repo (or repos), a schedule, and a prompt. Claude runs it via cloud infra on your schedule, so you don’t need to keep Claude Code running on your local machine. https://t.co/Vse4WfVnKC

See 3 related tweets

  • @bibryam: RT @noahzweben: You can now schedule recurring cloud-based tasks on Claude Code.

Set a repo (or rep...

  • @EHuanglu: Claude Code now can work for you 24/7 on cloud https://t.co/Q8QiNiShrD QT @noahzweben: You can no...
  • @RoundtableSpace: YOU CAN NOW SCHEDULE RECURRING CLOUD-BASED TASKS ON CLAUDE CODE

THEY’RE BUILDING CLAUDEBOT NOW

ht...


2. rohanpaul_ai (Group Score: 95.9 | Individual: 36.4)

Cluster: 3 tweets | Engagement: 215 (Avg: 83) | Type: Tech

Terence Tao's new interview.

He just summarized how AI is massively accelerating math career and math research.

"In math, you previously had to basically go through years and years of education to be a math PhD before you could contribute to the frontier of math research. But now it's quite possible at the high school level or whatever, that you could get involved in a math project and actually make a real contribution because of all these AI tools and lean and everything else."

Great podcast by @dwarkesh_sp

QT @dwarkesh_sp: The Terence Tao episode.

We begin with the absolutely ingenious and surprising way in which Kepler discovered the laws of planetary motion.

People sometimes say that AI will make especially fast progress at scientific discovery because of tight verification loops.

But the story of how we discovered the shape of our solar system shows how the verification loop for correct ideas can be decades (or even millennia) long.

During this time, what we know today as the better theory can often actually make worse predictions (Copernicus's model of circular orbits around the sun was actually less accurate than Ptolemy's geocentric model).

And the reasons it survives this epistemic hell is some mixture of judgment and heuristics that we don’t even understand well enough to actually articulate, much less codify into an RL loop.

Hope you enjoy!

0:00:00 – Kepler was a high temperature LLM
0:11:44 – How would we know if there’s a new unifying concept within heaps of AI slop?
0:26:10 – The deductive overhang
0:30:31 – Selection bias in reported AI discoveries
0:46:43 – AI makes papers richer and broader, but not deeper
0:53:00 – If AI solves a problem, can humans get understanding out of it?
0:59:20 – We need a semi-formal language for the way that scientists actually talk to each other
1:09:48 – How Terry uses his time
1:17:05 – Human-AI hybrids will dominate math for a lot longer

Look up Dwarkesh Podcast on YouTube, Apple Podcasts, or Spotify.

See 2 related tweets

  • @rohanpaul_ai: In the AI era, the bottleneck has shifted from generating ideas to filtering them.

Breakthroughs hi...

  • @rohanpaul_ai: RT @rohanpaul_ai: Terence Tao's new interview.

He just summarized how AI is massively accelerating...


3. Prince_Canuma (Group Score: 86.2 | Individual: 29.0)

Cluster: 3 tweets | Engagement: 8 (Avg: 730) | Type: Tech

Molmo Point by @allen_ai now on MLX 🚀

You can now run computer and browser use completely locally on your Mac using mlx-vlm.

> uv pip install -U mlx-vlm https://t.co/pDmECoThbZ

QT @Prince_Canuma: 🚀 mlx-vlm v0.4.1 is out!

What’s new:

🧠 New models — Mistral4, Molmo Point

🔧 Server improvements — model & adapter preloading, common sampling params, cleaner defaults

🛠️ Fixes — Qwen3-Omni integration, Qwen3.5 continuous batching

📋 Tool call index per OpenAI spec

⚡ Torch removed as a dependency

Welcome new contributors @auggie246 & @howeirdo! 🎉

Get started today:

uv pip install -U mlx-vlm

If you love it, leave us a star ⭐️

https://t.co/7BvnEuzKvj

See 2 related tweets

  • @Prince_Canuma: 🚀 mlx-vlm v0.4.1 is out!

What’s new:

🧠 New models — Mistral4, Molmo Point

🔧 Server improvements —...

  • @Prince_Canuma: This will hit later 🤣 QT @Prince_Canuma: 🚀 mlx-vlm v0.4.1 is out!

What’s new:

🧠 New models — Mi...


4. badlogicgames (Group Score: 76.7 | Individual: 32.3)

Cluster: 3 tweets | Engagement: 176 (Avg: 109) | Type: Tech

dax is trying to acqui-sponsor all them pi extension kids. what is his plan?

QT @ferologics: Ditto: huge huge thank you to @thdxr, @anomalyco, @opencode for the GitHub sponsorship! 🙏 And a special shoutout to 'grandpa' @badlogicgames for believing in me. Incredibly grateful to be building in such a thriving OSS community during these exciting times! 🙇‍♂️🫡 https://t.co/X30rBg875I

See 2 related tweets

  • @badlogicgames: .@terrorobe figured out @dax plan https://t.co/RDgwVAf8RA QT @ferologics: Ditto: huge huge thank ...
  • @badlogicgames: (joking, nice to see dax and team redirecting some funds to pi-dawans. yes. i went there. i'm a dad)...

5. FirstSquawk (Group Score: 72.0 | Individual: 41.2)

Cluster: 2 tweets | Engagement: 2239 (Avg: 104) | Type: Tech

ANTHROPIC’S CLAUDE UPDATE HELPED A STUDENT TURN $1,400 INTO $238,000 IN 11 DAYS.

See 1 related tweet

  • @RoundtableSpace: ANTHROPIC’S CLAUDE UPDATE HELPED A STUDENT TURN $1,400 INTO $238,000 IN 11 DAYS.

He built a simple ...


6. rohanpaul_ai (Group Score: 71.3 | Individual: 33.2)

Cluster: 3 tweets | Engagement: 61 (Avg: 83) | Type: Tech

📢 OpenAI will nearly double its workforce as it pivots away from consumer experiments toward a massive push into the business market.

The company wants to stop competitors from taking over the corporate space by putting thousands of new engineers and sales specialists directly into the field.

This hiring spree aims to bring the total headcount to 8,000 employees by December 2026.

Leadership recently issued a code red to pause non-essential projects and focus entirely on making ChatGPT a better tool for professional work.

Most of the new hires will focus on product development and a strategy called technical ambassadorship.

These technical ambassadors work like on-site experts who help big companies install and customize AI models for their specific business needs.

OpenAI is also planning to merge its coding model, Codex, with ChatGPT into one single app to better serve desktop users in the office.


reuters.com/business/openai-nearly-double-workforce-8000-by-end-2026-ft-reports-2026-03-21

See 2 related tweets

  • @xeophon: OpenAI wants to become as big as DocuSign in head count QT @chrmanning: Somehow the AI agent work...
  • @Cointelegraph: ⚡️ LATEST: OpenAI plans to nearly double its workforce to 8,000 employees by end of 2026 as it ramps...

7. thsottiaux (Group Score: 69.6 | Individual: 23.6)

Cluster: 4 tweets | Engagement: 3470 (Avg: 1697) | Type: Tech

Do people like this? We don't do this for codex because it exists to help you and it's important that you remain the owner and accountable for your work without AI taking credit. At the same time it does mean that you can't trace how popular codex is among repos.

QT @Yuchenj_UW: I noticed something interesting:

Claude Code auto-adds itself as a co-author on every git commit. Codex doesn’t.

That’s why you see Claude everywhere on GitHub, but not Codex.

I wonder why OpenAI is not doing that. Feels like an obvious branding strategy OpenAI is skipping.

See 3 related tweets

  • @RyanPGreenblatt: Commit metadata noting it was by an AI is helpful for analysis of AI coding capabilities and diffusi...
  • @dejavucoder: removing claude from co-author after making it do all the work https://t.co/JVTqrSYHOx\n\nQT @Yuchen...
  • @badlogicgames: RT @thsottiaux: Do people like this? We don't do this for codex because it exists to help you and it...

8. ycombinator (Group Score: 68.2 | Individual: 36.9)

Cluster: 2 tweets | Engagement: 333 (Avg: 207) | Type: Tech

.@PatientdeskAI is building an AI-native operating system for dental clinics that autonomously handles inbound calls, bookings, real-time insurance verification, and claims submission - replacing five disconnected tools with one system that never lets revenue slip through the cracks.

https://t.co/yR28FWZzJB

See 1 related tweet

  • @pratik_satija: Absolutely cracked team and amazing founders QT @ycombinator: .@PatientdeskAI is building an AI-n...

9. aakashgupta (Group Score: 65.0 | Individual: 32.8)

Cluster: 2 tweets | Engagement: 104 (Avg: 596) | Type: Tech

The PM job used to be "figure out what's possible, then plan around it for 6 months."

That assumption worked when the technology underneath your product moved slowly. Cat Wu runs product for Claude Code at Anthropic. She tested every new model by asking it to add a table tool to Excalidraw. Sonnet 3.5 failed. Opus 4 occasionally succeeded. Opus 4.6 does it reliably enough to demo live in front of thousands of developers. That progression happened in 16 months.

METR measures this with time horizons: the length of a task, in human-expert time, that an AI can complete successfully half the time. Sonnet 3.5 (new) in October 2024: 21 minutes. Opus 4.6 in February 2026: roughly 14.5 hours. A 41x jump.
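
The 41x figure follows directly from the two data points quoted above; a quick sanity check:

```python
# Sanity-check the claimed METR time-horizon jump (figures from the post)
sonnet_35_minutes = 21            # Oct 2024: Sonnet 3.5 (new)
opus_46_minutes = 14.5 * 60       # Feb 2026: Opus 4.6, ~14.5 hours

jump = opus_46_minutes / sonnet_35_minutes
print(round(jump, 1))  # 41.4 -- the "41x jump"
```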

If your roadmap is longer than the gap between model releases, you're planning around constraints that may not exist by the time you ship.

Her team's response is worth studying. They replaced long-term roadmaps with "side quests," short self-directed experiments anyone on the team can run. Claude Code on Desktop, the AskUserQuestion tool, and todo lists all started this way. Someone prototyped it, internal users liked it, they shipped it.

The most telling detail: when they first launched todo lists, the model couldn't reliably check off completed items. They added system prompt hacks to nudge it. Next model generation, the behavior came for free. They deleted the hacks. Their system prompt shrank 20% with Opus 4.6 alone.

This is the part most PMs miss. Every workaround you build to compensate for a model limitation becomes dead weight the moment the next model drops. The simpler your implementation, the faster you absorb the next capability jump.

The Venn diagram in the image tells the structural story. Before AI: Product hands to Design hands to Eng, sequential. With AI: all three overlap. Designers ship code. Engineers make product calls. PMs build prototypes. The handoff chain collapses because the cost of building a working demo dropped to an afternoon.

Any PM still writing 30-page PRDs before touching a prototype is optimizing for a world where building is expensive. That world ended about 12 months ago.

QT @_catwu: The PM playbook was built on an assumption that the technology underneath your product is roughly stable

With the current pace of model progress, this is no longer true. Here's how we've evolved the PM role:

See 1 related tweet

  • @aakashgupta: The hardest PM skill in 2026 is saying no to something that works.

Boris Cherny's team at Anthropic...


10. QuixiAI (Group Score: 64.4 | Individual: 25.5)

Cluster: 3 tweets | Engagement: 42 (Avg: 367) | Type: Tech

RT @Yuchenj_UW: Cursor’s Composer 2 is likely built on Kimi K2.5. The model URL + tokenizer are strong signals.

I love this direction: companies mid-train and post-train on top of OSS LLMs.

Prediction: open-source model labs will monetize by taking a cut when others build on top of their models and scale to millions of real users. They will enforce this via licensing.

That’s the flywheel. That’s how open-source AI thrives.

See 2 related tweets

  • @antirez: Take note about the right way to react. QT @Kimi_Moonshot: Congrats to the @cursor_ai team on the...
  • @brianzhan1: RT @rronak_: I think people are understating Cursor’s technical achievements for Composer 2.

Small...


11. chddaniel (Group Score: 64.3 | Individual: 33.4)

Cluster: 2 tweets | Engagement: 33 (Avg: 17) | Type: Tech

Big news, @claudeai just got a major upgrade today which I'm happy to introduce in Shipper.

From today on, Claude Code Opus 4.6 inside Shipper can create documents straight from anything it builds for you.

Anthropic keeps launching new models, so we've used them to bring document generation to Shipper.

Here's how it works:

✅ Ask Claude to "build me an online sneaker store"
✅ Once it's built, ask it to "create a document for my store's product catalog"
✅ Claude generates it and you preview/download it instantly

All the details about your store, website or business are captured in one document.

Try it out @ https://t.co/r6zEmvcppC!

See 1 related tweet

  • @chhddavid: 🚨BIG NEWS: @claudeai just got a huge upgrade today and I'm very happy to introduce it in Shipper.

A...


12. chddaniel (Group Score: 56.8 | Individual: 29.1)

Cluster: 2 tweets | Engagement: 23 (Avg: 17) | Type: Tech

IT'S SO OVER for motion designers...

This AI agent made this video in 25 mins, only did 2 changes. https://t.co/xskZSs0dpu

QT @Remotion: Remotion now has Agent Skills - make videos just with Claude Code!

$ npx skills add remotion-dev/skills

This animation was created just by prompting 👇 https://t.co/hadnkHlG6E

See 1 related tweet

  • @chhddavid: oh my... Motion designers and video editors are cooked...

This AI agent made this video in 25 mins,...


13. chddaniel (Group Score: 56.5 | Individual: 29.7)

Cluster: 2 tweets | Engagement: 43 (Avg: 17) | Type: Tech

🚨Big news, @claudeai just got a major upgrade today and I'm happy to be introducing it in Shipper.

From today on, Claude Code Opus 4.6 inside Shipper can generate videos and embed them directly into your website live on Shipper.

We just launched this in Shipper as a package that lets Claude:

→ Generate animated videos from a single prompt
→ Add them anywhere on your page
→ Build, animate and launch your site all at once

Claude Opus 4.6 can do all of the above in one prompt for ~$0.15/app... Ready to publish from the first prompt & built in minutes, not months.

Head over to Shipper and ask Claude to "build a one-page animated website for an SEO agency" then "create an animated video explaining what the agency does"!

To celebrate, we're giving away free credits randomly to people who repost and comment "SHIPPER" :)

See 1 related tweet

  • @chhddavid: Big news, @claudeai just got a huge upgrade today and I'm excited to introduce it in Shipper.

As of...


14. rohanpaul_ai (Group Score: 55.9 | Individual: 26.2)

Cluster: 3 tweets | Engagement: 61 (Avg: 83) | Type: Tech

Google has secured 1GW of flexible energy deals to prove that data centers can actually lower electricity bills.

This flexible demand response system lets them pause or reschedule heavy AI workloads whenever the local power grid gets too crowded.

By shifting these AI-heavy tasks to different times, Google helps local power companies balance the supply and demand of electricity without needing to build extra power plants.

New agreements with partners like Minnesota Power and DTE Energy show that this approach is becoming a large part of the national energy strategy.

Since these massive computing centers can act as a buffer, the entire electricity system becomes more reliable for every person living in those service areas.

QT @sundarpichai: Google is now the first cloud provider to integrate 1 GW of flexible demand into long-term utility contracts. Our ability to shift or reduce our energy demand when it’s needed can help utility companies balance supply/demand and plan for future capacity needs.

This is a big milestone for responsible data center growth and helps keep costs lower for local communities.

https://t.co/yagskz6Wq7

See 2 related tweets

  • @demishassabis: RT @sundarpichai: Google is now the first cloud provider to integrate 1 GW of flexible demand into l...
  • @rohanpaul_ai: RT @rohanpaul_ai: Google has secured 1GW of flexible energy deals to prove that data centers can act...

15. Reuters (Group Score: 55.8 | Individual: 25.1)

Cluster: 3 tweets | Engagement: 736 (Avg: 243) | Type: Tech

Exclusive: Pentagon to adopt Palantir AI as core US military system, memo says https://t.co/6jDHLitinQ

See 2 related tweets

  • @Cointelegraph: 🇺🇸 HUGE: The Pentagon is set to adopt Palantir's Maven AI system as an official program of record, p...
  • @FirstSquawk: Pentagon to adopt Palantir's Maven AI as core U.S. military system, says Deputy Secretary of Defense...

16. aakashgupta (Group Score: 55.4 | Individual: 28.8)

Cluster: 2 tweets | Engagement: 39 (Avg: 596) | Type: Tech

Steinberger described the difference between skills and tools in OpenClaw perfectly: tools are organs, skills are textbooks.

That distinction explains why most AI agent frameworks fail.

A tool answers "can the agent do it?" It's a capability. Read a file. Send a Slack message. Query a database. Binary. Either the connection exists or it doesn't.

A skill answers "does the agent know how to do it?" It's a set of instructions. How to write a standup summary. How to route bugs by customer tier. How to structure a competitive analysis.

Most frameworks give agents tools and assume competence follows. It doesn't. Giving an agent Slack access without instructions on what a useful standup summary looks like is giving a new hire a laptop on day one with no onboarding.

OpenClaw stores skills as markdown files in a workspace folder. soul.md for personality. agents.md for operational instructions. heartbeat.md for scheduled cron jobs. You can open them in any text editor, paste in instructions generated by another LLM, or ask the bot itself to modify them.

The architecture is almost absurdly simple. And that simplicity is why a project built by one person in two months outpaced frameworks with full engineering teams.
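
A minimal sketch of that "skills are just text files" idea. The file names (soul.md, agents.md, heartbeat.md) come from the post; the contents and the temp-directory setup are hypothetical, for illustration only:

```python
# Sketch: an OpenClaw-style skill layer is nothing but markdown files
# in a workspace folder -- "loading" a skill is just reading text.
from pathlib import Path
import tempfile

workspace = Path(tempfile.mkdtemp())
(workspace / "soul.md").write_text("# Personality\nBe concise and friendly.\n")
(workspace / "agents.md").write_text("# Operational instructions\nRoute P0 bugs to on-call.\n")
(workspace / "heartbeat.md").write_text("# Scheduled jobs\nEvery weekday 09:00: post standup summary.\n")

# No plugin API, no registry: anyone (or another LLM) can edit these files.
skills = {p.stem: p.read_text() for p in workspace.glob("*.md")}
print(sorted(skills))  # ['agents', 'heartbeat', 'soul']
```

The point is the low barrier to contribution: editing a skill requires a text editor, not an SDK.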

When the skill layer is just text files, anyone can contribute. 60,000 forks in 4 months.

QT @aakashgupta: You need to have started using OpenClaw yesterday.

Here's the web's easiest setup guide + 5 killer use cases:

38:06 - 1. Live knowledge bot
47:47 - 2. Automated standups
54:46 - 3. Push-based comp intel
1:13:26 - 4. VOC reporting
1:24:30 - 5. Auto bug routing https://t.co/mer0FN1k3a

See 1 related tweet

  • @startupideaspod: OpenClaw skills are powerful.

But the marketplace is still the wild west.

Here's what you need to ...


17. dr_cintas (Group Score: 55.4 | Individual: 31.5)

Cluster: 2 tweets | Engagement: 750 (Avg: 217) | Type: Tech

RT @dr_cintas: 🚨 China has released an AI employee that runs 100% locally.

It can do research, code, build websites, create slide decks, and generate videos... all by itself. And it comes with its own computer.

100% Open Source. https://t.co/KuhKCteBYA

See 1 related tweet

  • @RoundtableSpace: CHINA RELEASED AN AI EMPLOYEE THAT RUNS 100% LOCALLY

It can: > research > code > build web...


18. alexocheema (Group Score: 53.9 | Individual: 29.0)

Cluster: 3 tweets | Engagement: 26 (Avg: 394) | Type: Tech

on a panel today in sf. will be around afterwards if you want to chat open source AI / local AI

QT @MiniMax_AI: Friday night drop ☄️💥 Speaker lineup locked for our Founders' Voices panel this Saturday. Fresh off the launch of MiniMax M2.7, our first self-evolving model. We're going deep on: RL · post-training · inference · agent workflows

Registration still open → https://t.co/CNPYwZUPXd — Joining us with: @alexocheema (@exolabs) @RobRizk1 (@blackboxai) @tydsh (ex-@Meta FAIR) @grmcameron (@ArtificialAnlys) @yaboilyrical (@NousResearch) @steveshou (@duolingo)

See 2 related tweets

  • @blackboxai: RT @MiniMax_AI: Friday night drop ☄️💥 Speaker lineup locked for our Founders' Voices panel this Satu...
  • @exolabs: RT @alexocheema: on a panel today in sf. will be around afterwards if you want to chat open source A...

19. natolambert (Group Score: 53.3 | Individual: 31.0)

Cluster: 2 tweets | Engagement: 209 (Avg: 200) | Type: Tech

The answer (~44:40) to Noam's question on @NoPriorsPod

@karpathy: Well, I was there for a while, right? And I did re-enter. So to some extent I agree. And I think that there are many ways to slice this question. It's a very loaded question a little bit. Um, I will say that... I feel very good about what people can contribute and their impact outside of the frontier labs, obviously. Not in the industry, but also in like more, like ecosystem-level roles. So your role, for example, is more ecosystem level. My role currently is also kind of more on an ecosystem level, and I feel very good about the impact that people can have in those kinds of roles. I think conversely there's... there are definite problems in my mind for, um, for basically aligning yourself way too much with the frontier labs too. So fundamentally I mean you're, you have a huge financial incentive to, uh, with these frontier labs. And by your own admission, the uh, the AIs are going to like really change humanity and society in very dramatic ways, and here you are basically like building that technology and benefiting from it like and being like very allied to it through financial means. Like this was a conundrum that was in, um... at the heart of, you know, how OpenAI was started in the beginning, like this was the conundrum that we were trying to solve. Um, and so you know, that—so it's kind of...

@saranormous: It's still not resolved.

Andrej Karpathy: The conundrum is still not like fully resolved. So that's number one. You're not a completely free agent and you can't actually like be part of that conversation in a fully autonomous, um, free way. Like if you're inside one of the frontier labs. Like there are certain things that you can't say, uh, and conversely there are certain things that the organization wants you to say. And you know, they're not going to twist your arm, but you feel the pressure of like what you should be saying, you know? Cause like, obviously. Otherwise it's like really awkward conversations, strange side-eyes, like what are you doing, you know? So you can't like really be an independent agent, and I feel like a bit more aligned with humanity in a certain sense outside of a frontier lab, because uh, I don't, I'm not subject to those pressures almost, right? And I can say whatever I want. So those are like some sources of misalignment I think, to some extent. I will say that like, in one way I do agree a lot with that sentiment that, um, I do feel like the labs, for better or worse, they're opaque and a lot of work is there, and they're kind of like at the edge of capability and what's possible, and they're working on what's coming down the line. And I think if you're outside of that frontier lab, uh, your, your judgment fundamentally will start to drift, because you're not part of the, you know, what's coming down the line. And so I feel like my judgment will inevitably start to drift as well. And uh, I won't actually have an understanding of how these systems actually work under the hood. That's an opaque system. Uh, I won't have a good understanding of how it's going to develop and etc. And so I do think that in that sense I agree and it's something I'm nervous about. I think it's worth basically being in touch with what's actually happening and actually being in a frontier lab. 
And if some of the frontier labs would have me come for, you know, some amount of time and do really good work for them and then maybe come in and out—

Sarah Guo: Guys, he's looking for a job, this is super exciting!

Andrej Karpathy: (Laughs) Then I think that's maybe a good setup. Because I kind of feel like it kind of, um... you know, um, maybe that's like one way uh to, to actually be connected to what's actually happening but also not feel like you're necessarily fully controlled by those entities. So I think honestly in my mind like, uh, Noam can probably do extremely good work at OpenAI, but also I think his most, um, impactful work could very well be outside of OpenAI.

Sarah Guo: Noam, that's a call to be an independent researcher, if you got auto-research.

Andrej Karpathy: Yeah, there's many things to do on the outside and it's a... and I think ultimately I think the ideal solution maybe is like yeah, going back and forth, uh, or um, yeah, and I think fundamentally you can have really amazing impact in both places. So very complicated, I don't know, it's a very loaded question a little bit, but um, I mean I joined the frontier lab and I'm outside, and then maybe in the future I'll want to join again, and I think um, uh, that's kind of like how I look at it.

QT @polynoamial: @saranormous @karpathy @NoPriorsPod Why is he not at a frontier AI lab at the most pivotal time in human history since at least the industrial revolution?

See 1 related tweet

  • @yacineMTB: RT @natolambert: The answer (~44:40) to Noam's question on @NoPriorsPod

@karpathy: Well, I was ...


20. HarryStebbings (Group Score: 52.5 | Individual: 30.6)

Cluster: 2 tweets | Engagement: 687 (Avg: 189) | Type: Tech

Spoke to a CRO of a hot Series B startup yesterday:

“We don’t have the knowledge internally to implement AI and agents into our process.”

Toast. You are toast. That is unacceptable.

Everyone can learn. There is zero excuse for the above.

See 1 related tweet

  • @clairevo: I hear this all. the. time.

Working w a few AI-pilled operators to stand up some executive level se...