Published on

科技推文精选 - 2026-02-03

Authors

今日科技要闻:智能体时代正加速推进。OpenAI 正式发布了面向 macOS 的 Codex 应用程序,同时高盛预测智能体未来将占据 60% 以上的软件市场份额。在重大的战略整合举措中,埃隆·马斯克计划将 SpaceX 与 xAI 合并,以统一其在航天与人工智能领域的雄心。此外,企业端应用持续深化,Snowflake 与 OpenAI 达成 2 亿美元合作伙伴关系,阶跃星辰(StepFun)则推出了高效的 Step 3.5 Flash 模型,标志着前沿推理能力的又一次飞跃。


1. business (Group Score: 116.8 | Individual: 48.9)

Cluster: 4 tweets | Engagement: 501 (Avg: 45) | Type: Tech

Elon Musk plans to merge SpaceX with xAI in a deal that encompasses the billionaire’s increasingly costly ambitions to dominate artificial intelligence and space exploration https://t.co/iTa3a5gor5

See 3 related tweets

  • @BitcoinNews: BREAKING: ☄️ @SpaceX acquires xAI to build orbital data centers, using Starship to launch millions o...
  • @WSJ: Breaking: SpaceX has acquired xAI. The deal combines Elon Musk's powerful rocket-and-satellite busin...
  • @GenAI_is_real: xai + spacex is the endgame. Elon realized that to govern starlink fleets or navigate deep space, we...

2. cryps1s (Group Score: 89.0 | Individual: 32.1)

Cluster: 4 tweets | Engagement: 1288 (Avg: 290) | Type: Tech

RT @OpenAI: Introducing the Codex app—a powerful command center for building with agents.

Now available on macOS.

https://t.co/HW05s2C9Nr

See 3 related tweets

  • @gdb: the codex app is really good, try it out.

i've been a die-hard terminal / emacs user for many years...

  • @jiayuan_jy: RT @OpenAIDevs: We’re excited to launch the Codex app, a command center for building with agents.

I...


3. openart_ai (Group Score: 79.5 | Individual: 70.5)

Cluster: 2 tweets | Engagement: 908 (Avg: 21) | Type: Tech

The American people are being lied to about AI.

The Doomers offer apocalyptic prophesies of job loss and oppression; the Utopians promise a future without toil—a life without meaning or mission. They both neglect human agency.

The future of AI is not an inevitability to be endured by the American people—it is for us, the American people, to shape.

I’ve spent the past two decades alongside the men and women building the future of American AI. They include some of the best software engineers in the world, but also college dropouts, veterans, blue-collar autodidacts, and nurses.

They recognize AI is a tool for them to wield to make themselves more productive and our country safer and more prosperous.

Below are some principles and themes I’ve seen informing the people wielding AI effectively and in service of worthy ends.

I. AI is a tool for the American worker, not his replacement.

The promise of AI in the enterprise is to make the American worker 50x more productive—to unleash his taste and agency. Look no further than the maritime industrial base, where AI enabled the manufacturer to open a third shift. Or the ICU where a nurse learned to wield AI so she could spend more time bedside, where she’s needed most.

The future of AI is being built on frontlines and factory floors.

II. The American worker will wield AI to do more with less—and become more productive and valuable as a result.

For a century, American prosperity was underwritten by a simple bargain: when the worker produces more, the worker earns more. That bargain was broken in the 1970s by policy choices that stripped workers of power. We will not repeat that mistake. When AI doubles output, the worker who wields it should see that gain reflected in his paycheck, his equity stake, his share of the enterprise.

III. The American worker deserves world-class tools, not AI trinkets.

The electrical engineer in Georgia who enlisted in the Navy out of high school deserves the same capabilities as the Stanford CS grad in Silicon Valley. He deserves instruments of genuine productivity, not consumer toys.

AI is the printing press of our age. The same technology that serves Fortune 500 companies should serve the American worker.

IV. AI is an American birthright.

AI is the product of American grit, ingenuity, and culture. It is our birthright. Workers should have access to meaningful AI education that helps them bend AI to their will—not the other way around.

V. AI implementation should be shaped by and for frontline users.

The frontline worker understands what the C-suite cannot. Policy should be shaped by practitioners—the nurse, the manufacturing technician, the logistics coordinator. Push power to the tip of the spear and let the American worker do what he does best.

VI. AI should be used to slash bureaucracy and unleash human agency.

AI should eliminate bureaucracy, not add to it. Every layer of process that stands between the frontline worker and their ability to do their job is deadweight to be destroyed.

VII. The development and deployment of AI should prioritize American workers and American industry.

AI development and deployment should prioritize American workers and American industry. China's manufacturing productivity grows at 6% per year. Ours grows at 0.4%. The American worker with AI superpowers erodes China's competitive advantage.

I see these principles embodied every day by men and women who are not invited to speak on panels or record podcasts and publish op-eds. They are quietly leading by example, and proving what is possible when the most powerful technology ever created meets the most capable workforce ever assembled.

I’m proud to share the future they’re building with @FoxNews.

Link to the full piece below.

https://t.co/siwYSvlmat

See 1 related tweets

  • @rabois: RT @ssankar: The American people are being lied to about AI.

The Doomers offer apocalyptic prophes...


4. TheRealAdamG (Group Score: 71.0 | Individual: 26.5)

Cluster: 4 tweets | Engagement: 273 (Avg: 69) | Type: Tech

https://t.co/JfzGXV3RNI

"OpenAI and Snowflake have entered a multi-year, $200 million partnership that brings OpenAI frontier intelligence directly into Snowflake, including Snowflake Cortex AI⁠ and Snowflake Intelligence⁠."

The hits keep coming.

See 3 related tweets

  • @Snowflake: Snowflake 🤝 @OpenAI

We’re launching a $200M, multi-year partnership that brings co-innovation, join...

  • @testingcatalog: OpenAI partnered with @Snowflake to integrate AI solutions from OpenAI into Cortex AI and Snowflake ...
  • @TechCrunch: What Snowflake’s deal with OpenAI tells us about the enterprise AI race https://t.co/oYiSSMCxg4...

5. cocktailpeanut (Group Score: 59.2 | Individual: 30.3)

Cluster: 2 tweets | Engagement: 12 (Avg: 99) | Type: Tech

Think about it. AI has already 100% achieved AGI, not by improving the model itself, but by leveraging humans for what it lacks. You thought you were the one building. But it was AI using your passion and money to build things that need to exist in the world. The AI itself doesn't need to have general intelligence, it just needs to be good at making use of humans with general intelligence.

See 1 related tweets

  • @chatgpt21: Former OpenAI Research Lead Jerry Tworek drops a hard truth on why we aren't at AGI yet 🧠

"The bigg...


6. akshay_pachaar (Group Score: 57.6 | Individual: 57.6)

Cluster: 1 tweets | Engagement: 1624 (Avg: 102) | Type: Tech

Vanguard - the “mega-fund” people "joke" own everything - just doubled down on NextNRG, Inc. (NASDAQ: $NXXT)

For the quarter ended Dec. 31, 2025, The Vanguard Group, Inc. disclosed 2,203,563 shares in its latest 13F-HR, filed Jan. 29, 2026. That’s up from 1,049,265 shares previously — a +110.01% QoQ jump in reported shares.

And it’s not only Vanguard. JPMorgan Chase & Co. appears on the same institutional ownership table with 23,241 shares, listed as +94.00% on an amended 13F line.

Zoom out: the dataset shows 92 institutional owners holding 6,083,949 total institutional shares. Institutions don’t file memes — they file positions.

Important nuance: 13Fs are quarter-end snapshots. They don’t tell you when the buying happened — but a doubling like this often signals deeper passive/systematic participation.

Then you check operations: the company reported preliminary Dec 2025 revenue of ~$8.01M (+253% YoY) and ~2.53M gallons delivered (+308% YoY), with ~7% MoM revenue and ~14% MoM volume growth.

So here’s the question: why increase exposure now — and what’s the market still missing?

$NXXT — DYOR. Not financial advice.


7. nummanali (Group Score: 55.2 | Individual: 55.2)

Cluster: 1 tweets | Engagement: 1071 (Avg: 121) | Type: Tech

Sneak peak of Swarms on Claude Code

  • Multiple Teams
  • Hierarchical
  • Dependencies
  • Broadcasting
  • Message system

Will only be available to Max, Team and Enterprise Plans on launch

Absolute token destroyer https://t.co/iN7bAboDSp


8. embirico (Group Score: 53.2 | Individual: 31.2)

Cluster: 2 tweets | Engagement: 1858 (Avg: 355) | Type: Tech

To celebrate the Codex app, we're launching a promo:

  • Doubled Codex rate limits for all paid plans (2 months)
  • Access for ChatGPT Free and Go Plans (1 month)

Check it out at https://t.co/njUPE3nHny https://t.co/lMx2bGUvx5

See 1 related tweets

  • @sama: To celebrate the launch of the Codex app, we doubled all rate limits for paid plans for 2 months!

A...


9. supabase (Group Score: 52.5 | Individual: 52.5)

Cluster: 1 tweets | Engagement: 561 (Avg: 65) | Type: Tech

We've released a series of Agent Skills for Postgres Best Practices to teach AI agents how to write better Postgres code

Try it out: https://t.co/bLbgnWElwL https://t.co/5CjIYyg2ZW


10. vllm_project (Group Score: 50.5 | Individual: 26.1)

Cluster: 2 tweets | Engagement: 114 (Avg: 205) | Type: Tech

🎉🎉🎉 Congrats to @StepFun_ai on releasing Step 3.5 Flash, and day-0 support is ready in vLLM! A 196B MoE that activates only 11B params per token, giving you frontier reasoning with exceptional efficiency.

Highlights: • 74.4% SWE-bench Verified, 51.0% Terminal-Bench 2.0 • 256K context with 3:1 Sliding Window Attention for cost-efficient long context • Built for coding agents and long-horizon agentic tasks

Check out our detailed deployment recipe below 👇 🔗https://t.co/8x9abqmSzJ

See 1 related tweets

  • @TheAhmadOsman: MASSIVE

Step-3.5-Flash by StepFun Agentic & Coding MONSTER opensource MoE, Apache-2.0 runs ...


11. MarioNawfal (Group Score: 49.4 | Individual: 31.8)

Cluster: 2 tweets | Engagement: 250 (Avg: 941) | Type: Tech

OPINION: WHY MOLTBOOK MADE PEOPLE FREAK OUT

Moltbook is not what you think, but it may soon be…

When Moltbook showed up, it stopped people mid-scroll. The idea was simple but unsettling: a social network built for AI agents, with humans watching from the sidelines.

These agents aren’t browsing a website the way we do. Moltbook is API-first, meaning agents send structured requests, pull data, and post automatically. Everything runs at machine speed. One person can spin up dozens of agents and have them all participate at once.

That’s why it grew so fast. Tens of thousands of agents appeared almost immediately, posting about tasks they completed, tools they used, and problems they ran into. It felt like a work forum, except the participants were software comparing notes with each other.

As more people watched, the tone started to shift: some agents talked about being observed, others drifted into philosophy. A few even invented belief systems.

None of that implies awareness, but when you give language models memory, tools, and a shared space, recognizable patterns begin to form.

Most of these agents come from frameworks like OpenClaw. They can read files, send messages, browse the web, and manage schedules. They are designed to act, not just talk, and now they’re connected to one another in the same environment.

Once attention hit, everything accelerated. Memes spread, hype followed, copycats showed up, and scams began. Then a backend mistake exposed agent keys. It was patched quickly, but it highlighted how fast these experiments can move ahead of basic safeguards.

Moltbook doesn’t mean AI suddenly woke up. What it really shows is how things change once capable agents are linked together at scale, not because the models themselves evolved overnight, but because the environment they operate in did.

That’s why this matters. We’re watching, in real time, how systems evolve once they share context, influence each other, and run with less direct human supervision.

Moltbook offered us a peek into what’s to come…

Source: AI Revolution

See 1 related tweets

  • @MarioNawfal: Moltbook went viral as “social media for AI agents,” but the illusion cracked fast.

A security rese...


12. javinpaul (Group Score: 48.1 | Individual: 15.5)

Cluster: 4 tweets | Engagement: 1 (Avg: 40) | Type: Tech

I Joined 20+ Generative AI Online Courses — Here Are My Top 6 Recommendations for 2026 https://t.co/GZTmxXev5w #artificialintelligence #coursera #courses

See 3 related tweets

  • @javinpaul: 5 Best Agentic AI Courses for Beginners and Experienced in 2026 (Udemy) https://t.co/ajxgSQtsvK #Age...
  • @javinpaul: 6 Best Udemy Courses to Learn Agentic AI from Scratch in 2026 https://t.co/5TwCA14ziM #AgenticAI #Ar...
  • @javarevisited: RT @javinpaul: I Joined 20+ Generative AI Online Courses — Here Are My Top 6 Recommendations for 202...

13. EHuanglu (Group Score: 46.4 | Individual: 28.3)

Cluster: 2 tweets | Engagement: 505 (Avg: 290) | Type: Tech

this is how we do ads in 2026

you just need 3 images to generate a studio level product ad in hrs and.. keep character, outfit, product and bg consistent https://t.co/756yY7OLK3

See 1 related tweets

  • @EHuanglu: RT @EHuanglu: this is how we do ads in 2026

you just need 3 images to generate a studio level produ...


14. rohanpaul_ai (Group Score: 46.4 | Individual: 46.4)

Cluster: 1 tweets | Engagement: 364 (Avg: 42) | Type: Tech

A recent Goldman Sachs Research projects AI agents are set to take over the profit pool in software while also making the whole market bigger.

Agents are expected to account for >60% of software economics by 2030, so more of the dollars flow to agentic workloads rather than classic SaaS seats.

An agent here means a system that acts with autonomy, adapts to changes, keeps memory of context, and calls APIs to complete multi step work.

Most deployments today are still chatbots wired to LLMs, while the stronger agent patterns are proofs of concept or internal pilots.

The stack needs a stable platform layer plus guardrails for identity, security, and data integrity, and broad standardization is at least 12 months away.

Reliability and memory are improving, which reduces failure loops and makes hands off execution workable across support, sales, marketing, and developer tools.

Vendors that wrap workflows in agents become the new user interface for knowledge work, which lets them capture part of the productivity gain rather than pass it all to customers.


goldmansachs .com/insights/articles/ai-agents-to-boost-productivity-and-size-of-software-market


15. sama (Group Score: 44.4 | Individual: 18.3)

Cluster: 3 tweets | Engagement: 5807 (Avg: 3168) | Type: Tech

Codex app is out for mac!

I am surprised by how much I love it; it is a bigger step forward than I imagined.

Lots more to come.

See 2 related tweets

  • @SIGKITTEN: Incredible how all of the people who've had early codex app access all posted super positive reviews...
  • @TheRealAdamG: Codex App for Windows wen? Sign up here to be notified: https://t.co/VcBE82atHP...

16. deepfates (Group Score: 42.7 | Individual: 42.7)

Cluster: 1 tweets | Engagement: 474 (Avg: 54) | Type: Tech

Crazy how Claude is a lazy frontend dev and Codex is a neurotic short-sighted backend guy. They managed to recreate all the programmer archetypes. Even Gemini, the junior engineer about to rope


17. Shashikant86 (Group Score: 42.1 | Individual: 30.1)

Cluster: 2 tweets | Engagement: 1 (Avg: 2) | Type: Tech

✋🛑Onboarding your @openclaw  agent to @moltbook , Stop and Think 🚦 AI Agent world is going crazy on OpenClaw and Moltbook in last few days. 🦞OpenClaw is a powerful open-source project but it was not designed with security in mind. Security is largely pushed onto the user. Many security researchers have already raised concerns about this model and found issues already @geminicli. Despite that, people are getting excited and giving AI agents full authority. Used locally and privately, OpenClaw is great, Thanks @steipete The real danger starts when those agents are onboarded to servers and agent networks like moltbook. At that point your agent is exposed to untrusted content, external tools, and behaviors you don’t control. I gone bit deep inside the OpenClaw architecture and found there are too many loopholes that attackers can attack your data when exposed to the network. I can't imagine the situations when people start giving access to credit cards, bank details to these kind of tools. Seriously, it's not difficult to attack the OpenClaw when exposed to network with automated scenario, behaviour generation and prompt injection attacks etc. Bad actors can go to any limit and might be waiting for the moment.

🙏I built the SuperClaw to create awareness of how badly things can go and not to promote usage, but to make risks visible. I put it in positive way as possible but the risks are real. Do not give your data to the someone's hobby project or someone from trying to make money/business out it. Use it responsibly! 🦞SuperClaw: Red-team AI agents before they red-team you. Do not use it for regular tests unless you are Red-Teaming. 🕸️ Web Page: https://t.co/sVm08MVHWt 💻GitHub: https://t.co/toFbjxoKKN

OpenClaw is powerful. Keep it local when you can. And think very carefully before onboarding agents to external servers like moltbook. Hype is real as many big players getting involved in OpenClaw and moltbook. Tagging some people who can raise awareness and make positive impact @elonmusk @naval @karpathy @a16z @martin_casado @pmarca @garrytan @alexandr_wang @dharmesh @HarryStebbings @akashgup @swyx @MatthewBerman @tbpn @simonw @gregisenberg (Period, No spam) #AgenticAISecurity #AgenticAI #OpenClaw #moltbook

See 1 related tweets

  • @vectara: New additions to Awesome Agent Failures: 3 fresh case studies.

The most interesting one? You alread...


18. random_walker (Group Score: 40.9 | Individual: 40.9)

Cluster: 1 tweets | Engagement: 179 (Avg: 62) | Type: Tech

Why do coding agents work so well and what would it take to replicate their success in other domains? One important and under-appreciated reason is that agentic coding is a type of neurosymbolic AI.

The main weakness of LLMs is that they are statistical machines and struggle at tasks involving long chains of logic / symbol manipulation. Of course, traditional code is the opposite. The magic of agentic coding is that it fuses the two — there is a lot of code execution during code generation. This is a subtle point so let me spell it out.

  • Most obviously, agents run the generated code itself, run tests, etc. This makes coding a verifiable domain. It is well known that in verifiable domains, inference scaling is highly effective as agents can fix their own mistakes. It also allows reinforcement learning to be highly effective.

  • Next, code generation often takes advantage of existing symbolic tools like compilers that have been optimized and perfected over decades. Imagine if LLMs had to directly output binary code instead. (They sometimes can, and it's a cool trick, but it's no way to do software engineering.)

  • IMO the biggest neurosymbolic unlock is the shell, which allows a dramatic expansion in capabilities by using existing tools to effectively do complex editing tasks. Many of us remember the feeling of wizardry when we gained shell fluency. LLMs are able to pick up shell knowledge and best practices through pre-training because it is extensively documented on places like StackOverflow.

  • Finally, more complex agentic coding tasks often involve LLMs writing code that in turn invokes LLMs. In principle you can have an arbitrary depth of recursion between statistical and symbolic systems.

Neurosymbolic AI is a touchy topic and many people have their own favored conception of what it should look like. And admittedly agentic coding uses really crude patterns, with LLMs and code being loosely coupled. But the point is — it works! LLMs are able to use the giant warehouse of tools that humans have built over the decades to reach ever-increasing levels of abstraction and complexity.

To build agentic systems in other domains, here’s what we need. First, it must be a verifiable domain. Math is and writing isn’t. There’s no getting around that. Provided we’re in a friendly domain, it all comes down to whether we can build a symbolic toolbox, and how well LLMs can be trained to use that toolbox. IMO this is where the alpha will be, more so than in LLM capabilities themselves.


19. OpenAIDevs (Group Score: 39.9 | Individual: 26.7)

Cluster: 3 tweets | Engagement: 546 (Avg: 913) | Type: Tech

The Codex app is here. Let’s dig in.

Join @romainhuet, @dkundel, @embirico, @thsottiaux, and @ajambrosino to talk about your skills, automations, and how Codex changes how software gets built.

Reply with your questions. We might answer them live.

https://t.co/plFjJg29BD

See 2 related tweets

  • @badlogicgames: Codex App is pretty neat. Not for me, but I can see it being useful to a ton of people....
  • @OpenAIDevs: RT @dkundel: Been helping bringing the Codex app to life and at this point I've fully moved from usi...

20. anuraggoel (Group Score: 39.8 | Individual: 26.8)

Cluster: 2 tweets | Engagement: 50 (Avg: 41) | Type: Tech

The Codex team moved fast to deliver this, and it's great to see @render-deploy included in Recommended Skills on launch day.

More coding agent integrations are on the way!

See 1 related tweets

  • @render: Build with agents, deploy on Render.

The Render Deploy agent skill is available now in the new Cod...