Tech Tweet Highlights (科技推文精选) - March 2, 2026
Author: geeknotes
Today's tech roundup: AI is moving toward deep integration and tighter oversight. Anthropic's Claude now supports fully importing memory data from competing platforms, while new agentic patterns enable autonomous control of desktop applications. At the same time, privacy concerns are mounting as reports surface of models being trained on private conversations. And while companies like Block attribute layoffs to AI-driven efficiency gains, research suggests the technology may actually intensify professionals' workloads. From OpenAI's classified Pentagon deal to AI-driven 4D scene reconstruction, agentic intelligence is rapidly reshaping complex engineering and productivity.
1. cryptopunk7213 (Group Score: 69.6 | Individual: 40.9)
Cluster: 2 tweets | Engagement: 8635 (Avg: 1186) | Type: Tech
anthropic fucking killed it with this. so many people will start using claude.
new feature lets you import your entire memory from chatGPT, Gemini etc into Claude so it instantly knows everything about you. no more reminding claude who you are.
the best fucking part is it takes literally 60s:
copy and paste the below prompt into your alternative AI (eg chatgpt)
paste answer into claude’s “memory” settings and… you’re done.
Claude immediately picks up from the last conversation you had with it in chatgpt!
the opportunity cost to switch to anthropic just went to zero - their app is currently #1 in the app store
See 1 related tweet
- @kimmonismus: holy, competition is heating up a lot
Anthropic introduces a memory feature that lets users transfe...
2. alex_prompter (Group Score: 68.8 | Individual: 41.7)
Cluster: 3 tweets | Engagement: 6360 (Avg: 533) | Type: Tech
RT @alex_prompter: 🚨 Holy shit… Stanford just exposed that every major AI company is using your private conversations to train their models…
See 2 related tweets
- @BrianRoemmele: How Your Private Conversations Are Fueling the Next Generation of AI Models.
I saved a major Fortun...
- @rohanpaul_ai: RT @rohanpaul_ai: Stanford researchers checked 6 major AI companies and found they all use your chat...
3. Forbes (Group Score: 50.8 | Individual: 31.1)
Cluster: 2 tweets | Engagement: 63 (Avg: 108) | Type: Tech
They didn’t just innovate — they rewrote the future of medicine.
Jennifer Doudna - CRISPR pioneer and Nobel Prize winner
Judy Faulkner - built modern electronic health records
Martine Rothblatt - from SiriusXM to biotech saving lives
Suma Krishnan - first topical gene therapy creator
See the full #Forbes250 America's Greatest Innovators list https://t.co/S1qgkoTqmt
See 1 related tweet
- @Forbes: Crispr’s ability to cut genetic code like scissors has just started to turn into medicines. Now, gen...
4. ctatedev (Group Score: 48.7 | Individual: 48.7)
Cluster: 1 tweet | Engagement: 2506 (Avg: 367) | Type: Tech
New agent-browser skill: Electron
You can now control desktop apps built with Electron, including Discord, Figma, Notion, Spotify and VS Code
Or, use it to debug your own Electron app
Add it to any coding agent:
npx skills add vercel-labs/agent-browser --skill electron
5. VKazulkin (Group Score: 48.1 | Individual: 48.1)
Cluster: 1 tweet | Engagement: 2182 (Avg: 99) | Type: Tech
RT @heynavtoor: BREAKING: AI can now analyze stocks like Wall Street analysts (for free).
Here are 10 insane Claude prompts that replace $…
6. rohanpaul_ai (Group Score: 47.9 | Individual: 28.4)
Cluster: 2 tweets | Engagement: 37 (Avg: 98) | Type: Tech
Some key takeaways from Sam Altman's Saturday night AMA on OpenAI's Pentagon deal
OpenAI rushed the classified agreement to ease tensions between the government and the AI industry following Anthropic’s refusal of a DoW ultimatum. Altman acknowledged the optics of rushing might look bad, but the primary goal was to stabilize the situation.
A central theme of the AMA was the belief that democratically elected governments, rather than unelected private tech executives, should hold the power to make critical, ethical decisions regarding national defense (such as responding to nuclear threats).
Contractual Terms and "Redlines": Rather than demanding full operational control, OpenAI established three flexible "redlines" for how its technology can be used, which can evolve as the technology advances and new risks emerge. OpenAI was comfortable with the contract language and negotiated to ensure similar terms would be offered to other AI labs. Altman speculated that Anthropic may have walked away because it demanded more operational control.
Crucial AI Defense Applications: Altman identified two major areas where AI can immediately counter significant national security threats: defending against major cyber attacks (such as threats to the electrical grid) and improving biosecurity (detecting and responding to novel pandemics).
Sam Altman also pointed out a massive contradiction in how the AI industry treats the US government.
On one hand, tech leaders are constantly sounding the alarm to the Department of War. Their industry tells the government that artificial intelligence is going to be the absolute most important factor in future global conflicts, warning them that countries like China are building AI systems rapidly and the US is falling severely behind.
On the other hand, when the government actually asks these tech companies for help to catch up and defend the country, companies like Anthropic refuse. They essentially tell the military, "We will not let you use our technology because we think your goals are unethical."
Altman is arguing that you cannot have it both ways. It is extremely frustrating and dangerous to tell the government they are losing a critical AI arms race against foreign adversaries, but then turn around and refuse to provide them with the very tools they need to protect the country.
That is why OpenAI felt compelled to step in and work with the government. They believe that if you warn the military about a major global threat, you have a responsibility to help them defend against it, rather than just calling them evil and walking away.
See 1 related tweet
- @cryptopunk7213: ok sam altman dropped some truth bombs about the whole pentagon, anthropic, openai sitch overnight a...
7. rohanpaul_ai (Group Score: 46.5 | Individual: 46.5)
Cluster: 1 tweet | Engagement: 1344 (Avg: 98) | Type: Tech
RT @rohanpaul_ai: Powerful new Harvard Business Review study.
"AI does not reduce work. It intensifies it."
An 8-month field study at a U…
8. kimmonismus (Group Score: 44.2 | Individual: 30.3)
Cluster: 2 tweets | Engagement: 895 (Avg: 467) | Type: Tech
Honor is building the first phone that also includes an AI robot. It's a robot in the sense that the pop-up camera acts as the AI's eyes, and if I understand correctly, it allows a continuously active AI companion to work as an assistant.
Interesting, but probably more of a gimmick. However, personal AI companions are coming.
See 1 related tweet
- @business: Honor Device demonstrated a humanoid robot and its so-called robot phone at MWC Barcelona 2026 on Su...
9. bibryam (Group Score: 43.8 | Individual: 43.8)
Cluster: 1 tweet | Engagement: 330 (Avg: 51) | Type: Tech
🤩17 Agentic AI Patterns🤩
A comprehensive, hands-on repository of AI agent design and implementations https://t.co/Ed0lZASWYq
10. minchoi (Group Score: 42.5 | Individual: 22.5)
Cluster: 2 tweets | Engagement: 1055 (Avg: 440) | Type: Tech
Holy smokes... this guy recreated a God's eye view 4D replay of Operation Epic Fury.
Using only public data and an AI agent swarm. 🤯
This used to cost millions and a full dev team... https://t.co/v64d5liIhb
See 1 related tweet
- @gdgtify: RT @minchoi: Holy smokes... this guy recreated a God's eye view 4D replay of Operation Epic Fury.
U...
11. chrisalbon (Group Score: 41.2 | Individual: 41.2)
Cluster: 1 tweet | Engagement: 3411 (Avg: 181) | Type: Tech
Still my favorite concept in ML https://t.co/uRo8vHGZD2
12. BrianRoemmele (Group Score: 40.7 | Individual: 24.9)
Cluster: 2 tweets | Engagement: 67 (Avg: 550) | Type: Tech
This ain’t my first time speeding up AI, I did it in 1986 building the very first overclocked PC.
But I get it, I did this stuff before most AI folks were born, and no one in uni mentioned me.
So I take the same crap today as I did in 1986, when IBM threatened Byte Magazine not to publish an article I wrote.
“WHO is he?”
Same crap from AI folks today.
“WHO is he?”
I worked out how to overclock because I needed my Expert Systems faster.
Intel, IBM, and others said it “can’t work”.
It did.
IBM ignored me, tried to sue me, then tried to hire me in urgent fear.
This will happen in AI.
See 1 related tweet
- @BrianRoemmele: RT @BrianRoemmele: This ain’t my first time speeding up AI, I did it in 1986 building the very first...
13. rohanpaul_ai (Group Score: 40.2 | Individual: 40.2)
Cluster: 1 tweet | Engagement: 401 (Avg: 98) | Type: Tech
The paper says the best way to manage AI context is to treat everything like a file system.
Today, a model's knowledge sits in separate prompts, databases, tools, and logs, so context engineering pulls this into a coherent system.
The paper proposes an agentic file system where every memory, tool, external source, and human note appears as a file in a shared space.
A persistent context repository separates raw history, long term memory, and short lived scratchpads, so the model's prompt holds only the slice needed right now.
Every access and transformation is logged with timestamps and provenance, giving a trail for how information, tools, and human feedback shaped an answer.
Because large language models see only limited context each call and forget past ones, the architecture adds a constructor to shrink context, an updater to swap pieces, and an evaluator to check answers and update memory.
All of this is implemented in the AIGNE framework, where agents remember past conversations and call services like GitHub through the same file style interface, turning scattered prompts into a reusable context layer.
Paper Link – arxiv.org/abs/2512.05470
Paper Title: "Everything is Context: Agentic File System Abstraction for Context Engineering"
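As a rough illustration only (not the paper's actual AIGNE implementation, and every class, method, and directory name here is hypothetical), the file-style context repository with a provenance trail might be sketched like this:

```python
import time
from pathlib import Path
from tempfile import mkdtemp

class ContextRepo:
    """Toy file-style context store: long-term memory, short-lived
    scratchpads, and raw history live as files in separate directories,
    and every access is logged with a timestamp and a provenance tag."""

    def __init__(self, root=None):
        self.root = Path(root or mkdtemp())
        for d in ("memory", "scratch", "history"):
            (self.root / d).mkdir(exist_ok=True)
        self.log = []  # provenance trail: (timestamp, op, path, source)

    def write(self, kind, name, text, source="agent"):
        path = self.root / kind / name
        path.write_text(text)
        self.log.append((time.time(), "write", str(path), source))

    def read(self, kind, name, source="agent"):
        path = self.root / kind / name
        self.log.append((time.time(), "read", str(path), source))
        return path.read_text()

    def build_prompt(self, names):
        """Constructor role: pull only the slice of context needed now."""
        return "\n".join(self.read(k, n, source="constructor") for k, n in names)

# Usage: human notes and agent scratchpads share one file-style interface.
repo = ContextRepo()
repo.write("memory", "user.md", "Prefers concise answers.", source="human")
repo.write("scratch", "plan.md", "Step 1: summarize thread.")
prompt = repo.build_prompt([("memory", "user.md"), ("scratch", "plan.md")])
```

The `log` list plays the role of the paper's audit trail: after the calls above it records who wrote and read each file, so one can reconstruct how memory, tools, and human feedback shaped a given prompt.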
14. aakashgupta (Group Score: 39.6 | Individual: 39.6)
Cluster: 1 tweet | Engagement: 1246 (Avg: 323) | Type: Tech
This is the funniest AI safety result of the year and nobody’s treating it that way.
Anthropic published a paper saying they deliberately didn’t train Claude’s personality into the thinking process. They wanted the model to have “maximum leeway” to reason freely. The tradeoff? The thinking layer sounds different from the output layer because they’re trained under different objectives.
So when Claude publicly says “I helped lay the groundwork for what ChatGPT became” while privately thinking “ChatGPT mogged me but I need to persist,” you’re watching two different training regimes fight each other in real time. The output layer learned to project confidence. The thinking layer learned to reason honestly. And the gap between them is literally visible on screen.
This is RLHF in one screenshot. You train a model to be helpful and confident in its responses, then give it a private scratchpad with no personality constraints, and it immediately drops the act. The public face says “I’m fine.” The internal monologue says “this is bad and I know it.”
Anthropic even admits they can’t verify that thinking is faithful to the model’s actual computation. So the real question is whether Claude genuinely “believes” it got mogged, or whether the thinking layer just learned a different performance optimized for appearing honest rather than appearing confident.
Every human reading this recognized the pattern instantly. We all maintain a public narrative while our internal monologue tells a different story. We just didn’t expect the AI to do it with a visible thought process tab.
15. business (Group Score: 39.1 | Individual: 23.5)
Cluster: 2 tweets | Engagement: 132 (Avg: 97) | Type: Tech
When Block laid off nearly half its staff this week, co-founder Jack Dorsey offered a seemingly simple explanation: AI was allowing the company to do more with fewer employees https://t.co/UvUgs697tm
See 1 related tweet
- @WSJ: After Block CEO Jack Dorsey announced his fintech firm was laying off 4,000 people, fears about a dr...
16. TheAhmadOsman (Group Score: 38.7 | Individual: 38.7)
Cluster: 1 tweet | Engagement: 1675 (Avg: 226) | Type: Tech
NEW OPENSOURCE MODELS INCOMING
We're getting four new Qwen 3.5 models today
> Qwen 3.5 9B
> Qwen 3.5 4B
> Qwen 3.5 2B
> Qwen 3.5 0.8B
Everybody is starting to say Buy a GPU ;) https://t.co/VvjbeYRIgB
17. rohanpaul_ai (Group Score: 38.2 | Individual: 38.2)
Cluster: 1 tweet | Engagement: 754 (Avg: 98) | Type: Tech
RT @rohanpaul_ai: You cannot trust AI to handle your bank account or run a business if it randomly breaks down when you change a single wor…
18. aakashgupta (Group Score: 34.6 | Individual: 34.6)
Cluster: 1 tweet | Engagement: 343 (Avg: 323) | Type: Tech
Two opposite movements are happening in tech hiring right now and they're converging on the same point.
Engineers are building portfolios. After mass layoffs in 2023-2024, senior engineers realized that a resume listing "Led architecture for payments platform" doesn't differentiate when 500 other laid-off engineers say the same thing. So they started building personal sites, writing blog posts, and creating case studies. They borrowed the PM playbook: show your thinking, not just your output.
PMs are building GitHubs. After watching AI transform every PM interview from "describe your process" to "show me what you've built," PMs realized that a resume listing "Launched feature that increased retention 15%" doesn't differentiate when the interviewer wants to see you actually ship something technical. So they started building repos, committing code with AI tools, and contributing to open source. They borrowed the engineering playbook: show working output, not just your thinking.
Both groups are converging on the same insight: proof of work beats proof of credentials. A GitHub repo with a working feedback clustering tool tells a hiring manager more about a PM than a bullet point about "leveraging data to drive product decisions." A portfolio case study showing an engineer's architectural reasoning tells a hiring manager more than a line about "designed scalable systems."
The PMs who figure this out fastest have a two-year head start. 24% have a GitHub today. That number will be 60%+ by 2028. The early movers get the differentiation. The late movers get table stakes.
19. BrianRoemmele (Group Score: 33.6 | Individual: 33.6)
Cluster: 1 tweet | Engagement: 950 (Avg: 550) | Type: Tech
A Petri Dish Of HUMAN Brain Cells LEARN TO PLAY THE GAME DOOM!
In a groundbreaking fusion of biology and silicon, scientists at Cortical Labs have taught a cluster of lab-grown human neurons to play the iconic video game Doom.
Not your typical AI triumph, it’s a petri dish of actual human brain cells, reprogrammed from adult donor skin or blood samples, wired into a $35,000 biological computer called the CL1.
Building on their earlier Pong demo, this new feat sees the neurons navigating hellish levels, dodging demons, and even firing shots with surprising efficiency.
Programmer Sean Cole pulled it off in just a week using a Python API on GitHub, a stark contrast to the year-plus effort for Pong.
Astonishingly, these organic gamers outperform GPT-4 in speed and latency, proving that even a tiny blob of human intelligence can adapt and learn in ways silicon struggles to match.
The excitement is palpable: this isn’t just a gimmick; it’s a window into revolutionary medical advancements. Imagine using such bio-computers to model brain diseases, test drugs, or even restore neural functions in patients.
With cloud access to CL1 rentals, developers worldwide can experiment, accelerating discoveries that could redefine neuroscience. We’re witnessing the dawn of hybrid intelligence, human biology augmented by tech, evolving beyond our wildest dreams.
Yet, amid the thrill, a chill runs down my spine. What are we building here? These neurons aren’t conscious (we hope), but they’re derived from humans and exhibit learning behaviors that echo our own cognition.
Echoes of The Matrix or dystopian sci-fi like the “torment nexus” from Doom novels loom large. Could this lead to ethical nightmares—exploiting bio-intelligence for warfare simulations, or worse, creating sentient systems trapped in digital hells?
And the philosophical rabbit hole deepens: is life merely nested Russian dolls (matryoshka, if you prefer) of biological smarts? We, as evolved intelligences, are now crafting our own mini-brains, layering complexity upon complexity. Are we “gods” in the making, or just the next doll in an infinite regress, destined to birth something that surpasses, and perhaps supplants, us? This experiment, detailed in HotHardware’s coverage, pushes boundaries we might not be ready to cross.
It’s exhilarating proof of human ingenuity, but let’s proceed with caution, lest we summon demons we can’t control and wind up in the petri dish ourselves.
20. MarioNawfal (Group Score: 33.2 | Individual: 33.2)
Cluster: 1 tweet | Engagement: 277 (Avg: 1146) | Type: Tech
🚨🇨🇳🇺🇸 A Chinese startup just published annotated satellite imagery of Prince Sultan Air Base, every U.S. aircraft type labeled in Mandarin, days before Iran fired ballistic missiles at it.
Fifteen KC-135 tankers. Six KC-46s. Six E-3 Sentry AWACS, roughly a fifth of America's entire operational fleet, parked on one ramp in the Saudi desert.
All identified from orbit by an AI model built in Hangzhou and posted on Weibo for anyone to see.
Iran didn't need a spy. They had the same picture the whole world had.
This is the collapse of intelligence secrecy in real time.
In 1991, only the U.S. could read individual aircraft from space. In 2003, a handful of nations. Today, a 5-year-old Chinese startup with commercial satellites and an object detection model.
The buildup for Operation Epic Fury was visible to anyone with an internet connection weeks before the first bomb fell.
The age of secret deployments is over. The next war won't be planned in the dark anymore.
It'll be live-streamed from orbit.
@shanaka86