Tech Twitter Highlights (科技推特精选) - February 8, 2026
Author: geeknotes
February 8, 2026 Tech Daily Briefing
Today's top tech conversations are led by @aakashgupta, whose post 'Macron just told you France is...' garnered the highest engagement. Recurring themes across the top stories include France's AI investment claims, billion-dollar data center spending, and the economics of machine intelligence. The community is actively discussing recent developments in AI, engineering practices, and startup strategies.
1. aakashgupta (Group Score: 88.7 | Individual: 34.8)
Cluster: 4 tweets | Engagement: 228 (Avg: 326) | Type: Tech
Macron just told you France is winning the AI race. Let’s check the math.
That $69 billion in “foreign data center investment” is almost entirely two deals: UAE sovereign wealth fund MGX committing €30-50 billion for a single data center campus, and Canadian firm Brookfield putting €20 billion into its subsidiary Data4. Two foreign entities buying French real estate to house servers. That’s the bulk of the number.
Meanwhile, actual AI company funding tells a different story. In 2025, US firms attracted 8 billion. France’s homegrown champion Mistral is competitive, but one startup does not make a superpower.
The distinction matters. Data centers are warehouses. You can build them anywhere with cheap electricity and friendly permitting. What you can’t import is the ecosystem that produces OpenAI, Anthropic, Google DeepMind, and Meta AI. France ranked seventh globally in AI research publications. The US and China produce the models. France is offering to store them.
This is the landlord strategy. France is positioning itself as Europe’s best place to park GPUs, which is a legitimate economic play. Construction jobs, energy contracts, tax revenue. But calling it “AI leadership” is like calling a parking garage a car company.
The €54 billion France 2030 plan includes €2.1 billion actually earmarked for AI research and ecosystem development. That’s 1.9% of the headline number Macron is posting about. The rest goes to health, climate, and broader industrial policy that happens to have “AI” somewhere in the deck.
Macron’s framing passes some people’s filters because they conflate infrastructure spending with innovation spending. But in reality, they’re different games with different winners.
See 3 related tweets
- @aakashgupta: Macron just said France invested “more than €30 million” to advance health, climate, AI, and fundame...
- @ylecun: RT @ylecun: @PalmerLuckey Dude, government investment in AI in France is actually quite large.
For ...
- @kimmonismus: France is making a fool of itself: €30 million in data center expansion is being hailed as a major a...
2. alexocheema (Group Score: 67.2 | Individual: 52.8)
Cluster: 2 tweets | Engagement: 2021 (Avg: 218) | Type: Tech
I’ll be honest, I have 32 mac minis. 3 more clusters like this one.
Why? @jason thought my argument for local AI would be cost, but it’s much more than that.
AI is becoming an extension of your brain, an exocortex. @openclaw is a huge leap towards that.
It knows everything you know, it can do pretty much everything you can do. It’s personalised to you.
That raises the question of where this exocortex should run. Who should own it? Who can switch it off?
I certainly won’t be trusting @sama or @DarioAmodei with my exocortex. I want to own it. I want to know if the model weights change. I don’t want my brain to be rate limited by a profit seeking corporation.
“not your weights, not your brain” - @karpathy
See 1 related tweet
- @exolabs: RT @alexocheema: I’ll be honest, I have 32 mac minis. 3 more clusters like this one.
Why? @jason th...
3. rohanpaul_ai (Group Score: 66.1 | Individual: 43.0)
Cluster: 2 tweets | Engagement: 1130 (Avg: 70) | Type: Tech
RT @rohanpaul_ai: Goldman Sachs is rolling out Anthropic’s AI model to automate accounting and compliance roles completely.
Anthropic engi…
See 1 related tweet
- @rickasaurus: RT @CNBC: Goldman Sachs has been working with the artificial intelligence startup Anthropic to creat...
4. rickasaurus (Group Score: 65.2 | Individual: 20.0)
Cluster: 5 tweets | Engagement: 644 (Avg: 1142) | Type: Tech
RT @claudeai: Our teams have been building with a 2.5x-faster version of Claude Opus 4.6.
We’re now making it available as an early experi…
See 4 related tweets
- @kimmonismus: 2.5x faster version of Claude Opus 4.6 incoming. Let's freaking go.
Competition between OAI and Anth...
- @testingcatalog: Anthropic released Claude Opus 4.6 Fast Mode as a research preview for Claude Code and APIs.
It co...
- @JamesMontemagno: RT @pierceboggan: Fast mode for Claude Opus 4.6 is now rolling out to @code developers in research p...
- @pierceboggan: Fast mode for Claude Opus 4.6 is now available to Pro+ developers in @code as well!...
5. vikramlingam9 (Group Score: 59.5 | Individual: 20.0)
Cluster: 3 tweets | Engagement: 0 (Avg: 0) | Type: Tech
Specialist fintechs are set to dominate by 2026, zeroing in on niches like small business loans or regional payments to fix what big banks ignore. Your coffee run could soon trigger instant credit based on spending patterns, making finance feel personal and seamless. The industry's shifting as mature markets slow and emerging ones boom, with regulators greenlighting digital banks and AI handling the heavy lifting. This favors focused players over stretched generalists, helping businesses and people manage cash smarter amid economic squeezes.
Vertical tools like embedded banking in supply chain software offer real-time financing, tying loans to verified orders for manufacturers facing cash crunches. Banks miss these details, but fintechs nail them, even linking green practices to better rates for sustainable wins.
AI agents will revolutionize credit by autonomously scanning emails, socials, and records for fairer lending decisions in minutes, not days. Startups like EnFi are scaling these for banks, boosting small biz access while keeping compliance tight and human touch for the big stuff.
Read more: https://t.co/YzwBhwj4RS
See 2 related tweets
- @vikramlingam9: Specialist fintechs are revolutionizing finance by using AI to target industries like beauty salons ...
- @vikramlingam9: Specialist fintechs are taking over by laser-focusing on niche problems like supply chain loans for ...
6. gdgtify (Group Score: 59.2 | Individual: 37.1)
Cluster: 2 tweets | Engagement: 500 (Avg: 57) | Type: Tech
RT @openclaw: 🦞 OpenClaw v2026.2.6 is here!
🧠 Opus 4.6 + GPT-5.3-Codex support ⚡ xAI Grok + Baidu Qianfan providers 📊 Token usage dashboar…
See 1 related tweet
- @openclaw: 🦞 OpenClaw v2026.2.6 is here!
🧠 Opus 4.6 + GPT-5.3-Codex support ⚡ xAI Grok + Baidu Qianfan provide...
7. mattturck (Group Score: 50.7 | Individual: 50.7)
Cluster: 1 tweet | Engagement: 2664 (Avg: 273) | Type: Tech
RT @EmmanuelMacron: “This clown wants to make France an AI leader with €30M.”
€30 million → to attract and support a…
8. alex_prompter (Group Score: 41.5 | Individual: 41.5)
Cluster: 1 tweet | Engagement: 1156 (Avg: 113) | Type: Tech
RT @alex_prompter: After 3 years of using Claude, I can say that it is the technology that has revolutionized my life the most, along with…
9. aakashgupta (Group Score: 41.4 | Individual: 41.4)
Cluster: 1 tweet | Engagement: 968 (Avg: 326) | Type: Tech
Naval is right, and the math proves it in a way most people aren’t processing.
GPT-4 launched at 1. That’s a 98% price collapse in two years. Demand didn’t fall. It exploded. OpenAI went from 12B+ in ARR while slashing prices every quarter.
This is Jevons Paradox at civilizational scale. When coal got cheaper in the 1800s, England didn’t use less coal. They burned 10x more. Intelligence is following the same curve, except the adoption rate is compressing a century of energy economics into 36 months.
The part nobody’s thinking through: every previous commodity with “unlimited demand” eventually restructured the labor market around it. Electricity didn’t create unlimited demand for electricians. It eliminated most of the jobs that electricity replaced and created entirely new ones that didn’t exist before.
The 280x cost reduction Stanford measured between 2022 and 2024 means a task that cost 3.57. At that price, companies don’t just automate what humans were doing. They start doing things that were never economically viable at human-labor pricing. Analysis that would have required a 50 in an afternoon.
Unlimited demand for intelligence at near-zero marginal cost means intelligence stops being the scarce input. Taste, judgment, and the ability to ask the right question become the bottleneck. The returns flow to people who can direct intelligence, not people who provide it.
That’s the real trade: the value of raw intelligence is cratering while the value of knowing what to do with intelligence has never been higher. And that gap is only getting wider.
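The compounding behind that 280x figure is easy to check. A back-of-envelope sketch (assuming the reduction compounds evenly over the two measured years; the dollar example is illustrative, not from the thread):

```python
# Back-of-envelope: a 280x cost reduction over 2 years implies
# roughly a 16.7x drop per year if the decline compounds evenly.
total_reduction = 280
years = 2
per_year = total_reduction ** (1 / years)
print(f"Implied annual cost reduction: {per_year:.1f}x")

# At that pace, a task costing $100 at the start of the window
# costs about 36 cents at the end of it.
start_cost = 100.0
end_cost = start_cost / total_reduction
print(f"$100 task after 2 years: ${end_cost:.2f}")
```

A ~17x annual decline is the "compressing a century of energy economics into 36 months" point made above, stated as a rate.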
10. KirkDBorne (Group Score: 40.4 | Individual: 25.1)
Cluster: 2 tweets | Engagement: 11 (Avg: 58) | Type: Tech
#Python #MachineLearning By Example: https://t.co/3mO7oBt4gc v/ @PacktDataML + GitHub: https://t.co/u0eoHafy02…
518 pages! What you will learn:
🟣Machine learning best practices throughout data preparation and model development
🟣Build and improve image classifiers using convolutional neural networks (CNNs) and transfer learning
🟣Develop and fine-tune neural networks using TensorFlow and PyTorch
🟣Analyze sequence data and make predictions using recurrent neural networks (RNNs), transformers, and CLIP
🟣Build classifiers using support vector machines (SVMs) and boost performance with PCA
🟣Avoid overfitting using regularization, feature selection, and more
See 1 related tweet
- @KirkDBorne: Hands-On Introduction to #MachineLearning: https://t.co/Kf6LzIfM98 ———— #NeuralNetworks #DataScienc...
11. rohanpaul_ai (Group Score: 40.1 | Individual: 31.6)
Cluster: 2 tweets | Engagement: 22 (Avg: 70) | Type: Tech
The wildest AI infra buildout is happening.
Amazon has spent more on capex in the last 3 years than in the prior 26 years.
The company’s cloud unit added almost 4 gigawatts of computing capacity in 2025, and AWS expects to double that power by the end of 2027.
And Amazon’s CEO Andy Jassy said on the analyst call that he is very ‘confident’ in the $200 billion spending plan:
“This isn’t some sort of quixotic, top-line grab. We have confidence that we, that these investments will yield strong returns on invested capital. We’ve done that with our core AWS business. I think that will very much be true here as well.”
~ Amazon’s CEO Andy Jassy
Jassy said the AI market has become more like a “barbell,” with the AI labs on one end and enterprises on the other, looking to the technology as a “productivity and cost avoidance” tool. The middle consists of enterprises that are in various stages of building AI applications, he said.
Chart from Bloomberg
bloomberg.com/news/articles/2026-02-06/nvidia-nvda-shares-surge-on-big-tech-s-650-billion-ai-spending-plan
See 1 related tweet
- @rohanpaul_ai: RT @rohanpaul_ai: Big tech is gearing up to spend huge on AI in 2026.
Amazon leading at $200B, -...
12. pierceboggan (Group Score: 38.7 | Individual: 22.4)
Cluster: 2 tweets | Engagement: 106 (Avg: 498) | Type: Tech
Pro tip: Click "Move Terminal into Editor Area" to take advantage of @code's grid layout... so you can quadbox GitHub Copilot CLI :) https://t.co/McxQj7uBIG
See 1 related tweet
- @JamesMontemagno: RT @pierceboggan: Pro tip: Click "Move Terminal into Editor Area" to take advantage of @code's grid ...
13. KirkDBorne (Group Score: 38.4 | Individual: 13.4)
Cluster: 3 tweets | Engagement: 20 (Avg: 58) | Type: Tech
Numerical Linear Algebra: Twenty-Fifth Anniversary Edition
Get it at https://t.co/ZkYzGL07aB —————— #Mathematics #LinearAlgebra #ML #MachineLearning #DataScience #DataScientist #ComputationalScience https://t.co/Md3MR2Ho0D
See 2 related tweets
- @KirkDBorne: Linear Algebra and #Optimization for #MachineLearning [516-page textbook]: https://t.co/62bWLNwzd4 —...
- @KirkDBorne: Practical Linear Algebra for #DataScience — From Core Concepts to Applications Using #Python — https...
14. gdgtify (Group Score: 37.8 | Individual: 18.9)
Cluster: 2 tweets | Engagement: 31 (Avg: 57) | Type: Tech
Never get tired of prompts about scientists.
2x2 grid, do this for 4 famous scientists --> Input Variable: [scientist name]
Anchor: A 3D diorama of the actual scientist (semantically inferred from the input discovery/concept) intensely sketching equations/diagrams at a lab desk; the notebook page matches the discovery’s canonical notation and key visual diagrams; four floating grayscale “figure panels” show the most iconic explanatory visuals (graph, schematic, apparatus, result) as clean annotated frames; behind them a deluxe “Discovery Edition” display box with the concept title and scientist credit, featuring miniature apparatus and symbolic elements in dynamic poses; floating props: beakers, pipettes, chalk fragments, index cards, clean studio sweep background::3.8
Morphology: premium stylized figure, crisp lab-gear geometry, clean diagram linework, axis ticks, vector arrows, notation glyphs consistent with the field (physics/chem/bio)::3.2
Material Physics: glossy figure texture, matte paper figure panels with fiber, semi-matte box with spot UV + foil title, translucent window insert, glass-like highlights on labware (stylized), crisp registration::3.2
Illumination: studio product lighting, soft key + fill, rim separation, controlled reflections, readable equations and labels::1.6
Render Stack: collectible figure photography, shallow DOF but readable diagrams, isometric three-quarter angle, clean cyclorama, sharp focus on face + writing hand + page::1.2
Negative: photoreal skin, messy stains, illegible equations, gibberish, watermark, logos, extra limbs, deformed fingers, blurry graphs, dull colors, harsh contrast:: -1.2
See 1 related tweet
- @gdgtify: I have done tons of Nikola Tesla prompts here. This may be the cutest one yet.
Prompt: Input: Nik...
15. DominikTornow (Group Score: 37.3 | Individual: 21.6)
Cluster: 2 tweets | Engagement: 2 (Avg: 11) | Type: Tech
A minimal in-memory version of an application is not simply an oracle, but a guide for your coding agent
Claude iteratively probes your mini app and derives a specification of what to build
See 1 related tweet
- @DominikTornow: RT @DominikTornow: A minimal in-memory version of an application is not simply an oracle, but a guid...
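The "mini app as guide" idea above can be sketched concretely. Below is a hypothetical in-memory stand-in (a toy task tracker, not anything from the thread; all names are illustrative) of the kind a coding agent could probe to derive a specification:

```python
# Minimal in-memory stand-in for a hypothetical task-tracker app.
# A coding agent can call these functions, observe the behavior, and
# derive a specification for the real implementation from the results.

class MiniTaskTracker:
    def __init__(self):
        self._tasks = {}
        self._next_id = 1

    def add(self, title: str) -> int:
        """Create a task and return its id (ids start at 1)."""
        task_id = self._next_id
        self._next_id += 1
        self._tasks[task_id] = {"title": title, "done": False}
        return task_id

    def complete(self, task_id: int) -> None:
        """Mark a task done; unknown ids raise KeyError."""
        self._tasks[task_id]["done"] = True

    def pending(self) -> list[str]:
        """Titles of tasks not yet completed, in insertion order."""
        return [t["title"] for t in self._tasks.values() if not t["done"]]

# Probing the mini app reveals the contract the real app must honor:
tracker = MiniTaskTracker()
first = tracker.add("write spec")
tracker.add("ship feature")
tracker.complete(first)
print(tracker.pending())  # ['ship feature']
```

The point of the tweet is that the probes themselves (ids are sequential, completion is idempotent, unknown ids fail loudly) become the derived spec.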
16. mitchellh (Group Score: 37.2 | Individual: 28.5)
Cluster: 2 tweets | Engagement: 810 (Avg: 1120) | Type: Tech
AI eliminated the natural barrier to entry that let OSS projects trust by default. People told me to do something rather than just complain. So I did. Introducing Vouch: explicit trust management for open source. Trusted people vouch for others. https://t.co/6mY8yIcvGx
The idea is simple: Unvouched users can't contribute to your projects. Very bad users can be explicitly "denounced", effectively blocked. Users are vouched or denounced by contributors via GitHub issue or discussion comments or via the CLI.
Integration into GitHub is as simple as adopting the published GitHub actions. Done. Additionally, the system itself is generic to forges and not tied to GitHub in any way.
Who and how someone is vouched or denounced is up to the project. I'm not the value police for the world. Decide for yourself what works for your project and your community.
All of the data is stored in a single flat text file in your own repository that can be easily parsed by standard POSIX tools or mainstream languages with zero dependencies.
My hope is that eventually projects can form a web of trust so that projects with shared values can share their vouch lists with each other (automatically) so vouching or denouncing a person in one project has ripple effects through to other projects.
The idea is based on the already successful system used by @badlogicgames in Pi. Thank you Mario.
Ghostty will be integrating this imminently.
See 1 related tweet
- @mitsuhiko: RT @mitchellh: AI eliminated the natural barrier to entry that let OSS projects trust by default. Pe...
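The flat-file design makes the trust data trivially scriptable. The thread does not show Vouch's actual file format, so the one-record-per-line, whitespace-separated layout below is an assumption for illustration only:

```python
# Hypothetical vouch file layout (NOT Vouch's real format):
#   <username> <vouch|denounce> <voucher>
# One record per line, whitespace-separated, zero dependencies to parse.
VOUCH_FILE = """\
alice vouch mitchellh
bob vouch alice
mallory denounce mitchellh
"""

def trusted_users(text: str) -> set[str]:
    """Users with at least one vouch and no denouncements."""
    vouched, denounced = set(), set()
    for line in text.splitlines():
        if not line.strip():
            continue
        user, action, _voucher = line.split()
        (vouched if action == "vouch" else denounced).add(user)
    return vouched - denounced

print(sorted(trusted_users(VOUCH_FILE)))  # ['alice', 'bob']
```

The same file is equally parseable with `awk` or `grep`, which is the "standard POSIX tools" property the post highlights.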
17. GuidesJava (Group Score: 36.5 | Individual: 36.5)
Cluster: 1 tweet | Engagement: 442 (Avg: 69) | Type: Tech
Designing a Production-Ready Microservices System:
- API Gateway with JWT security, rate limiting, and load balancing
- Centralized Config Server backed by Git
- Service Discovery using Eureka
- Event-driven communication with Kafka
- Saga pattern for distributed transactions
- Idempotency + Retry + Dead Letter Topics for reliability
- Circuit Breakers for resilience
- OpenTelemetry for metrics, logs, and traces
- Grafana & Prometheus for observability
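Several of these patterns are compact enough to sketch. Below is a minimal circuit breaker, assuming the usual closed/open/half-open state machine; the thresholds and names are illustrative and not tied to any specific library:

```python
import time

# Minimal circuit-breaker sketch (illustrative, not a library API).
# CLOSED: calls pass through. After `max_failures` consecutive failures
# the breaker OPENs and rejects calls until `reset_after` seconds pass,
# then allows one trial call (half-open) and closes again on success.
class CircuitBreaker:
    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: call rejected")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the breaker
        return result
```

While the breaker is open, callers get a fast rejection instead of waiting on timeouts from a failing dependency, which is the resilience property the bullet list is after.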
18. Forbes (Group Score: 36.2 | Individual: 36.2)
Cluster: 1 tweet | Engagement: 326 (Avg: 104) | Type: Tech
As CEO of OpenAI, the 40-year-old billionaire helped unleash ChatGPT, pushing artificial intelligence into the mainstream and reshaping the global economy. Now a new father, with another baby on the way, he’s building the world his kids will one day inherit.
Read more from our conversation with Sam Altman: https://t.co/4CNTnfMosn (Photo: Cody Pickens for Forbes) #Forbes250
19. rohanpaul_ai (Group Score: 36.2 | Individual: 29.3)
Cluster: 2 tweets | Engagement: 228 (Avg: 70) | Type: Tech
"Anthropic is making great money. OpenAI is making great money. If they could have twice as much compute, the revenues would go up 4 times as much. These guys are so compute constrained, and the demand is so incredibly great."
~ Jensen Huang on CNBC https://t.co/WWg9sTX5W4
See 1 related tweet
- @rohanpaul_ai: RT @rohanpaul_ai: Anthropic is on track to add as much power as OpenAI in the next three years.
~ P...
20. omarsar0 (Group Score: 35.4 | Individual: 35.4)
Cluster: 1 tweet | Engagement: 188 (Avg: 139) | Type: Tech
I think one of the most underappreciated findings in AI engineering is what this paper calls the "Grep Tax."
First, they ran nearly 10,000 experiments testing how agents handle structured data, and the headline result is that format barely matters.
But here's the weird finding: a compact, token-saving format they tested (TOON) actually consumed up to 740% more tokens at scale because models didn't recognize the syntax and kept cycling through search patterns from formats they already knew.
It's one of the reasons my preferred formats are XML and Markdown. LLMs know those really well.
The models have preferences baked into their training data, and fighting those preferences doesn't save you money. It costs you.
The other finding worth sitting with: the same agentic architecture that improves frontier model performance actively hurts open-source models. It seems that the universal best-practices guide for AI engineering may not exist.
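The familiarity point can be illustrated with a toy comparison: the same record serialized three ways. Character count is only a rough proxy for tokens, and the compact syntax below is a made-up stand-in for TOON, so this is a sketch of the tradeoff, not a reproduction of the paper's measurement:

```python
import json

# Same record in three formats. A compact novel syntax can be smaller
# on the wire yet cost more tokens in practice if the model doesn't
# recognize it -- the "Grep Tax" described above.
record = {"user": "ada", "role": "admin", "active": True}

as_json = json.dumps(record)
as_xml = "<user name='ada' role='admin' active='true'/>"
as_compact = "u:ada|r:admin|a:1"  # hypothetical TOON-like syntax

for label, text in [("json", as_json), ("xml", as_xml), ("compact", as_compact)]:
    print(f"{label:8} {len(text):3d} chars: {text}")
```

The paper's finding is precisely that the smallest serialization can still be the most expensive once the model starts cycling through search patterns for syntax it recognizes, which is why the familiar JSON/XML/Markdown encodings win in practice.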