Today's Tech Tweet Picks - March 27, 2026
Published by geeknotes

March 27, 2026 Tech Daily Briefing
Today's top tech conversations are led by @ivanfioravanti, whose post on the Gemini 3.1 Flash Live launch garnered the highest engagement. Key themes across the top stories include realtime voice and vision models, multi-agent orchestration, open-weight text-to-speech, and brain-encoding research. The community is actively discussing recent developments in AI, engineering practices, and startup strategies.
1. ivanfioravanti (Group Score: 354.6 | Individual: 41.4)
Cluster: 14 tweets | Engagement: 422 (Avg: 78) | Type: Tech
RT @OfficialLoganK: Introducing Gemini 3.1 Flash Live, our new realtime model to build voice and vision agents!!
We have spent more than a year improving the model + infra + experience, the results? A step function improvement in quality, reliability, and latency. https://t.co/0esYpmDy5l
See 13 related tweets
- @JeffDean: 📢 Another exciting step forward today with the launch of Gemini 3.1 Flash Live.
It natively underst...
- @kimmonismus: Google just launched Gemini 3.1 Flash Live, a new realtime model built for voice and vision agents. ...
- @testingcatalog: BREAKING 🚨: Gemini 3.1 Flash Live is launching on AI Studio, APIs and Gemini Live!
Gemini 3.1 Flas...
- @kimmonismus: Not gonna lie, Gemini 3.1 Flash Live sounds really cool! https://t.co/uTQa0MM87s
QT @kimmonismus:...
- @mark_k: Google launched Gemini 3.1 Flash Live today, its highest-quality real-time audio model for natural v...
2. sdrzn (Group Score: 237.9 | Individual: 33.3)
Cluster: 10 tweets | Engagement: 154 (Avg: 121) | Type: Tech
excited to share cline kanban!
it's incredible seeing how the models can break down, parallelize, and link tasks in clever ways to get work done quicker than you ever could staring at a single terminal.

QT @cline: Introducing Cline Kanban: A standalone app for CLI-agnostic multi-agent orchestration. Claude and Codex compatible.
npm i -g cline
Tasks run in worktrees, click to review diffs, & link cards together to create dependency chains that complete large amounts of work autonomously. https://t.co/4HjvwSu4Mo
See 9 related tweets
- @rohanpaul_ai: Cline just launched Cline Kanban, a platform designed to bring order to the chaos of multi-agent wor...
- @svpino: A single board to orchestrate all your coding agents.
The future of software development is managin...
- @dr_cintas: Cline just launched Kanban 🤯
Multi-agent orchestration. One board, full task visibility, dependency...
- @alexcooldev: You don’t need more agents. You need control.
Running multiple agents shouldn’t feel like juggling...
- @cline: RT @testingcatalog: Cline released Kanban, a free, open-source local web app that runs multiple CLI ...
3. qtnx_ (Group Score: 191.3 | Individual: 33.6)
Cluster: 8 tweets | Engagement: 128 (Avg: 54) | Type: Tech
Voxtral TTS is out, our first go at voice output, with really strong preference metrics! https://t.co/qHiwhxGpxD

QT @MistralAI: 🔊Introducing Voxtral TTS: our new frontier open-weight model for natural, expressive, and ultra-fast text-to-speech
🎭Realistic, emotionally expressive speech. 🌍Supports 9 languages and accurately captures diverse dialects. ⚡Very low latency for time-to-first-audio. 🔄Easily adaptable to new voices
See 7 related tweets
- @tunguz: I just tried it out, and I am really impressed. So far my favorite AI TTS. Not as many options as so...
- @vllm_project: 🎉 Congrats to @MistralAI on launching Voxtral 4B TTS — enterprise-grade TTS built for production voi...
- @testingcatalog: Voxtral TTS by @MistralAI is officially out! Open-source and fast TTS model with a support of 9 lang...
- @arthurmensch: RT @MistralAI: 🔊Introducing Voxtral TTS: our new frontier open-weight model for natural, expressive,...
- @kimmonismus: Mistral just dropped Voxtral TTS an open-weight text-to-speech model with ultra-low latency, emotio...
4. HamelHusain (Group Score: 149.2 | Individual: 39.6)
Cluster: 5 tweets | Engagement: 392 (Avg: 81) | Type: Tech
RT @noahzweben: Thrilled to announce Claude Code auto-fix – in the cloud. Web/Mobile sessions can now automatically follow PRs - fixing CI failures and addressing comments so that your PR is always green.
This happens remotely so you can fully walk away and come back to a ready-to-go PR. https://t.co/F41RTeymXJ
See 4 related tweets
- @danshipper: dang they are shipping FAST
QT @noahzweben: Thrilled to announce Claude Code auto-fix – in the cl...
- @arvidkahl: Now extrapolate this into the future: instead of just reactively fixing issues when they occur, the ...
- @lydiahallie: Claude Code can now auto-fix your PR in the background!
all you have to do is turn on the Auto Fix...
- @Cointelegraph: 🚨 BIG: Anthropic introduces Claude Code auto-fix in the cloud, enabling PRs to automatically resolve...
5. markgurman (Group Score: 112.3 | Individual: 30.0)
Cluster: 6 tweets | Engagement: 2244 (Avg: 727) | Type: Tech
Apple will let any AI platform - big apps including Gemini, Claude, Alexa, Meta AI, etc. - be queried in Siri if they enable an Extensions service inside their iOS, macOS, or iPadOS app. Apple will have a new section in the App Store. Unclear if there's an approval process.

QT @markgurman: BREAKING: Apple is planning to open up Siri to run any AI service via their App Store apps as part of iOS 27, dropping ChatGPT as the exclusive outside partner in Apple Intelligence and Siri. https://t.co/tfEnHTheBP
See 5 related tweets
- @markgurman: This is all completely separate from its - unchanged - deal with Google Gemini to help it rebuild i...
- @MikeIsaac: narrative for a while was "apple is behind on AI" — which was true when siri was a bust after trying...
- @kimmonismus: Apple is opening Siri to rival AI assistants starting with iOS 27, ending ChatGPT's exclusive partne...
- @business: Apple plans to open Siri to outside artificial intelligence assistants, a major move aimed at bolste...
- @WesRoth: RT @WesRoth: Apple is preparing a massive artificial intelligence overhaul for its ecosystem, center...
6. danshipper (Group Score: 105.2 | Individual: 70.5)
Cluster: 2 tweets | Engagement: 603 (Avg: 41) | Type: Tech
BREAKING!
Introducing Plus One:
A hosted @openclaw that lives in your Slack and comes pre-loaded with @every's best tools, skills, and workflows.
Set it up in one click, and use your ChatGPT subscription (or any other API key).
Bring your Plus One to work: https://t.co/7Lo2xHM1B4
Connected to the @every ecosystem, Plus Ones automatically use @every's agent-native apps, no setup required:
- @CoraComputer for searching, sending, and managing email
- @TrySpiral for great writing in your voice
- Proof (https://t.co/NTVY3NgAKy) for agent-native document editing
Custom skills and workflows we use and love
Plus Ones come pre-loaded with skills and workflows we use ourselves at @every—some we've made, and some we think are great.
- Content digest—summarizes the publications you read, starting with @every
- Daily brief—your day's schedule and to-dos sent to you each morning
- Animate—turn any static screenshot into an animation with @Remotion
- Frontend—Anthropic's front-end skill (which we use all the time!)
We also make it fast to connect Google, Notion, Github, and more to your Plus One.
Our goal is to give you a capable AI coworker right away, not a vanilla OpenClaw that you have to teach from scratch.
Why we built Plus One
@OpenClaw has changed the way we work at Every.
We effectively have a parallel org chart of AI coworkers, each with a name, a manager, and real responsibilities. Because of them our workflows are completely different—our company is different—and we would never go back.
But getting here has been hard. Claws require a significant amount of manual setup and a dedicated machine—like a Mac Mini—running 24/7 to stay responsive.
We have learned that the hard part of Claws is the infrastructure around them—the hosting, the integrations, the skills, and the ongoing care.
We’ve made them work great for our team, and we want to share everything we’ve learned with you.
We're letting in 20 people a week to start, and scaling invites quickly from there. @Every subscribers get priority.
Bring your Plus One to work: https://t.co/3GRscNf15z
See 1 related tweet
- @hammer_mt: Throw away that Mac mini - get your claw while it's hot
QT @danshipper: BREAKING!
Introducing Pl...
7. kimmonismus (Group Score: 95.2 | Individual: 35.6)
Cluster: 3 tweets | Engagement: 571 (Avg: 360) | Type: Tech
Meta is back: Meta just dropped TRIBE v2, a foundation model that predicts how your brain responds to sight, sound, and language.
Trained on 500+ hours of fMRI data from 700+ people, it can predict a new person's brain activity without any retraining, and its predictions are actually more accurate than a real brain scan. Neuroscience just got a serious AI upgrade!\n\nQT @AIatMeta: Today we're introducing TRIBE v2 (Trimodal Brain Encoder), a foundation model trained to predict how the human brain responds to almost any sight or sound.
Building on our Algonauts 2025 award-winning architecture, TRIBE v2 draws on 500+ hours of fMRI recordings from 700+ people to create a digital twin of neural activity and enable zero-shot predictions for new subjects, languages, and tasks.
Try the demo and learn more here: https://t.co/VkMd1YpQWI
See 2 related tweets
- @waitin4agi_: RT @AIatMeta: Today we're introducing TRIBE v2 (Trimodal Brain Encoder), a foundation model trained ...
- @seconds_0: we're so close
QT @AIatMeta: Today we're introducing TRIBE v2 (Trimodal Brain Encoder), a foundati...
8. thdxr (Group Score: 95.0 | Individual: 30.7)
Cluster: 4 tweets | Engagement: 650 (Avg: 896) | Type: Tech
what a journey

QT @nextjs: Next.js 16.2 introduces a stable Adapter API, built with Netlify, Cloudflare, OpenNext, AWS, and Google Cloud. But the API is only part of the story.
Next.js is used by millions of developers across every major cloud, and making it work well everywhere is on us. Here are our commitments.
See 3 related tweets
- @rauchg: Next.js is for everyone
QT @nextjs: Next.js 16.2 introduces a stable Adapter API, built with Netl...
- @vercel_dev: RT @nextjs: Next.js 16.2 introduces a stable Adapter API, built with Netlify, Cloudflare, OpenNext, ...
- @CloudflareDev: RT @FredKSchott: ☑️ Vinext (Next.js on Vite) ☑️ OpenNext (Unofficial adapter API) ✅ NEW: Official Ad...
9. elonmusk (Group Score: 89.9 | Individual: 24.9)
Cluster: 5 tweets | Engagement: 56848 (Avg: 21917) | Type: Tech
AI content will vastly exceed all human content

QT @wintonARK: We have been surpassed: AI written output exceeded human written output in 2025 https://t.co/Dv4CNJDMVf
See 4 related tweets
- @MSBIntel: Musk says AI will dominate content.
“AI content will vastly exceed all human content.”
The interne...
- @kimmonismus: AI written output exceeds human written output for the first time in history.
Looking at the very ...
- @Cointelegraph: 🚨 JUST IN: Elon Musk says AI-generated content will vastly exceed human-created content. https://t.c...
- @AndrewCurran_: Eclipsed. https://t.co/GSFrS47kji
QT @wintonARK: We have been surpassed: AI written output exceed...
10. edzitron (Group Score: 76.9 | Individual: 33.4)
Cluster: 3 tweets | Engagement: 211 (Avg: 429) | Type: Tech
8.3m in revenue in a month max? On 900m weekly active users? Not great. Don’t think I’ve ever seen ad sales framed as ARR either, grim

QT @steph_palazzolo: New: OpenAI has surpassed $100m in ARR from its ads pilot, which launched 6 weeks ago. It's expanded to 600+ advertisers and plans to launch self-serve advertiser access in April.
See 2 related tweets
- @steph_palazzolo: New: OpenAI has surpassed $100m in ARR from its ads pilot, which launched 6 weeks ago. It's expanded...
- @CNBC: OpenAI ads pilot tops $100 million in annualized revenue in under 2 months https://t.co/M5Dgt7jhx8...
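The skepticism in the quoted tweet above is simple arithmetic: an annualized run rate (ARR) is just one month's revenue multiplied by 12, so $100M ARR implies roughly $8.3M per month, or under a cent per weekly active user. A minimal sketch of that back-of-the-envelope math (figures taken from the tweets above):

```python
arr_usd = 100_000_000  # reported annualized run rate from the ads pilot
wau = 900_000_000      # ~900M weekly active users cited in the quote tweet

monthly_usd = arr_usd / 12               # ARR = monthly revenue * 12
cents_per_wau = monthly_usd / wau * 100  # monthly ad revenue per weekly active user

print(f"monthly: ${monthly_usd / 1e6:.1f}M")        # monthly: $8.3M
print(f"per WAU: {cents_per_wau:.2f} cents/month")  # per WAU: 0.93 cents/month
```

Whether that is "not great" depends on the baseline; six weeks into a pilot with 600+ advertisers, the per-user figure is the number to watch as self-serve access opens up.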
11. tunguz (Group Score: 67.6 | Individual: 25.4)
Cluster: 3 tweets | Engagement: 59 (Avg: 86) | Type: Tech
Or - hear me out - let them work remotely.

QT @AmandaAskell: Tech companies pay millions of dollars for their employees and then stick them in open-plan offices that make it nearly impossible to get work done. Best strategy for poaching employees is probably to just offer them an office with a door.
See 2 related tweets
- @Winterrose: yes. real offices not shared
QT @AmandaAskell: Tech companies pay millions of dollars for their e...
- @tszzl: RT @AmandaAskell: Tech companies pay millions of dollars for their employees and then stick them in ...
12. levelsio (Group Score: 64.4 | Individual: 21.0)
Cluster: 4 tweets | Engagement: 1582 (Avg: 958) | Type: Tech
So @nikitabier implemented @photomatt's idea to stop AI bots from destroying the reply section on here
You can set it to only allow people you follow and the people in turn they follow to reply, nobody else
If on average ppl follow 500 people that means still 500*500=250,000 possible repliers
But all the spammers are isolated out 👏

QT @levelsio: This would be genius actually @nikitabier
Where people I follow can reply to my tweets but also the people they follow (like 2nd degree follows)
And maybe you can see that in small text too like:
@photomatt (via @levelsio): "Bla bla bla"
Then if you realize that's an AI bot you just unfollow your friend
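The 500×500 estimate in the tweet above can be made precise: counting first- and second-degree follows as eligible repliers gives an upper bound, since real follow graphs overlap heavily and the true pool is smaller. A quick sketch, assuming a uniform average follow count of 500:

```python
avg_follows = 500  # assumed average number of accounts a user follows

# Eligible repliers under the proposed rule: accounts you follow (1st degree)
# plus everyone those accounts follow (2nd degree). This is an upper bound:
# it ignores overlap between follow lists and mutual follows.
first_degree = avg_follows
second_degree = avg_follows * avg_follows

max_repliers = first_degree + second_degree
print(max_repliers)  # 250500
```

The point of the design is not the exact count but that unknown accounts (spam bots included) sit outside everyone's 2nd-degree graph and simply cannot reply.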
See 3 related tweets
- @fabianstelzer: this is clever in three ways:
- it stops the bots
- increases your propensity to respond to show...
- @levelsio: This is why this system is so cool
You can punish people for posting AI bot spam
And you punish pe...
- @yacineMTB: RT @levelsio: To give you an idea of how bad the AI bot problem was getting
I muted about 17,500 of...
13. garrytan (Group Score: 63.9 | Individual: 39.0)
Cluster: 2 tweets | Engagement: 601 (Avg: 327) | Type: Tech
One of the most important things about this new age is you have to use tokens aggressively to create something remarkable
You have to let it rip. If you do, and you have agency and taste, the result will be remarkable.
So token credits for AI is a big part of making startups accessible regardless of where you grew up or whether your family has money

QT @ycombinator: Every student accepted into Startup School India now gets $25k+ in AI and cloud credits.
Apply, get in, and start building: https://t.co/gncXSJGhdb
See 1 related tweet
- @paulg: RT @ycombinator: Every student accepted into Startup School India now gets $25k+ in AI and cloud cre...
14. fchollet (Group Score: 62.8 | Individual: 31.5)
Cluster: 2 tweets | Engagement: 1082 (Avg: 361) | Type: Tech
RT @levie: Jevons paradox is happening in real time. Companies, especially outside of tech, are realizing that they can now afford to take on software projects that they wouldn’t have been able to tackle before because now AI lets them do so.
We’re going to start to use software for all new things in the economy because it’s incrementally cheaper to produce. Marketing teams at big companies will have engineers helping to automate workflows. Engineers in life sciences and healthcare will automate research. Small businesses will hire engineers for the first time to build better digital experiences.
And as long as AI agents still require a human who understands what to prompt, how to review when an agent goes off the rails, how to guide it back, how to maintain the system that was built, how to fix the ongoing bugs, and more, we will still have humans managing these agents.
This is why all the advice you get of not going into engineering is wrong. The world is going to increasingly be made up of software, and the people that understand it best will be in a strong economic position. This will happen in other roles as well where output goes up and demand increases.
See 1 related tweet
- @fchollet: There's going to be a lot more software, and a lot more demand for software engineers. And a lot mor...
15. TheAhmadOsman (Group Score: 62.1 | Individual: 32.3)
Cluster: 2 tweets | Engagement: 163 (Avg: 191) | Type: Tech
I didn’t comment much on Google’s TurboQuant yesterday because I didn’t want to be the party pooper
But if you guys imagine that there is a free lunch out there then I am sorry to disappoint you

QT @no_stp_on_snek: I implemented Google's TurboQuant paper (ICLR 2026) in llama.cpp with Metal kernels for Apple Silicon.
4.9× KV cache compression. Working end-to-end on M5 Max with Qwen 3.5 35B MoE and Qwopus v2 27B.
Speed needs work (unoptimized shader), compression target met.
Repo: https://t.co/7aUaWo7Mm1
Note: as you'll see from the git history, when I say "I" it's in conjunction with claudecode and codex. Just lots of steering and babysitting.
See 1 related tweet
- @rickasaurus: Wow https://t.co/6ZPAD95UrP
QT @no_stp_on_snek: Google dropped the TurboQuant paper yesterday mor...
16. WesRoth (Group Score: 61.6 | Individual: 28.3)
Cluster: 3 tweets | Engagement: 32 (Avg: 34) | Type: Tech
Google DeepMind unveiled Lyria 3 Pro, a major upgrade to its music generation model that brings structural awareness, extended track lengths, and broad integration across the Google ecosystem.
Unlike previous models that generated short, unpredictable clips, Lyria 3 Pro allows creators to map out tracks up to 3 minutes long.
Users can explicitly prompt for complex musical structures, dictating exactly where the intro, verses, choruses, and bridges should occur.\n\nQT @GoogleDeepMind: You can now create longer tracks with Lyria 3 Pro. 🎶
Map out intros, verses, choruses, and bridges to build high-fidelity compositions up to 3 minutes long. 🎹
See 2 related tweets
- @rohanpaul_ai: RT @rohanpaul_ai: Google DeepMind launched Lyria 3 Pro
Lyria 3 Pro generates up to three-minute tra...
- @dl_weekly: 🤖 From this week's issue: Google launches Lyria 3 Pro, an upgraded music generation model that produ...
17. jukan05 (Group Score: 61.4 | Individual: 24.3)
Cluster: 3 tweets | Engagement: 171 (Avg: 516) | Type: Tech
Fuck, I really needed this, but Sama killed it.

QT @wallstengine: OpenAI has indefinitely shelved plans to release its adult chatbot, after concerns from employees and investors and as the company shifts focus back to core products like coding and productivity tools. - Verge https://t.co/fuIsJAHqdg
See 2 related tweets
- @FT: FT exclusive: OpenAI has shelved plans to release an erotic chatbot 'indefinitely' as it refocuses o...
- @dejavucoder: openai blueballing the erotic chatbot plan
QT @unusual_whales: OpenAI puts erotic chatbot plans o...
18. rohanpaul_ai (Group Score: 61.3 | Individual: 34.7)
Cluster: 2 tweets | Engagement: 60 (Avg: 82) | Type: Tech
New Google DeepMind paper introduces Ego2Web and shows that today’s web agents still struggle badly when real-world video must guide online actions.
The problem is that most web-agent tests only check what an agent can do inside a browser, not whether it can look at the user’s surroundings, figure out what matters, and then use that information correctly on the web.
Ego2Web fixes that by pairing 500 first-person videos with real web tasks, like spotting a product, brand, place, or exercise in the video and then finding the right page on Amazon, YouTube, Wikipedia, or Google Maps.
The authors also built an automatic judge that checks the agent’s video evidence, browser steps, screenshots, and final answer, and this judge matches human decisions about 84% of the time.
When they tested 6 strong agents, the best one reached only 58.6% success, which means even top systems still miss a big share of tasks that humans can verify.
The main failure modes: 36% of errors came from picking the wrong object, 18% from misunderstanding timing, and raw video worked much better than captions or no vision.
Overall, the paper is not mainly proposing a better agent, but a better test that exposes the missing link between seeing the physical world and acting correctly in the digital world.
arxiv.org/pdf/2603.22529

QT @shoubin621: Introducing Ego2Web from Google DeepMind and UNC Chapel Hill, accepted to #CVPR2026.
AI agents can browse the web. But can they act based on what you see? Existing benchmarks focus only on web interaction while ignoring the real world.
Ego2Web bridges egocentric video perception and web execution, enabling agents that can see through first-person video, understand real-world context, and take actions on the web grounded in the egocentric video.
This opens a path toward AI assistants that operate seamlessly across physical and digital environments. We hope Ego2Web serves as an important step for building more capable, perception-driven agents.
🧵👇
See 1 related tweet
- @rohanpaul_ai: RT @rohanpaul_ai: New GoogleDeepmind paper introduces Ego2Web and shows that today’s web agents stil...
19. jasonlk (Group Score: 61.1 | Individual: 26.4)
Cluster: 3 tweets | Engagement: 54 (Avg: 70) | Type: Tech
Every single app we vibe code is better than the last one.
⚙️ Some of it is the models
⌨️ Some of it is the vibe platforms like Replit getting better and better
👩‍💻 Some of it is us getting better
But man, each one is better than the last.
We'll demo our latest app we vibe'd, our AI VP of Customer Success, next week on SaaStr AI Workshop Wednesday
Come join and see how we built it
It's the best one ... yet.
See 2 related tweets
- @jasonlk: Everyone celebrates when the vibe-coded app hits production. Nobody talks about what happens next. ...
- @ycombinator: RT @axios: .@replit CEO @amasad on vibe coding:
"We started our company even 10 years ago before v...
20. vllm_project (Group Score: 61.1 | Individual: 32.4)
Cluster: 2 tweets | Engagement: 130 (Avg: 146) | Type: Tech
🎉 Congrats to @Cohere on releasing Cohere Transcribe, a 2B speech recognition model (Apache 2.0, 14 languages). Day-0 support in vLLM.
Cohere contributed encoder-decoder serving optimizations to vLLM: variable-length encoder batching and packed attention for the decoder. Up to 2x throughput improvement for speech workloads, and these gains carry over to all encoder-decoder models on vLLM.
Thanks to the @Cohere team for the contribution!
PR 🔗 https://t.co/mk8Nb0iHqW Blog 🔗 https://t.co/DblyoZnwQ0

QT @cohere: Introducing: Cohere Transcribe – a new state-of-the-art in open source speech recognition. https://t.co/l87Z6oyQdM
See 1 related tweet
- @victormustar: RT @cohere: Introducing: Cohere Transcribe – a new state-of-the-art in open source speech recognitio...