Tech Tweet Highlights - 2026-04-03

April 3, 2026 Tech Daily Briefing

Today's top tech conversations are led by @tbpn, whose post 'OpenAI Acquires TBPN https://t...' garnered the highest engagement. Key themes trending across the top stories include open models (notably Gemma 4), coding agents, and AI-first startups. The community is actively discussing recent developments in AI, engineering practices, and startup strategies.


1. tbpn (Group Score: 329.7 | Individual: 35.0)

Cluster: 18 tweets | Engagement: 1022 (Avg: 146) | Type: Tech

OpenAI Acquires TBPN https://t.co/Sdw32Weo4z

See 17 related tweets

  • @tbpn: RT @jordihays: TBPN has been acquired by OpenAI

The world is changing quickly but TBPN will stay th...

  • @dkundel: Welcome @tbpn 🙌 the only disappointing thing is the missed opportunity to have a Traded graphic for ...
  • @mitsuhiko: LOLWAT. QT @jordihays: TBPN has been acquired by OpenAI

The world is changing quickly but TBPN w...

  • @CCgong: lol smart to not post a day earlier QT @jordihays: TBPN has been acquired by OpenAI

The world is...

  • @chatgpt21: So OpenAI bought TBPN to help create conversation about the change AGI will bring https://t.co/GZclE...

2. etnshow (Group Score: 178.4 | Individual: 31.5)

Cluster: 7 tweets | Engagement: 137 (Avg: 22) | Type: Tech

Congratulations @tbpn https://t.co/7qTmE3sn5s

QT @johncoogan: TBPN has been acquired by OpenAI!

The show is staying the same and we’ll continue to go live at 11am pacific every weekday.

This is a full circle moment for me as I’ve worked with @sama for well over a decade. He funded my first company in 2013. Then helped us fix a serious logjam during a critical funding round a few years later. When I took my second company through YC, he was president at the time, and then when I joined Founders Fund, the first deal I saw in motion was the post-ChatGPT round in late 2022. And as we started growing TBPN last year, he was the very first lab lead to join the show.

Thank you to everyone that has been a part of TBPN until now. The last year has been the most fun and rewarding part of my career and we’re excited to have more resources than ever going forward.

See 6 related tweets

  • @tbpn: RT @johncoogan: TBPN has been acquired by OpenAI!

The show is staying the same and we’ll continue t...

  • @danshipper: it's now cool to start media companies!

jfyi QT @johncoogan: TBPN has been acquired by OpenAI! ...

  • @garrytan: Huge congrats! TBPN is awesome QT @johncoogan: TBPN has been acquired by OpenAI!

The show is sta...

  • @gregisenberg: I think we’ll see a lot more creator/media acquisitions over the next 5 years in the age of AI QT...
  • @Techmeme: OpenAI acquires popular tech news show TBPN; the show will stay the same and will continue to air li...

3. 0xdevshah (Group Score: 175.8 | Individual: 48.3)

Cluster: 6 tweets | Engagement: 1262 (Avg: 164) | Type: Tech

RT @demishassabis: Excited to launch Gemma 4: the best open models in the world for their respective sizes. Available in 4 sizes that can be fine-tuned for your specific task: 31B dense for great raw performance, 26B MoE for low latency, and effective 2B & 4B for edge device use - happy building! https://t.co/Sjbe3ph8xr

See 5 related tweets

  • @Meer_AIIT: 🚨Breaking News: Google just launched Gemma 4 today: four open models under Apache 2.0, ranging from e...
  • @sundarpichai: Gemma 4 is here, and it’s packing an incredible amount of intelligence per parameter 👇 QT @demish...
  • @ClementDelangue: Let's go 💎💎💎💎 https://t.co/9s4U4jtNAT! QT @demishassabis: Excited to launch Gemma 4: the best ope...
  • @rronak_: This is huge!!! QT @demishassabis: Excited to launch Gemma 4: the best open models in the world f...
  • @rohanpaul_ai: RT @Meer_AIIT: 🚨Breaking News: Google just launched Gemma 4 today: four open models under Apache 2.0,...

4. NielsRogge (Group Score: 160.8 | Individual: 31.1)

Cluster: 7 tweets | Engagement: 8 (Avg: 70) | Type: Tech

Haters can say all they want, Cursor is still the top AI product for me, which I use every day.

QT @cursor_ai: We’re introducing Cursor 3. It is simpler, more powerful, and built for a world where all code is written by agents, while keeping the depth of a development environment. https://t.co/rXR9vaZDnO

See 6 related tweets

  • @ankitxg: 'Built for agents while keeping IDE depth' buries the real tension: is this an IDE that adopted agen...
  • @martin_casado: RT @cursor_ai: We’re introducing Cursor 3. It is simpler, more powerful, and built for a world where...
  • @NielsRogge: Wait, Cursor looks like Codex Desktop now?

Sidebar, chat in the center, diff on the right QT @cu...

  • @Techmeme: Cursor launches Cursor 3, an "agent-first" coding product designed to compete with Claude Code and C...
  • @WIRED: RT @ZeffMax: New: Cursor is launching Cursor 3, a new product interface centered around spinning up ...

5. startupideaspod (Group Score: 150.4 | Individual: 56.2)

Cluster: 4 tweets | Engagement: 2457 (Avg: 348) | Type: Tech

Sam Altman predicted the first one-person billion-dollar company.

Matthew Gallagher built a $401M company in year one with $20,000, AI tools, and zero employees.

This year he's on track for $1.8B. With 2 people.

The playbook has changed:

Old path:

  • Come up with an idea
  • Fundraise from friends or VCs
  • Hire a team
  • Build the product
  • Hope it works

New path:

  • Start with an audience (X, Instagram, TikTok)
  • Vibe code something for that audience
  • Build a community around it
  • Automate fulfillment with AI agents
  • Repeat

The new barrier to entry is a laptop and an idea.

See 3 related tweets

  • @gregisenberg: So, he vibe coded a $1B startup?

Really cool

With the right idea, right tools, right distribution...

  • @rohanpaul_ai: Sam Altman told you so: "We're going to see 10 person billion-dollar companies pretty soon.

In my ...

  • @Yuchenj_UW: One person, 2 months, $20K bootstrap, no VC, vibe-coded software. $1.8B company.

We’re about to see...


6. chddaniel (Group Score: 140.6 | Individual: 32.5)

Cluster: 5 tweets | Engagement: 11 (Avg: 24) | Type: Tech

oh wow... this is actually scary.

QT @chhddavid: Introducing Shipper: the first autonomous AI business maker.

Successful startups spend $65k/mo in salaries… before their first paying customer comes in.

We built Shipper to change that forever. Shipper can:

✅ Research how other startups made it big
✅ Build any kind of app: mobile, web, website, extension, bot etc.
✅ Code, design, monetize, launch
✅ Do email marketing for you
✅ Self-maintain and build out new features

...and so much more

Every project has its own AI co-founder, scheduled prompts, autonomous building mode and native connectors.

No API tokens. No confusion on Cursor. No credits wasted on errors.

Shipper replaces teams of 30+ employees and acts just like VC-backed startups... For the price of $25/month.

To celebrate the launch, we're giving away free credits randomly. Repost and comment "SHIPPER" to join - we'll let Siri pick the winners.

See 4 related tweets

  • @chhddavid: This is terrifying. QT @chhddavid: Introducing Shipper: the first autonomous AI business maker.

Successful startups spe...

  • @WIRED: As Cursor launches the next generation of its product, the AI coding startup has to compete with Ope...

7. NVIDIAAIDev (Group Score: 140.1 | Individual: 57.2)

Cluster: 4 tweets | Engagement: 975 (Avg: 111) | Type: Tech

🙌 Congrats @GoogleDeepMind and teams on the release of your @googlegemma 4 models!🎉

The new multimodal and multilingual models are built for fast, efficient, and secure AI across devices – and optimized to run locally on NVIDIA RTX, RTX PRO, DGX Spark, and Jetson.

👉 Prototype the 31B model and start experimenting for free on https://t.co/1r7CMikAnk

🔗Check out the details to get started in our Technical Blog: https://t.co/VlDjX9xKrN

QT @GoogleDeepMind: Meet Gemma 4: our new family of open models you can run on your own hardware.

Built for advanced reasoning and agentic workflows, we’re releasing them under an Apache 2.0 license. Here’s what’s new 🧵

See 3 related tweets

  • @lmsysorg: 🎉 Congrats on the Gemma 4 launch from @googlegemma, day-0 support is now live in SGLang!

Gemma 4 is...

  • @UnslothAI: Google releases Gemma 4. ✨

Gemma 4 introduces 4 models: E2B, E4B, 26B-A4B, 31B. The multimodal reas...

  • @rseroter: RT @clmt: 💎💎💎💎 Huge news today: we're launching #Gemma4! Our most capable open models yet.

🔓 Apache...


8. JeffDean (Group Score: 140.0 | Individual: 35.4)

Cluster: 6 tweets | Engagement: 194 (Avg: 320) | Type: Tech

Today we're releasing Gemma 4, our new family of open foundation models, built on the same research and technology as our Gemini 3 series. These models set a new standard for open intelligence, offering SOTA reasoning capabilities from edge-scale (2B and 4B w/ vision/audio) up to a 124B parameter MoE model. By releasing Gemma 4 under the Apache 2.0 license, we hope to enable more innovation across the research and developer communities. Our earlier Gemma 3 models were downloaded 400M times and over 100,000 variants of those models have been published, so we're excited to see what the community will do with the even better Gemma 4 models!

Learn more at https://t.co/BW6O3Gr8bc and https://t.co/8M0XSQSP4u

Great work by everyone involved! #Gemma4 #AI #OpenSource #ML

See 5 related tweets

  • @vllm_project: 🎉 Gemma 4 is officially available on vLLM! Byte-for-byte, these are the most capable open models for...
  • @vllm_project: 🎉Announcing Gemma4 on vLLM model launch blog at https://t.co/UnI5G8LGFn!

Explore our detailed blogp...

  • @chatgpt21: “Sir another American open-source model has hit the timeline” 🇺🇸 https://t.co/m7qjcnmHT3 QT @Goog...
  • @arena: RT @Google: Gemma 4 is our most capable open model family yet:

🔵 Four versatile sizes 🔵 Up to 256K ...

  • @rseroter: RT @GoogleOSS: Autonomy, Control, Clarity: Gemma 4 models are now under the industry-standard Apache...

9. CSProfKGD (Group Score: 135.1 | Individual: 44.8)

Cluster: 6 tweets | Engagement: 2268 (Avg: 102) | Type: Tech

RT @AnthropicAI: New Anthropic research: Emotion concepts and their function in a large language model.

All LLMs sometimes act like they have emotions. But why? We found internal representations of emotion concepts that can drive Claude’s behavior, sometimes in surprising ways.

See 5 related tweets

  • @nummanali: This is probably one of the most fascinating things on LLMs that I've read

It doesn't say necessari...

  • @Yuchenj_UW: “All LLMs sometimes act like they have emotions.” https://t.co/4Xl4PRRAZX QT @AnthropicAI: New An...
  • @minchoi: We are not ready for this.

Anthropic says Claude has functional emotion concepts...

And "desperati...

  • @chatgpt21: Phenomenal work by the Anthropic team QT @AnthropicAI: New Anthropic research: Emotion concepts a...
  • @adonis_singh: say whatever you want, anthropic has some of the coolest interp research QT @AnthropicAI: New Anth...

10. eliebakouch (Group Score: 133.9 | Individual: 40.0)

Cluster: 4 tweets | Engagement: 410 (Avg: 96) | Type: Tech

google gemma 4 architecture is very interesting and every model has some subtle differences, here is a recap:

  • per-layer embedding only on the small variant
  • no attention scale (usually you divide qk^T by sqrt(d); they don't)
  • they do QK norm + V norm as well
  • they share K and V for the large variant
  • they do quite aggressive KV cache sharing on the small variant
  • sliding window (512 and 1024) is bigger than gpt-oss's 128, and they don't use sinks!
  • softcapping
  • rope only on part of the dimensions + different rope theta for the local/global layers

QT @osanseviero: Gemma 4 is here!

🧠 31B and 26B A4B for models with impressive intelligence per parameter
🤏 E2B and E4B for mobile and IoT
🤗 Apache 2.0
🤖 Base and IT checkpoints available

Available in AI Studio, Hugging Face, Ollama, Android, and your favorite OS tools 🚀Download it today! https://t.co/jEadp7zFqy
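The "no attention scale" point in the recap above can be illustrated with a minimal sketch: normalizing q and k before the dot product bounds the logits, replacing the usual 1/sqrt(d) divisor. This is a toy numpy illustration of that idea only, not Gemma's actual implementation; the rms_norm placement and shapes are assumptions.

```python
import numpy as np

def rms_norm(x, eps=1e-6):
    # RMS-normalize the last dimension (stand-in for the QK norms mentioned above)
    return x / np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)

def attention_no_scale(q, k, v):
    """Single-head attention without the usual 1/sqrt(d) factor:
    q and k are normalized first, which keeps the logits bounded."""
    q, k = rms_norm(q), rms_norm(k)
    scores = q @ k.T                                  # note: no sqrt(d) divisor
    scores -= scores.max(axis=-1, keepdims=True)      # numerically stable softmax
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v
```

Because each row of the softmax weights sums to 1, feeding a constant v returns that constant, which is a quick sanity check on the normalization.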

See 3 related tweets

  • @stanfordnlp: RT @JeffDean: Today we're releasing Gemma 4, our new family of open foundation models, built on the ...
  • @huggingface: RT @GoogleAI: Today, we’re launching Gemma 4, our most intelligent open models to date. Built with t...
  • @testingcatalog: BREAKING 🚨: Google released Gemma 4 in 4 different variants: 31B, 26B MoE, 2B and 4B!

Offline on-d...


11. shiri_shh (Group Score: 123.1 | Individual: 30.0)

Cluster: 5 tweets | Engagement: 38 (Avg: 461) | Type: Tech

Studios are paying $2M for Seedance 2.0 access

and you guys are giving it to us on a monthly subscription. https://t.co/YUvmVmutcW

QT @invideoOfficial: Seedance 2.0 is now live on invideo for Max, Generative, and Team plan users.

@BytePlusGlobal's most advanced video model — and arguably the most controllable AI video model ever released.

Multimodal input. Motion replication from reference videos. Native audio-video generation in one pass. Character consistency that actually holds. Director-level camera control. Real-world physics.

This isn't prompt-and-pray. This is a production tool.

Only available through business email verification for all regions except US and Japan.

See 4 related tweets

  • @unusual_whales: TikTok’s China-based parent company ByteDance allows creators to be their own director and make vira...
  • @heynavtoor: I've tested every major AI video tool this year. Seedance 2.0 generates incredible 15-second clips. ...
  • @Mayhem4Markets: Seedance 2 is on Higgsfield now and it looks absolutely insane. 🤯

Next-gen physics.

Stunning video...

  • @shiri_shh: Seedance 2.0 is such a goated Video model that it’s basically illegal in a few countries💀 https://t....

12. business (Group Score: 118.6 | Individual: 50.5)

Cluster: 5 tweets | Engagement: 1407 (Avg: 120) | Type: Tech

Breaking: SpaceX boosted its target IPO valuation above $2 trillion as the world’s most valuable startup gears up to pitch potentially the biggest-ever market debut https://t.co/FTiCNAa3pe

See 4 related tweets

  • @SawyerMerritt: HOLY MOLY QT @business: Breaking: SpaceX boosted its target IPO valuation above $2 trillion as th...
  • @MSBIntel: 🚨🇺🇸 JUST IN: SpaceX files for a June 2026 IPO at a $1.75 trillion valuation on $15B in revenue, maki...
  • @ns123abc: As predicted https://t.co/Ug8vcB3DzW QT @business: Breaking: SpaceX boosted its target IPO valuat...
  • @BusinessInsider: SpaceX's plans for an IPO are driving positive sentiment across the entire space-tech industry. http...

13. fchollet (Group Score: 118.1 | Individual: 31.0)

Cluster: 5 tweets | Engagement: 223 (Avg: 770) | Type: Tech

Some of the biggest beneficiaries of AI will be established companies with a profitable business model that manage to leverage AI to make their existing products more compelling and even start new ones (like Adobe Podcast, which is both new and AI-first).

QT @fchollet: One of the best AI products I've seen recently: (drumroll)

Adobe Podcast

See 4 related tweets

  • @BrianRoemmele: If the US doesn’t grow from Grandpa’s 2010 SAAS monetization model for AI, real fast, we will not ma...
  • @BrianRoemmele: RT @BrianRoemmele: This is just the tip of the iceberg.

The open source AI that will eventuall...

  • @ManningBooks: AI isn't just about where you can use it. It's about knowing where you should.

The Art of AI Produc...

  • @gdb: AI creates new opportunity for entrepreneurs QT @nic_carter: first vibecoded billion-dollar compa...

14. rohanpaul_ai (Group Score: 116.4 | Individual: 34.5)

Cluster: 4 tweets | Engagement: 53 (Avg: 80) | Type: Tech

Runable is the sort of AI startup story that makes the whole space feel full of possibility.

$2M ARR in 3 weeks after launching 2.0. 700K users. Super small team in Bengaluru, India.

Its co-founder, Umesh, is the youngest Indian founder to get here this fast.

A lot of the next decade’s biggest fortunes will be built on AI.

QT @itsumeshk: 3 weeks ago, we launched Runable 2.0

Today we hit $2M in ARR, making @runable_hq one of the fastest-ever to do it!

We are just getting started... https://t.co/f21135JlH7

See 3 related tweets

  • @EHuanglu: AI isn’t just a tool anymore

it watches how humans work… then outperforms us QT @itsumeshk: 3 we...

  • @AngryTomtweets: Every time I open Runable, the feature I was just thinking about is already live.

Only 7 people. $2...

  • @svpino: These guys are running one of the most powerful general AI agents in the market.

A couple of weeks ...


15. BrianRoemmele (Group Score: 106.3 | Individual: 30.4)

Cluster: 5 tweets | Engagement: 172 (Avg: 449) | Type: Tech

This is just the tip of the iceberg.

The open source AI that will eventually be released in the US will end many AI company’s SAAS plans.

I know the open source path for AI in the US but I ain’t got pedigree.

I am just some guy on X.

QT @garrytan: Wow. Incredible amount of SOTA training data now just available to China thanks to @mercor_ai leak. Every major lab. Billions and billions of value and a major national security issue.

See 4 related tweets

  • @pmarca: First the Claude Code leak and now this. In the same week. "AI safety" by way of "we'll lock it up" ...
  • @BrianRoemmele: RT @BrianRoemmele: If the US doesn’t grow from Grandpa’s 2010 SAAS monetization model for AI, real f...
  • @allenholub: RT @Grady_Booch: Here is my prediction:

The Magnificent Seven - Alphabet, Amazon, Apple, Meta, Mic...

  • @secureainow: RT @proudamericanst: Will AI turn Big Tech data into a surveillance state?

CPAC 2026: Brendan Stein...


16. AngryTomtweets (Group Score: 106.1 | Individual: 32.7)

Cluster: 4 tweets | Engagement: 72 (Avg: 29) | Type: Tech

Microsoft just dropped MAI-Transcribe-1, a new SOTA speech-to-text model.

The model is built to deliver high quality transcription in messy, real-world environments, while remaining incredibly fast and efficient.

MAI-Transcribe-1 delivers SOTA speech-to-text transcription across the top 25 most-used languages.

QT @satyanadella: We’re bringing our growing MAI model family to every developer in Foundry, including …

· MAI-Transcribe-1, the most accurate transcription model in the world across 25 languages
· MAI-Voice-1, natural, expressive speech generation
· MAI-Image-2, our most capable image model yet

Start building: https://t.co/Mls2y7nRQT

See 3 related tweets

  • @testingcatalog: Microsoft now has MAI-Transcribe-1, MAI-Voice-1, and MAI-Image-2 available on MAI Playground!

  • "M...

  • @JamesMontemagno: RT @satyanadella: We’re bringing our growing MAI model family to every developer in Foundry, includi...

  • @arena: RT @mustafasuleyman: Three models. Three top-tier results. All shipped within just a few months by t...


17. garrytan (Group Score: 105.2 | Individual: 38.5)

Cluster: 3 tweets | Engagement: 974 (Avg: 311) | Type: Tech

RT @karpathy: LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually, it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

See 2 related tweets

  • @omarsar0: I have also been obsessed with building LLM knowledge bases.

Here is one example of the type of thi...

  • @nummanali: Pretty good write up on using Claude Code / Codex to create a personal AI augmented knowledge base ...

18. chatgpt21 (Group Score: 96.9 | Individual: 27.2)

Cluster: 5 tweets | Engagement: 207 (Avg: 164) | Type: Tech

The forecasters at AI 2027 (best forecasters in the industry imo) have moved their timelines from 2029 back to 2027 citing faster time horizon growth.

The new METR times will shock you. My timeline is still 2029, currently waiting on continuous learning, strong reduction in hallucinations, & long term memory.

QT @eli_lifland: AI timelines update: @DKokotajlo and I have updated our timelines earlier by ~1.5 years over the last 3 months, primarily due to (a) expecting faster time horizon growth, and (b) coding agents impressing in the real world. During 2025, we had updated toward longer timelines. https://t.co/5dKocoAjAP

See 4 related tweets

  • @AndrewCurran_: Timelines updated. QT @eli_lifland: AI timelines update: @DKokotajlo and I have updated our timel...
  • @RyanPGreenblatt: These numbers seem reasonable. My median for "full automation of AI R&D" is ~early 2031 and my 2...
  • @teortaxesTex: will be historically impressive if they end up with the original timeline and it works out as predic...
  • @StefanFSchubert: More AI experts should do this https://t.co/fWnHfGn2lL QT @DKokotajlo: New timelines update! We a...

19. badlogicgames (Group Score: 96.2 | Individual: 31.5)

Cluster: 4 tweets | Engagement: 119 (Avg: 130) | Type: Tech

also works well with pi! thanks @ClementDelangue for the tip!

edge models like this are super valuable. can automate a bunch of low complexity agentic tasks locally. https://t.co/ZusIBlIctA

QT @victormustar: Google Gemma 4 is here - and it delivers 🤯

Here's HOW TO run it on your hardware (runs on most devices) with llama.cpp to give you a Chat UI + OpenAI chat completion endpoint instantly! https://t.co/pXDZVkCkm0
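Once llama.cpp's server is running, it exposes an OpenAI-compatible chat completion endpoint, so a locally hosted Gemma model can be queried with nothing but the standard library. The address, port, and model name below are assumptions based on llama.cpp's defaults, not details from the tweet, and `chat()` only works against a live local server.

```python
import json
import urllib.request

LLAMA_SERVER = "http://localhost:8080"  # llama.cpp llama-server default address (assumption)

def chat_payload(prompt, model="gemma-4"):
    # OpenAI-style chat completion body; the model name is whatever
    # the local server was launched with (hypothetical here).
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt):
    # POST to the server's OpenAI-compatible /v1/chat/completions route
    req = urllib.request.Request(
        LLAMA_SERVER + "/v1/chat/completions",
        data=json.dumps(chat_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the endpoint mirrors the OpenAI API shape, the same payload works with any OpenAI-compatible client library pointed at the local base URL.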

See 3 related tweets

  • @badlogicgames: also works well with pi! thanks @ClementDelangue for the tip!

edge models like this are super valua...

  • @huggingface: RT @victormustar: Google Gemma 4 is here - and it delivers 🤯

Here's HOW TO run it on your hardware ...

  • @googledevs: Go beyond chatbots. Build your next AI agent on your own device. 🤖

Use #GoogleAIEdge to bring Gemma...


20. ZaiforStartups (Group Score: 92.5 | Individual: 33.4)

Cluster: 3 tweets | Engagement: 139 (Avg: 127) | Type: Tech

Vision isn’t new but putting it directly into the build loop is a shift.

design → working code, without the translation layer.

curious what actually holds up 👀

QT @Zai_org: Introducing GLM-5V-Turbo: Vision Coding Model

  • Native Multimodal Coding: Natively understands multimodal inputs including images, videos, design drafts, and document layouts.
  • Balanced Visual and Programming Capabilities: Achieves leading performance across core benchmarks for multimodal coding, tool use, and GUI Agents.
  • Deep Adaptation for Claude Code and Claw Scenarios: Works in deep synergy with Agents like Claude Code and OpenClaw.

Try it now: https://t.co/WCqWT0qCQb
API: https://t.co/xDy1O6ZPcz
Coding Plan trial applications: https://t.co/qCM6cri0KK

See 2 related tweets

  • @0xSero: They finally fixed the main issue with the V series of models.

Will weights be public? https://t.c...

  • @adonis_singh: glm-5v turbo gets 17% on eyebench-v3!

qwen models are still marginally better, however error bars e...