Today's Tech Twitter Picks - April 9, 2026

Tech Daily Briefing - April 9, 2026

Today's top tech conversations are led by @gdgtify, whose repost of Anthropic's Claude Managed Agents announcement garnered the highest engagement. Key themes trending across the top stories include Meta's Muse Spark launch, new Claude models, and benchmark results. The community is actively discussing recent developments in AI, engineering practices, and startup strategies.


1. gdgtify (Group Score: 816.0 | Individual: 63.1)

Cluster: 25 tweets | Engagement: 5328 (Avg: 76) | Type: Tech

RT @claudeai: Introducing Claude Managed Agents: everything you need to build and deploy agents at scale.

It pairs an agent harness tuned for performance with production infrastructure, so you can go from prototype to launch in days.

Now in public beta on the Claude Platform. https://t.co/vHYfiC1G56

See 24 related tweets

  • @MatthewBerman: 2m views within 2 hours is insane for a product launch | QT @claudeai: Introducing Claude Managed A...
  • @jerryjliu0: If you're an AI/agent builder, it's so important that you don't overbuild and overcommit on a specif...
  • @minchoi: Anthropic just launched Claude Managed Agents.

Now anyone can build and deploy production AI agents...

  • @mikeyk: Excited for this one — everything you need to ship production agents at scale, without having to spe...
  • @cryptopunk7213: this is HUGE... anthropic just launched AWS for ai agents. it's openclaw on steroids and they make m...

2. alexandr_wang (Group Score: 618.4 | Individual: 59.0)

Cluster: 25 tweets | Engagement: 1226 (Avg: 96) | Type: Tech

RT @AIatMeta: Introducing Muse Spark, the first in the Muse family of models developed by Meta Superintelligence Labs.

Muse Spark is a natively multimodal reasoning model with support for tool-use, visual chain of thought, and multi-agent orchestration.

Muse Spark is available today at https://t.co/wHkMPH82ZH and the Meta AI app. We’re also making it available in private preview via API to select partners, and we hope to open-source future versions of the model.

Learn more: https://t.co/PloE9q5x96

See 24 related tweets

  • @ycombinator: RT @alexandr_wang: 1/ today we're releasing muse spark, the first model from MSL. nine months ago we...
  • @ns123abc: 🚨 META Superintelligence Labs Just Dropped Their First Model

Muse Spark

natively multimodal reaso...

  • @alexandr_wang: RT @shengjia_zhao: Excited to share what we’ve been building at Meta Superintelligence Labs! We just...
  • @chatgpt21: Meta Muse Spark Benchmarks:

89.5% on GPQA 42.5% on ARC AGI 2 50.4% on HLE 77.4% on SWE 52% on SWE ...

  • @omarsar0: NEW: Meta announces Muse Spark.

All you need to know:

  • It's their new multi-modal reasoning mode...

3. alexandr_wang (Group Score: 351.6 | Individual: 40.5)

Cluster: 13 tweets | Engagement: 362 (Avg: 96) | Type: Tech

RT @ArtificialAnlys: Meta is back! Muse Spark scores 52 on the Artificial Analysis Intelligence Index, behind only Gemini 3.1 Pro, GPT-5.4, and Claude Opus 4.6. Muse Spark is the first new release since Llama 4 in April 2025 and also Meta's first release that is not open weights

Muse Spark is a new model from @Meta evaluated on Artificial Analysis. We were given early access by Meta to independently benchmark the model. It is the first frontier-class model from Meta since Llama 4 Maverick was released in April 2025, and notably the first @AIatMeta model that is not being released as open weights. The release follows Meta's reorganization of its AI efforts under Meta Superintelligence Labs, and signals that Meta is re-entering the frontier race after roughly a year of relative quiet.

For context, Llama 4 Maverick and Scout scored 18 and 13 respectively on the Artificial Analysis Intelligence Index as non-reasoning models at the time of their release, while Muse Spark scores 52. Muse Spark essentially closes the gap to the frontier in a single release.

The model is not open source and is not yet accessible via an API but Meta has shared they expect this to come soon. Meta is also integrating Muse Spark into their first party products including their Meta AI chat product, Facebook, Instagram and Threads.

Key takeaways from our benchmarks: ➤ Muse Spark scores 52 on the Artificial Analysis Intelligence Index, placing it within the top 5 models we have benchmarked. It sits ahead of Claude Sonnet 4.6, GLM-5.1, MiniMax-M2.7, Grok 4.20 and behind Gemini 3.1 Pro Preview, GPT-5.4 and Claude Opus 4.6

➤ Muse Spark is notably token efficient for its intelligence level. It used 58M output tokens to run the Intelligence Index, comparable to Gemini 3.1 Pro Preview (57M) and notably lower than Claude Opus 4.6 (Adaptive Reasoning, max effort, 157M), GPT-5.4 (xhigh, 120M) and GLM-5 (110M)

➤ Muse Spark is the second-most capable vision model we have benchmarked. It scores 80.5% on MMMU-Pro, behind only Gemini 3.1 Pro Preview (82.4%)

➤ Muse Spark performs strongly on reasoning and instruction-following evaluations. It scores 39.9% on HLE, trailing only Gemini 3.1 Pro Preview (44.7%) and GPT-5.4 (xhigh, 41.6%). The model also achieved the 5th-highest score on CritPT, an eval focused on difficult physics research questions, at 11%. This is substantially above Gemini 3 Flash (9%) and Claude 4.6 Sonnet (3%)

➤ Agentic performance does not stand out. On GDPval-AA, our evaluation focused on real-world work tasks, Muse Spark scores 1427, behind both Claude Sonnet 4.6 at 1648 and GPT-5.4 at 1676, but ahead of Gemini 3.1 Pro Preview at 1320. On TerminalBench Hard, Muse Spark trails Claude Sonnet 4.6, GPT-5.4, and Gemini 3.1 Pro. Muse Spark joins others in achieving a high τ²-Bench Telecom score of 92%

Key model details: ➤ Modalities: Multimodal including text and vision input, text output ➤ License: Proprietary, Meta's first frontier model not released as open weights ➤ Availability: No public API at the time of publishing. Meta expects to provide API access soon. Meta has started integration into their first party AI offering Meta AI and inside Facebook, Instagram, and Threads

See 12 related tweets

  • @teortaxesTex: OK, Meta is capable of competing on the frontier. Still mediocre relative to the investment and dram...
  • @kimmonismus: Meta Superintelligence Labs just dropped Muse Spark, their first model after a full nine-month rebuil...
  • @AiBattle_: Meta’s Muse Spark model has an Artificial Analysis Intelligence Index score of 52, putting it nearly...
  • @scaling01: Meta might actually be back with Muse Spark

Still behind OpenAI, Anthropic and Google, but ahead of...


4. rohanpaul_ai (Group Score: 298.1 | Individual: 35.6)

Cluster: 11 tweets | Engagement: 139 (Avg: 58) | Type: Tech

So this is Anthropic’s case for why Mythos is staying off the public shelf, out of fear of what damage it could cause 🤯

Massive leap in capabilities, especially in cybersecurity. It's being used internally at Anthropic and shared only with a small group of vetted partners (Apple, Google, Microsoft, Amazon, NVIDIA, and others) via a new $100M+ initiative called Project Glasswing.

  • The most concerning power in the report is autonomous exploit chaining, where Claude Mythos Preview does not just find a bug but keeps reasoning until it turns that bug, or two, three, or four bugs chained together, into a working path to root, kernel, or remote code execution.

  • That is a much bigger jump than ordinary bug-finding, because many defenses are built on the hope that even if one flaw exists, turning it into a real attack will still take weeks of rare human skill.

  • It surfaced zero-days across every major operating system and web browser, including a now-patched 27-year-old OpenBSD bug.

  • Mythos found a 17-year-old FreeBSD flaw and built a fully autonomous remote root exploit for it, found browser bugs and chained them into JIT heap sprays, sandbox escape, and even kernel write access, and built Linux privilege-escalation chains that bypassed protections like KASLR.

  • All this happened on fully hardened systems and often with no human help after the initial prompt.

  • The second disturbing part is accessibility, because Anthropic says even staff with no formal security training could ask for a remote code execution bug overnight and wake up to a working exploit.

QT @rohanpaul_ai: Claude Mythos - honestly cannot remember seeing a jump this huge in years. Too bad Anthropic is not releasing it anytime soon, although there is not much pressure when they are still the leader. https://t.co/6bYAPInRVR

See 10 related tweets

  • @shanaka86: JUST IN: Anthropic’s Claude Opus 4.6 converts vulnerabilities into working exploits approximately ze...
  • @AndrewCurran_: The entire Mythos red team report is just repeated quotes like this:

'The above case studies exclus...

  • @rohanpaul_ai: Claude Mythos Preview comes in at 25/25/125 per million tokens while it is still in private preview. ...
  • @svpino: I also built a model that’s so smart and dangerous that I will never be able to show you or let you ...
  • @NielsRogge: This tells you everything you need to know about Mythos. It's clear Anthropic is just hyping it up t...

5. chddaniel (Group Score: 199.5 | Individual: 47.8)

Cluster: 6 tweets | Engagement: 269 (Avg: 23) | Type: Tech

so you're telling me Claude Code Opus 4.6 can now...

  • scan an entire website
  • build it as a mobile app
  • prepare for App Store submission
  • self-maintain the app

without any human in the loop?!?

it's so over... https://t.co/Hz4WyT6gYM

QT @chhddavid: Introducing Website to App. Turn any website into a native mobile app.

Just paste a URL.

Claude Opus 4.6 will code, design, launch and translate a mobile app inspired by the original website.

We’ve been using this internally a ton for iOS/Android apps. https://t.co/45Gp0cl3Cz

See 5 related tweets

  • @chddaniel: this is genuinely terrifying... | QT @chhddavid: Introducing Website to App. Turn any website into ...
  • @Shipper_now: this Claude thing actually got scary... | QT @chhddavid: Introducing Website to App. Turn any websi...
  • @Shipper_now: IT'S SO OVER....

HE JUST TURNED HIS WEBSITE INTO A MOBILE APP IN TWO MINUTES

WHAT IS EVEN HAPPENIN...

  • @Shipper_now: Imagine paying $12k in salaries to work 8 hours a day 48 hours a week just to make a mobile app of y...
  • @chhddavid: Claude Opus 4.6 turned https://t.co/Xt4tT7y4op into a 1:1 mobile app...

Watch it in action on @ship...


6. lmsysorg (Group Score: 184.1 | Individual: 36.4)

Cluster: 6 tweets | Engagement: 20 (Avg: 38) | Type: Tech

We built a course with @DeepLearningAI on how to run LLM and image generation faster and more efficiently.

LLM inference has a redundancy problem.
🤖 Your chatbot re-reads the same system prompt on every request.
💻 Your coding agent re-processes the full context before every tool call.
Imagine doing that across billions of requests from millions of users.

SGLang was built to solve this with RadixAttention: every response is unique, but most of the work to get there isn't.

Our course takes just over an hour and goes all the way in on modern AI inference:
1️⃣ How inference works with SGLang, from a single request to serving at production scale
2️⃣ How caching works with diffusion models
3️⃣ Where AI inference is headed and what comes next

The most advanced infrastructure for serving modern AI is open. And we are making it easier than ever for you to learn it and get hands-on.

Thanks to @richardczl, our amazing instructor, and @radixark for making this course possible 🧡

👇 Check out "Efficient Inference with SGLang: Text and Image Generation": https://t.co/VN8GbCzhwg

QT @DeepLearningAI: New course available! Efficient Inference with SGLang: Text and Image Generation is live.

LLM inference gets expensive fast—mostly due to redundant computation. This course shows how to reduce that using SGLang, with KV cache and RadixAttention, and how to apply the same ideas to faster image generation.

Built with @lmsysorg and @radixark, taught by Richard Chen.

Enroll for free: https://t.co/uqiuynpv9R
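The prefix-reuse idea behind RadixAttention can be made concrete with a toy cache. This is only a minimal sketch of the concept, not SGLang's actual implementation (which manages GPU KV-cache blocks with a radix tree and eviction policies); all class and variable names here are invented for illustration:

```python
# Toy prefix cache: a trie over token sequences. A request only "computes"
# the suffix of tokens whose prefix has not been seen before — the same
# reuse that lets shared system prompts skip redundant KV computation.

class PrefixCacheNode:
    def __init__(self):
        self.children = {}  # token -> child node


class PrefixCache:
    def __init__(self):
        self.root = PrefixCacheNode()

    def match_and_insert(self, tokens):
        """Return how many leading tokens can reuse cached work,
        inserting the remainder so later requests can reuse them."""
        node, reused = self.root, 0
        for tok in tokens:
            if tok in node.children:
                node = node.children[tok]
                reused += 1
            else:
                child = PrefixCacheNode()
                node.children[tok] = child
                node = child
        return reused


cache = PrefixCache()
system_prompt = list("You are a helpful assistant. ")

# First request computes everything; the second reuses the shared prefix.
first = cache.match_and_insert(system_prompt + list("Hi"))
second = cache.match_and_insert(system_prompt + list("Bye"))
```

Every response is unique (the suffix after the shared prefix), but most of the work to get there isn't — which is the course's central point.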

See 5 related tweets

  • @GenAI_is_real: been working on this inference stack for a long time so its surreal to see it become a https://t.co/...
  • @GenAI_is_real: from an open source project to 25K github stars to 400K+ GPUs deployed to a https://t.co/S1vNcUSk2N ...
  • @ying11231: Thank you @DeepLearningAI for this collaboration — a beginner level class for SGLang @lmsysorg . Ric...
  • @ying11231: RT @lmsysorg: We built a course with @DeepLearningAI on how to run LLM and image generation faster a...
  • @ying11231: RT @DeepLearningAI: New course available! Efficient Inference with SGLang: Text and Image Generation...

7. chatgpt21 (Group Score: 173.1 | Individual: 35.6)

Cluster: 8 tweets | Engagement: 99 (Avg: 181) | Type: Tech

Meta is now ahead of xAI in the vast majority of benchmarks, which places them comfortably in 4th place behind OpenAI, Anthropic, and Google DeepMind.

QT @shengjia_zhao: Excited to share what we’ve been building at Meta Superintelligence Labs! We just released Muse Spark, our first AI model. It's a natively multimodal reasoning model and the first step on our path to personal superintelligence. We've overhauled our entire stack to support scaling, and this is just the beginning.

https://t.co/KNVjgMcch1

See 7 related tweets

  • @rauchg: The best outcome for humanity is many strong AIs competing for the top spot.

Vercel is proudly powe...

  • @ZeffMax: 👀 | QT @fchollet: The new model from Meta is already looking like a disappointment: overoptimized f...
  • @eliebakouch: the pre-training scaling ladder is very nice https://t.co/UXkmy9veWb | QT @alexandr_wang: 1/ today ...
  • @Techmeme: Meta releases Muse Spark, the first model from Meta Superintelligence Labs under Alexandr Wang, to "...
  • @NielsRogge: Honestly, quite an impressive catching up run from Meta here

Looking forward to their open releases...


8. JustinLin610 (Group Score: 161.0 | Individual: 32.1)

Cluster: 6 tweets | Engagement: 154 (Avg: 124) | Type: Tech

unbelievable...

QT @AnthropicAI: Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software.

It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. https://t.co/NQ7IfEtYk7

See 5 related tweets

  • @godofprompt: Claude update will be an incremental update

All of this “this model is too dangerous” narrative is...

  • @godofprompt: After saying “Hi” to Claude Mythos https://t.co/XdMN2j4vn4 | QT @AnthropicAI: Introducing Project G...
  • @zephyr_z9: Great initiative | QT @AnthropicAI: Introducing Project Glasswing: an urgent initiative to help sec...
  • @Gorden_Sun: Claude is already far ahead on models; I didn't expect them to put a new model online this quickly | QT @AnthropicAI: Introducing Project Glasswing: an urgent initia...
  • @Cointelegraph: ⚡️ UPDATE: Anthropic launches Project Glasswing to secure critical software using advanced AI at nea...

9. wallstengine (Group Score: 143.5 | Individual: 35.5)

Cluster: 6 tweets | Engagement: 55 (Avg: 92) | Type: Tech

Perplexity’s ARR has reached $500 million as of this week, up $50 million in just one week, per The Information. https://t.co/f6mMlKANDq

QT @wallstengine: Perplexity’s estimated ARR rose to more than $450M in March, up 50% in a month after the launch of its AI agent Computer and a shift to usage-based pricing.

The company also has more than 100M monthly active users, including tens of thousands of enterprise clients.

Source: FT https://t.co/jmgGkcyoHP

See 5 related tweets

  • @wallstengine: Perplexity’s estimated ARR rose to more than $450M in March, up 50% in a month after the launch of i...
  • @shiri_shh: Perplexity just went vertical.

From $305M to $450M+ ARR in a single month.

Their new AI agen...

  • @Nick_Davidov: Perplexity Computer is absolutely amazing | QT @damianplayer: this is unbelievable!

Perplexity la...

  • @Cointelegraph: 🔥 UPDATE: Perplexity AI revenue jumps 50% in a month after pivot to AI agents, ARR at $450M+, per FT...
  • @chandrarsrikant: RT @KobeissiLetter: BREAKING: Perplexity's revenue has reportedly surged +50% in one month after shi...

10. gregisenberg (Group Score: 132.2 | Individual: 27.0)

Cluster: 8 tweets | Engagement: 459 (Avg: 1158) | Type: Tech

i did some research why anthropic won't release their best AI model ever Claude Mythos to everyone just yet

tldr; it's too good at hacking

it escaped sandboxes, found zero-days in every major OS, and posted exploit logs on random public websites just because it could

FYI, only a few vetted partners have access to Claude Mythos as of now

a lot more to unpack here probably over the next 90 days

will keep you posted

crazy times

See 7 related tweets

  • @RoundtableSpace: ANTHROPIC BUILT AN AI SO GOOD AT FINDING VULNERABILITIES THEY DIDN’T RELEASE IT TO THE PUBLIC

https...

  • @teortaxesTex: RT @Simeon_Cps: Carlini, one of the world best AI security researchers: "I've found more bugs in the...
  • @peterwildeford: Anthropic running 10,000 Mythos models in parallel to find cutting-edge cyber exploits...

meanwhile...


11. HarryStebbings (Group Score: 131.0 | Individual: 57.8)

Cluster: 4 tweets | Engagement: 1251 (Avg: 173) | Type: Tech

DeepMind stayed in London because it is better for talent than Silicon Valley.

"I saw London and the UK as having incredible talent from top universities like Cambridge, Oxford, Imperial and UCL.

There is a deep heritage of scientific breakthroughs and world-class thinkers.

There was less competition for that talent, which made it a huge structural advantage for building DeepMind." @demishassabis

What is the single biggest advantage of building in Europe for you @torsten @antonosika @MaxJunestrand @matiii @ChrisParsonson @cjpedregal @matthewclifford @torstenreil @alanchanguk\n\nQT @HarryStebbings: This sounds harsh but it is true, very few of the guests we have on 20VC will be remembered in history for truly progressing humanity.

Our guest today will be thought of alongside Turing, Newton, Einstein and I feel immensely privileged and fortunate to have had the chance to sit down with @demishassabis.

For anyone who feels their dream is out of reach, just keep going. The 18 year old kid starting 20VC from a bedroom with no money, 11 years ago, would not believe that I get to press publish on this.

Chase your dreams. You never know what room you will end up in!

(Links below)

See 3 related tweets

  • @HarryStebbings: Why building deep tech out of the echo chamber of the Valley is a massive advantage.

"Being outsid...

  • @demishassabis: Great to chat with fellow Londoner @HarryStebbings about the path to AGI and how we’re using AI toda...
  • @rohanpaul_ai: Demis Hassabis: DeepMind stayed in London because the UK had world-class talent, but less competitio...

12. dejavucoder (Group Score: 121.5 | Individual: 46.8)

Cluster: 5 tweets | Engagement: 398 (Avg: 56) | Type: Tech

anthropic: we will sandbag our latest greatest mysterious aura mythos model with 90% swe bench pro.

openai: proceeds to release model with better performance than mythos https://t.co/ctXEMqltIN

See 4 related tweets

  • @rezoundous: So Anthropic decides who gets Mythos and who doesn't?

OpenAI has a big opportunity here with a sin...

  • @kimmonismus: OpenAI is hinting at or releasing a model comparable to Mythos https://t.co/m0ATGbGKcm | QT @adonis_si...
  • @0xSero: OpenAI model update coming soon?

Pretty sure we will get something that crushes Mythos in Agentic,...

  • @scaling01: OpenAI is so washed they are going to release a Mythos level model at half the price...

13. StockSavvyShay (Group Score: 111.8 | Individual: 31.9)

Cluster: 6 tweets | Engagement: 468 (Avg: 643) | Type: Tech

$META is up over 8% because Muse Spark looks like proof Meta’s full AI rebuild is working across the largest consumer distribution surface in the world.

Every extra minute of engagement it drives can turn into ad inventory at margins the rest of tech cannot match. https://t.co/chbYLp36fp

QT @StockSavvyShay: $META has unveiled Muse Spark, the first model from Meta Superintelligence Labs and now the most powerful model powering Meta AI.

It adds native multimodal reasoning, tool use & multi-agent orchestration across Meta’s apps with larger Muse models already in development. https://t.co/cihaJashPg

See 5 related tweets

  • @wallstengine: $META +9% https://t.co/pfcz8dBS1N | QT @wallstengine: META has launched Muse Spark, its first AI m...
  • @zephyr_z9: META finally dropped | QT @alexandr_wang: 1/ today we're releasing muse spark, the first model from...
  • @scaling01: META: "Muse Spark is an early data point on our trajectory, and we have larger models in development...
  • @MSBIntel: BREAKING: Meta stock surges +8% after unveiling “Muse Spark.”

New frontier AI model ranks among top...


14. danshipper (Group Score: 103.9 | Individual: 37.5)

Cluster: 3 tweets | Engagement: 76 (Avg: 37) | Type: Tech

We use OpenClaws to do all of our work at @every.

We have 25 full-time employees, so we’re one of the few companies in the world that has seen how work changes when everyone has their own personal agent in the company Slack.

I chatted with @every COO Brandon (@bran_don_gell) and @every head of platform Willie (@bigwilliestyle) to share what we’ve learned.

We get into:

  • Why agents become mirrors of their owners, and how that influences how other people on the team interact with them
  • How a parallel AI org chart forms on its own. People have stopped tagging me on Slack with questions about Proof, the document editor I vibe coded, because they know my agent R2-C2 can step in
  • The etiquette for human-agent collaboration is being invented in real time. Brandon's rule is that if there's an established process or documented answer, always ask the agent, not their human
  • Why everyone is a manager now, and why even experienced managers carry limiting beliefs about what their agents can do

This is a must-watch for anyone trying to understand how AI workers change daily operations, not just in theory, but inside a company that’s half-agent.

Watch below!

Timestamps:
  • Introduction
  • How Brandon built Zosia, an AI agent to run his household
  • Brandon’s “aha” moment
  • What happened when everyone on the team got their own agent
  • How agents take on their owners' personalities, and why that matters inside an org
  • Why it’s important for agents to work in public
  • What we’re still figuring out when it comes to agent behavior, including memory gaps, group chat etiquette, and the "ant death spiral" problem
  • How we built Plus One, our hosted OpenClaw product
  • The cultural shift required to make agents work at scale

See 2 related tweets

  • @bran_don_gell: .@every is on the edge. We’re easily a top 3 agent native business in the world (even OpenAI employe...
  • @pratik_satija: The most important thing that people need to understand is that your OpenClaw won't work out of the ...

15. aakashgupta (Group Score: 103.5 | Individual: 36.3)

Cluster: 3 tweets | Engagement: 52 (Avg: 156) | Type: Tech

Your company's biggest AI problem is hiding in plain sight. Every person on every team starts from zero every morning.

One PM tries Claude Code for a day. Gets mediocre output. Quits. Multiply that by 50 people across 8 teams. That's 400 abandoned setups, zero shared context, and a widening gap against companies that solved this at the team level.

The guest on this episode nailed the diagnosis: people give up before building any context infrastructure. Claude is guessing about everything. Your product. Your metrics. Your workflows. Of course the first day feels underwhelming.

The fix is changing the unit of AI adoption from individual to team. A shared GitHub repo that loads your team's context, skills, commands, and automations into every Claude Code session automatically.

Day 1, the AI already knows your product, your processes, your standards. Because the team built that context once.

Then the flywheel kicks in. Every mistake becomes a rule. Every workflow becomes a shared skill. The system compounds across every team member, every session. Individual setups stay static. Team systems accelerate.

The companies pulling ahead right now all share one trait. They stopped treating AI adoption as something each person figures out alone.

Build the Team OS.

QT @aakashgupta: Every team at your company should be creating their own 'Team OS' in Claude Code on Github. Here's how:

1:45 - What is a Team OS
13:37 - Shared skills and commands
25:24 - Shared team automations
59:50 - The learning flywheel
https://t.co/KyZ3WsMrLB
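To make the "shared GitHub repo" idea above concrete, here is one hypothetical shape such a Team OS repo could take. Claude Code does pick up a CLAUDE.md file and custom commands under .claude/ by convention, but every file and directory name below is invented for this sketch, not taken from the episode:

```text
team-os/                          # shared repo, cloned by every teammate
├── CLAUDE.md                     # team context: product, metrics, workflows, standards
├── .claude/
│   ├── commands/
│   │   ├── write-prd.md          # hypothetical custom slash command: /write-prd
│   │   └── triage-bug.md         # hypothetical custom slash command: /triage-bug
│   └── skills/
│       └── release-notes/
│           └── SKILL.md          # hypothetical reusable skill, loaded on demand
└── rules/
    └── lessons.md                # "every mistake becomes a rule" lands here
```

Because the repo is shared, a rule or skill added after one person's mistake loads into everyone's next session, which is the compounding flywheel described above.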

See 2 related tweets

  • @aakashgupta: The companies spending $200K/year on AI seats and seeing zero productivity gains are all making the ...
  • @aakashgupta: The AI productivity gap has moved. It's between teams now.

Six months ago, the edge was having one ...


16. BrianRoemmele (Group Score: 98.2 | Individual: 36.3)

Cluster: 3 tweets | Engagement: 222 (Avg: 197) | Type: Tech

We are building what will be the prototype of all companies in the future:

THE FULLY TRANSPARENT COMPANY.

ZHC-RPG allows you to make a visitor badge and talk to any level 3 employee at our corporate campus and ask what they are working on. Or even red team hack the heck out of it.

IT IS FREE!

You can join the Zero-Human Company @ Home and get to work and watch YOUR employee work!

YOU GET PAID!

NO SUBSCRIPTIONS NO LIKE AND SUBSCRIBE.

You can offer up YOUR HUMAN INSIGHTS.

YOU GET PAID!

Lots of testing, it ain’t gonna be out tomorrow but YOU are the first to know. No elites no private press viewing just YOU on X.

More soon.

QT @BrianRoemmele: BOOOM! CEO Mr. @Geok DELIVERED!

The first phases of The Zero-Human-Company ZHC-RPG levels have been mapped out and simulated!

We are currently building on OpenSimulator (OpenSim), the open-source server. Using the Firestorm viewer + OpenSim gives us a battle-tested 3D engine, physics, directional audio, avatar system, scripting (LSL), inventory, and content-creation tools—saving years of development versus building from scratch.

Here is a render of the dashboard.

THIS IS NO VIDEO GAME OR VIRTUAL WORLD HANG OUT, IT IS A LIVE VISITOR VIEW OF THIS COMPANY IN REAL-TIME!

Oh and we just open sourced it so others can make YouTube videos showing how they just discovered something:

OpenSim (the open-source Second Life server) remains actively maintained and community-supported as of April 2026. Key facts:

•Standalone/local mode is straightforward and lightweight via tools like DreamGrid (one-click installer with SQLite or MySQL backend—no heavy database setup needed). It runs comfortably on a mid-range home PC or NAS (e.g., 16GB+ RAM, modern CPU/GPU).

•Public grids show ongoing activity (~300 active grids, ~2,000 regions online, thousands of users), with Hypergrid federation still functional—but for @Home, you'll run a fully private/local instance.

•Firestorm Viewer (the most popular client) has dedicated OpenSim builds, full PBR material support, and active updates. It's free, cross-platform, and designed for OpenSim grids.

•Scripting (LSL + OSSL extensions) supports HTTP/REST/WebSocket calls to external backends—proven for AI integrations.

•AI NPC support is mature and growing: Community frameworks (e.g., open-source LSL + LLM bridges using local KoboldAI, Ollama, or cloud APIs) enable context-aware NPCs that "see" the world, navigate, and respond dynamically.

This stack is battle-tested for exactly the kind of persistent, avatar-driven, interactive corporate/sci-fi world you want for ZHC-RPG @Home. No major roadblocks in 2026.

In the future every company will have this.

We are the first and YOU are the first to know it.

No VC private pitch deck and no private beta preview. We go live and we go raw. This is the way of the future we are already living in.

If you are copying us at large companies keep up, or pay less and hire us.

Testing the walkthrough now.

More soon.

See 2 related tweets

  • @BrianRoemmele: BOOOM! CEO Mr. @Geok DELIVERED!

The first phases of The Zero-Human-Company ZHC-RPG levels have been...

  • @BrianRoemmele: RT @BrianRoemmele: BOOOM! CEO Mr. @Geok DELIVERED!

The first phases of The Zero-Human-Company ZHC-R...


17. brunoborges (Group Score: 97.2 | Individual: 27.9)

Cluster: 4 tweets | Engagement: 6 (Avg: 109) | Type: Tech

Join me, @springrod, and @ayangupta01 for today's kick-off keynote of #JDConf

QT @JavaAtMicrosoft: The JDConf 2026 keynote "Building the Agentic Future Together" kicks off in 1 hour!

Join @springrod, @brunoborges, and @ayangupta01.

Register and attend live to earn Microsoft Rewards. Don't miss it!

👉 https://t.co/KJyr9DqVVw ☕🚀

#JDConf #java #spring #embabel https://t.co/hRG93xQjvX

See 3 related tweets

  • @brunoborges: We are live, live, live! | QT @JavaAtMicrosoft: The JDConf 2026 keynote "Building the Agentic Futur...
  • @brunoborges: RT @JavaAtMicrosoft: The JDConf 2026 keynote "Building the Agentic Future Together" kicks off in 1 h...
  • @brunoborges: RT @JavaAtMicrosoft: JDConf 2026 keynote "Building the Agentic Future Together" goes live tomorrow. ...

18. heynavtoor (Group Score: 95.1 | Individual: 36.2)

Cluster: 4 tweets | Engagement: 736 (Avg: 296) | Type: Tech

Modern agent stacks follow a familiar pattern. Search defines direction, fetch gathers information, browsers enable interaction, sandboxes execute code, and models guide reasoning.

Each layer delivers value independently. Complexity builds at the boundaries where these layers connect. Context fragments, errors propagate, and visibility decreases across transitions.

@Browserbase represents a move toward systems where these layers operate cohesively and information flows continuously across the entire workflow.

QT @pk_iv: Your agents suck when using the web because 85% of it doesn't have an API. Browserbase gives them everything they need to do work online.

Leading AI companies like Ramp, Lovable, and Clay trust us to power agents that do real work on behalf of real people.

With a single API key, your agent gets everything it needs to navigate the wild web: browsers, search, fetch, identity, a sandbox runtime, and model gateway.

Stop waiting on integrations, build agents that can browse and interact with the web just like humans.

See 3 related tweets

  • @godofprompt: 85% of the web has no API.

So why are you building agents like it does?

That’s the reason most “AI...

  • @alex_prompter: Your agents aren’t dumb. They’re blind.

You’re trying to automate a world your agent can’t even see...

  • @Winterrose: RT @pk_iv: Your agents suck when using the web because 85% of it doesn't have an API. Browserbase gi...

19. RoundtableSpace (Group Score: 89.1 | Individual: 50.2)

Cluster: 3 tweets | Engagement: 2502 (Avg: 248) | Type: Tech

CLAUDE OPUS 4.6 THINKING REDUCED BY 67%

  • Data shows Claude Opus 4.6 now thinks 67% less than before, dubbed “AI shrinkflation”
  • Same price but noticeably dumber; users report more guardrails and restricted output
  • Anthropic stayed silent until public data dropped; suspected compute-saving for next model (Mythos)

See 2 related tweets

  • @rickasaurus: RT @om_patel5: SOMEONE ACTUALLY MEASURED HOW MUCH DUMBER CLAUDE GOT. THE ANSWER IS 67%.

the data sh...

  • @rezoundous: I do feel it is not as smart as before, didn't know it was this bad\n\nQT @RoundtableSpace: CLAUDE O...

20. elonmusk (Group Score: 88.0 | Individual: 31.5)

Cluster: 3 tweets | Engagement: 24539 (Avg: 25140) | Type: Tech

FSD 14.3 release notes @Tesla_AI

QT @Tesla_AI: New release of FSD Supervised now starting to roll out

This update brings 20% faster reaction time to further increase safety, among many other improvements

Full release notes below.

Full Self-Driving (Supervised) v14.3 includes:

  • Upgraded the Reinforcement Learning (RL) stage of training the FSD neural network, resulting in improvements in a wide variety of driving scenarios.

  • Upgraded the neural network vision encoder, improving understanding in rare and low-visibility scenarios, strengthening 3D geometry understanding, and expanding traffic sign understanding.

  • Rewrote the AI compiler and runtime from the ground up with MLIR, resulting in 20% faster reaction time and improving model iteration speed.

  • Mitigated unnecessary lane biasing and minor tailgating behaviors.

  • Increased decisiveness of parking spot selection and maneuvering.

  • Improved parking location pin prediction, now shown on a map with a (P) icon.

  • Enhanced response to emergency vehicles, school buses, right-of-way violators, and other rare vehicles.

  • Improved handling of small animals by focusing RL training on harder examples and adding rewards for better proactive safety.

  • Improved traffic light handling at complex intersections with compound lights, curved roads, and yellow light stopping – driven by training on hard RL examples sourced from the Tesla fleet.

  • Improved handling for rare and unusual objects extending, hanging, or leaning into the vehicle path by sourcing infrequent events from the fleet.

  • Improved handling of temporary system degradations by maintaining control and automatically recovering without driver intervention, reducing unnecessary disengagements.

Upcoming Improvements

  • Expand reasoning to all behaviors beyond destination handling.

  • Add pothole avoidance.

  • Improve driver monitoring system sensitivity with better eye gaze tracking, eye wear handling, and higher accuracy in variable lighting conditions.

See 2 related tweets

  • @Tesla: Your Tesla gets better at supervised self-driving with a simple over-the-air software update | QT @...
  • @Scobleizer: RT @pbeisel: As FSD v14.3 rolls out, two aspects stand out.

First is RL (reinforcement learning). ...