- Published on
Trending Tech Tweets - March 7, 2026
- Authors

- Name
- geeknotes
Today's tech frontier: the global AI landscape is in upheaval as OpenAI rolls out its GPT-5.4 Thinking and Pro models and simultaneously launches a new security agent for automated vulnerability fixing. Meanwhile, Anthropic has published a key study on AI's labor-market impact, and its CEO, Dario Amodei, is preparing a legal challenge to a federal ban. With engineering tools like Cursor introducing advanced automation and ByteDance dramatically expanding API access, the industry is closely watching the strategic moves and potential IPOs that will define the next generation of enterprise software.
1. MLStreetTalk (Group Score: 88.6 | Individual: 37.7)
Cluster: 5 tweets | Engagement: 5998 (Avg: 679) | Type: Tech
RT @OpenAI: GPT-5.4 Thinking and GPT-5.4 Pro are rolling out now in ChatGPT.
GPT-5.4 is also now available in the API and Codex.
GPT-5.4…
See 4 related tweets
- @chddaniel: RT @chhddavid: Big news, @OpenAI GPT-5.4 just got a huge upgrade today and I'm very happy to be intr...
- @reach_vb: RT @reach_vb: BOOOOM! Introducing GPT-5.4 Thinking & Pro in Codex, API & ChatGPT 🔥
It combi...
- @tdinh_me: GPT 5.4 now on https://t.co/kx5nDaaCO2 😄 https://t.co/I5jV1oHPMk...
- @GHchangelog: GPT-5.4 is generally available in GitHub Copilot.
2. rohanpaul_ai (Group Score: 80.6 | Individual: 33.7)
Cluster: 4 tweets | Engagement: 108 (Avg: 127) | Type: Tech
Anthropic just published "Labor market impacts of AI"
It reveals how AI actually affects jobs by looking at real usage data. The study finds no major unemployment impact yet, but slower hiring for young workers: a 14% drop in new job starts for young adults entering highly exposed fields.
The authors built a new tracking method that combines theoretical capability guesses with actual daily platform usage data.
They discovered that actual workplace automation is currently just a tiny fraction of what is theoretically possible.
Software programmers and customer service representatives face the highest actual automation risk right now based on real platform behavior.
Government projections show that occupations with higher actual automation coverage will experience slightly slower employment growth over the next decade.
The data shows that workers in the most exposed professions actually tend to be older, more educated, and higher paid.
The study finds no systematic increase in overall unemployment for highly exposed workers since the recent wave of language models.
See 3 related tweets
- @aakashgupta: The scariest finding in Anthropic’s new labor report: companies have already stopped hiring for AI-e...
- @BitcoinNews: NEW: 🤖 A "Great Recession for white-collar workers" is possible.
Anthropic's new research maps AI ...
- @BusinessInsider: Anthropic economists say there's not yet evidence to suggest AI is fueling a spike in job losses in ...
3. kimmonismus (Group Score: 75.6 | Individual: 24.1)
Cluster: 5 tweets | Engagement: 676 (Avg: 417) | Type: Tech
Let that sink in. Anthropic has just published a study on AI and the labor market.
There's a huge difference between what AI can do today and what it will theoretically be able to do in the future.
This already poses a serious problem for those starting their careers in the field. https://t.co/kTOtPfZET6
See 4 related tweets
- @Forbes: AI isn't the future—it's now. Meet the Forbes 30 Under 30 revolutionaries harnessing machine learnin...
- @sequoia: RT @JulienBek: Anthropic’s labor market report is out.
The gap between AI capability and observed u...
- @unwind_ai_: AI agents will do all the intelligent work.
Jobs of the future: farmers, cooks, mechanics, lifeguar...
- @yacineMTB: RT @10_X_eng: @yacineMTB There will be a gap between AI taking jobs and AI providing abundance.
We...
4. aakashgupta (Group Score: 75.6 | Individual: 33.0)
Cluster: 4 tweets | Engagement: 73 (Avg: 318) | Type: Tech
The “Claude Marketplace” sounds like a procurement simplification tool. Enterprises can use existing Anthropic spend commitments to buy partner solutions.
Anthropic just told you which AI applications it plans to build next and nobody is paying attention.
Look at the launch partners. GitLab (code review). Harvey (legal). Lovable (app building). Replit (development). Rogo (finance). Snowflake (data). These are the six workflow categories where enterprises are already paying real money for Claude-powered tools.
Anthropic is running at ~1M+ per year. Those committed spend pools are now flowing through a marketplace Anthropic controls. Which means Anthropic gets granular data on exactly which partner tools enterprises buy, how much they spend, which workflows drive the most usage, and where the willingness to pay is highest.
This is the AWS Marketplace playbook. Amazon launched Marketplace to help enterprises consolidate cloud procurement. Then it watched which SaaS categories grew fastest. Then it built those products itself. Amazon RDS, Amazon Connect, AWS Lambda, all started as categories where third-party tools were thriving on AWS.
Every partner joining the Claude Marketplace is handing Anthropic a roadmap. Harvey proves legal AI has enterprise willingness to pay at scale? Anthropic already has Claude for Financial Services and Claude for Life Sciences. You think Claude for Legal isn’t coming?
The partners benefit in the short term. Fortune 10 access with pre-approved budgets is a cold-start solution most developer tools spend years trying to build. But the long game favors the platform.
Meanwhile, every partner selling through Anthropic has switching costs compounding quarterly. Anthropic handles invoicing, procurement, distribution. The enterprise buyer consolidates AI spend under one commitment. Try moving that to OpenAI when your CFO just approved a $3M Anthropic commitment that covers six different tools.
Six partners today. The real number to watch is which categories Anthropic enters directly within 18 months.
The marketplace is the map. Anthropic is reading it.
See 3 related tweets
- @lydiahallie: RT @claudeai: Introducing the Claude Marketplace, a way for enterprises to simplify their procuremen...
- @Cointelegraph: 🔥 NEW: Anthropic introduced the Claude Marketplace, a platform for enterprises to access AI tools, n...
- @RoundtableSpace: CLAUDE LAUNCHED “CLAUDE MARKETPLACE”
IT'S NOW EASIER TO GET AI TOOLS https://t.co/aCDz1oEGH9...
5. business (Group Score: 70.0 | Individual: 23.5)
Cluster: 4 tweets | Engagement: 60 (Avg: 154) | Type: Tech
OpenAI is introducing an AI agent that’s meant to help security teams find and patch vulnerabilities in large databases, potentially cutting into demand for legacy cyber firms. https://t.co/jDY7ybgYst
See 3 related tweets
- @rohanpaul_ai: OpenAI just released Codex Security, an AI agent that scans software projects to fix vulnerabilities...
- @testingcatalog: OpenAI launched Codex Security, a new application security agent that can find and fix security vuln...
- @RoundtableSpace: OPENAI LAUNCHES CODEX SECURITY:
AN AI AGENT THAT FINDS, VALIDATES, AND FIXES CODE VULNERABILITIES ...
6. Shipper_now (Group Score: 66.0 | Individual: 34.3)
Cluster: 2 tweets | Engagement: 38 (Avg: 13) | Type: Tech
Ground-breaking news, @OpenAI GPT-5.4 just got a huge upgrade today and we're very happy to be introducing it in Shipper. From today, GPT-5.4 Thinking will build and run a full business for you.
We just launched Shipper 2.0, a tool for GPT-5.4 to:
→ Build anything: mobile app, web app, website, extension, store etc → Code, design, monetize, launch → Automate email marketing for you → Self-build new features → Maintain the business in the long-run
OpenAI's new model can now do all of that from a <10 word prompt, for as low as $0.11/app... And it doesn't take months, but rather minutes!
Simply go to Shipper, then ask GPT-5.4 to "build a ride-sharing network" or "build an Airbnb-style car rental app"!
In celebration, we're randomly giving away free credits to people who repost and comment "SHIPPER" :)
See 1 related tweet
- @chddaniel: BREAKING: @OpenAI GPT 5.4 just got a major upgrade today and we've just introduced it for Shippers. ...
7. aakashgupta (Group Score: 64.2 | Individual: 32.5)
Cluster: 2 tweets | Engagement: 40 (Avg: 318) | Type: Tech
Top takeaways from Lisa Huang (creator of Gemini Gems, SVP Product at @Xero):
Every PM is using ChatGPT. Almost none have built a Gem. There is a version of AI that already knows your role, your company, and your writing style before you type a word. Build it once. It holds everything permanently.
Three Gems every PM needs. A writing clone trained on your PRDs and emails. A product strategy advisor loaded with your company docs. A user research synthesizer that ingests raw transcripts and surfaces key themes. Build all three before anything else.
Vague instructions produce vague output. "Help me write better" gets you nothing. Write a full page. Role, audience, format, constraints. The output is only ever as specific as the instructions you gave it.
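The "role, audience, format, constraints" advice above can be sketched as a reusable template. This is an illustrative Python snippet, not any official Gems or GPTs API; the function and field names are invented for the example.

```python
# Hypothetical sketch: turning "role, audience, format, constraints"
# into a reusable, specific instruction block for a custom Gem or GPT.

def build_instructions(role: str, audience: str, fmt: str,
                       constraints: list[str]) -> str:
    """Assemble a full, specific instruction block instead of a vague ask."""
    lines = [
        f"Role: {role}",
        f"Audience: {audience}",
        f"Output format: {fmt}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

instructions = build_instructions(
    role="Senior PM writing clone trained on my PRDs",
    audience="Engineering leads and designers",
    fmt="One-page PRD with Problem, Goals, Non-Goals, Metrics",
    constraints=["No marketing language", "Cite source docs by name"],
)
print(instructions)
```

The point of the structure is that every section forces a specific decision; "Help me write better" answers none of them.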
OpenAI built a GPT app store with monetization. Google focused on personal productivity. The GPT store never took off. First principles beat copying a competitor's framing every time.
Treat your Gem like a product you are shipping for yourself. The first version will not be perfect. Iterate on the instructions. Iterate on the knowledge files. The Gems that work are refined through real use, not set up once and abandoned.
Accuracy is the product in high-stakes AI. At Xero, LLMs out of the box are not great at math, accounting, or tax. Winning agents combine deep domain knowledge with proprietary transaction data no general model can replicate.
Measure agents in three layers or you are flying blind. Quality first (evals, human annotators, LLM judges). Product metrics second (adoption, retention, CSAT). Business impact third (revenue, ARR). Skip to layer three without the foundation and you are measuring on sand.
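The three-layer measurement idea above can be made concrete with a small sketch: trust a layer's numbers only if every layer beneath it holds up. The metric names and thresholds below are invented for illustration.

```python
# Illustrative sketch of layered agent measurement:
# quality evals first, product metrics second, business impact last.

LAYERS = [
    ("quality", {"eval_pass_rate": 0.92, "judge_score": 4.3}),
    ("product", {"weekly_retention": 0.41, "csat": 4.1}),
    ("business", {"arr_influenced_usd": 1_200_000}),
]

THRESHOLDS = {
    "eval_pass_rate": 0.90, "judge_score": 4.0,
    "weekly_retention": 0.30, "csat": 4.0,
    "arr_influenced_usd": 1_000_000,
}

def trusted_layers(layers, thresholds):
    """Return layers in order, stopping at the first one whose metrics
    miss their thresholds: layer 3 means nothing without layers 1 and 2."""
    passed = []
    for name, metrics in layers:
        if all(v >= thresholds[k] for k, v in metrics.items()):
            passed.append(name)
        else:
            break
    return passed

print(trusted_layers(LAYERS, THRESHOLDS))  # all three layers pass here
```

Skipping straight to the business layer is exactly the "measuring on sand" failure: revenue numbers with no quality foundation underneath them.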
AI is not replacing PMs. It is replacing PM work. Writing PRDs, creating mocks, pulling data. What stays is product judgment. The ability to look at ambiguous signals and back a bet. That is not going anywhere.
Your company's permission is not required. Most companies are using the same consumer tools you already have. Build Gems. Build projects. Use your personal data. There is nothing stopping you.
The candidate who got hired had zero AI experience. They watched 3 hours of TikTok from small business coaches before the first interview. Came in with a financial needs summary nobody had asked for. Do the work before you are asked to.
Watch our full conversation: https://t.co/Qz6zEMSnBk
See 1 related tweet
- @aakashgupta: Every day, millions of people open ChatGPT, Gemini, or Claude and type the same context into the cha...
8. KirkDBorne (Group Score: 63.1 | Individual: 35.0)
Cluster: 2 tweets | Engagement: 62 (Avg: 56) | Type: Tech
New release from @PacktDataML at https://t.co/t7nCgjJMl9
"Agentic Architectural Patterns for Building Multi-Agent Systems: Proven design patterns and practices for GenAI, agents, RAG, LLMOps, and enterprise-scale AI systems"
Table of Contents:
🔷 GenAI in the Enterprise: Landscape, Maturity, Agent Focus
🔷 Agent-Ready LLMs: Selection, Deployment, Adaptation
🔷 The Spectrum of LLM Adaptation for Agents: RAG to Fine-tuning
🔷 Agentic AI Architecture: Components & Interactions
🔷 Multi-Agent Coordination Patterns
🔷 Explainability & Compliance Agentic Patterns
🔷 Robustness & Fault Tolerance Patterns
🔷 Human-Agent Interaction Patterns
🔷 Agent-Level Patterns
See 1 related tweet
- @KirkDBorne: New release from @PacktDataML available at: https://t.co/c6HGuIlMLx
"Design Multi-Agent AI Systems ...
9. RocM301 (Group Score: 61.4 | Individual: 30.8)
Cluster: 3 tweets | Engagement: 303 (Avg: 58) | Type: Tech
Feishu finally steps in! Official OpenClaw AI plugin launched; free API quota raised from 10,000 to 1,000,000 calls per month
ByteDance's Feishu team has officially entered the game: it released an official version of the OpenClaw AI bot plugin and raised the free API limit from 10,000 calls per month straight to 1,000,000. That quota is more than enough even for small teams, let alone individual users, and the official plugin may well offer more features than the community version.
Feishu users can click here for the detailed installation guide: https://t.co/pDZ0HYoeD2
See 2 related tweets
- @GitHub_Daily: Interaction with AI agents today mostly stays at the text level; they can't speak up like a real person, and when you're busy with other work you still have to watch the screen for replies.
I happened upon VoxClaw, an open-source tool built specifically for local A...
- @GitHub_Daily: Many people deploy and use OpenClaw and want to monitor runtime status, API calls, and costs in real time, but there is no official visualization tool, so they end up staring at log files.
That led me to OpenClaw Agent Dashboard, an open-source moni...
10. aakashgupta (Group Score: 59.6 | Individual: 33.6)
Cluster: 2 tweets | Engagement: 122 (Avg: 318) | Type: Tech
Three companies just shipped the same product within 60 days of each other. That tells you more about where software is going than any one of their announcements.
Cursor launched Automations today. Agents trigger from Slack messages, GitHub PRs, PagerDuty incidents, Linear issues, and cron schedules. Each trigger spins up a cloud sandbox with its own VM, runs instructions using whatever models you configure, verifies its own output, and pulls humans in only at decision points. Cursor already runs hundreds of these per hour internally. 35% of their pull requests come from agents on cloud VMs.
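The trigger, sandbox, verify, human-checkpoint loop described above can be sketched in a few lines. Nothing here is Cursor's actual API; the event sources, sandbox runner, and verifier are stand-ins invented for illustration.

```python
# Hypothetical sketch of an automation pipeline: an event trigger spins
# up an isolated run, the agent verifies its own output, and humans are
# pulled in only at the final decision point.

from dataclasses import dataclass

@dataclass
class Trigger:
    source: str   # e.g. "slack", "github_pr", "pagerduty", "linear", "cron"
    payload: str

def run_in_sandbox(trigger: Trigger) -> str:
    """Stand-in for launching a cloud VM and running the agent's task."""
    return f"patch for {trigger.payload}"

def verify(result: str) -> bool:
    """Stand-in for the agent's self-verification step (tests, evals)."""
    return result.startswith("patch for")

def handle(trigger: Trigger) -> str:
    result = run_in_sandbox(trigger)
    if not verify(result):
        return "escalate: verification failed"
    # Humans enter only at the decision point, e.g. approving the PR.
    return f"awaiting human approval: {result}"

print(handle(Trigger("github_pr", "flaky auth test")))
```

The design choice worth noting is that the human appears after verification, not before execution: the policy is defined once, and review happens at checkpoints rather than per prompt.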
OpenAI shipped Automations in the Codex app last month. Anthropic launched Cowork in January, bringing agent orchestration to non-developers. Same architecture. Same bet.
Everyone sees three competing product launches. The real story is three companies independently concluding that the “prompt an agent, review its PR” workflow is already dead.
The “prompt a chatbot, copy the code” era lasted about 18 months. The “launch an agent, review its PR” era lasted maybe 6. Now all three are building the same thing: define policies, agents run continuously, humans approve at checkpoints. Each era compresses faster. Each one increases output per engineer while reducing the engineers needed per unit of output.
The revenue math confirms the convergence. Cursor doubled to $5.5M in new ARR per day. Anthropic hit $6B in February alone, with Claude Code at $8-10B annually and accelerating.
Cursor’s specific edge in this race? Model-agnostic. Plug in OpenAI, Anthropic, Google, or Cursor’s own models. They sit above the foundation layer and collect compute on every trigger regardless of who wins the model race. Anthropic and OpenAI can’t offer that because they’re tied to their own models.
The risk is just as obvious. When model providers ship their own orchestration layer (and they already have), the independent orchestrator gets squeezed from both sides. Cursor at 60% enterprise revenue and 25% market share per Ramp has a window. The $5.5M-per-day growth rate is a measure of how fast they’re racing to lock it in before it closes.
See 1 related tweet
- @latentspacepod: 🆕 Cursor's Third Era: Cloud Agents
"Cursor is no longer primarily about wr...
11. shanaka86 (Group Score: 56.8 | Individual: 22.4)
Cluster: 3 tweets | Engagement: 179 (Avg: 4269) | Type: Tech
On February 27, the Pentagon banned Anthropic. On February 28, the Pentagon used Anthropic’s AI to help select targets in Iran.
That sequence is not a leak. It is the entire story.
The United States Department of Defense designated Anthropic a supply chain risk under the Federal Acquisition Supply Chain Security Act, effective February 28, 2026. The designation, which Trump directed and Hegseth implemented, followed months of failed negotiations in which the Pentagon demanded unrestricted access to Claude for domestic surveillance operations and autonomous weapons targeting. Anthropic’s usage terms prohibit both. The Pentagon called those restrictions a national security liability. Anthropic called the designation legally unsound and announced a court challenge. This is the first time in American history that a US-based AI company has received the same supply chain designation previously reserved for foreign adversaries like Huawei.
The Washington Post reported on March 4 that Claude was used in the Iran campaign for intelligence synthesis, target selection modeling, and operational simulations, contributing to the identification of the approximately 1,000 targets struck in the first 24 hours of Operation Epic Fury. The integration ran through Palantir Technologies, which had embedded Claude into Pentagon systems under existing contracts. CENTCOM did not dispute the reporting. Democracy Now confirmed the timeline: the operational use of Claude occurred hours after the formal ban on Anthropic went into effect.
The mechanism that produced this outcome is the one that produces every technological dependency crisis in the history of defense procurement. Systems do not get uninstalled during wars. Palantir had woven Claude into targeting workflows, intelligence pipelines, and simulation environments over months of prior integration. When Trump signed the directive and Hegseth implemented the designation, the people running operations against Iran were already inside software that ran on Claude. Months to untangle is the standard estimate for removing a deeply embedded AI system from active military infrastructure. The war started in hours.
The analytical irony is precise. The Pentagon banned Anthropic because Anthropic refused to remove restrictions on autonomous weapons targeting. The Pentagon then used Anthropic’s AI for weapons targeting the same day the ban took effect. The ethical framework that produced the ban was the same ethical framework governing the system being used in the operations the ban was supposed to stop.
Anthropic’s civilian user base grew 60 percent year-to-date following the ban, reaching the top position on the App Store as the designation generated exactly the kind of public attention that converts into downloads. Enterprise customer retention held at 90 percent. The company that the Pentagon tried to coerce into compliance by designating it an adversary became more commercially dominant the week the designation landed.
The court challenge will determine whether a US government agency can compel an American AI company to remove ethical restrictions from its product as a condition of doing business with the federal government. That is not a technology law question. It is a First Amendment question wearing a procurement dispute as a costume.
The ban did not stop Claude from being used in the Iran war. It started a legal case that will define the relationship between AI ethics and American military power for the next decade.
See 2 related tweets
- @ReutersBiz: The Pentagon slapped a formal supply-chain risk designation on artificial intelligence lab Anthropic...
- @Cointelegraph: 🇺🇸 NOW: The Pentagon has formally designated Anthropic a supply-chain risk, escalating its AI safegu...
12. rohanpaul_ai (Group Score: 55.8 | Individual: 43.1)
Cluster: 2 tweets | Engagement: 2268 (Avg: 127) | Type: Tech
RT @rohanpaul_ai: Citadel Securities published this graph showing a strange phenomenon.
Job postings for software engineers are actually s…
See 1 related tweet
- @ShriramKMurthi: Citadel Securities is claiming an uptick (based on Indeed and other data) in job postings for softwa...
13. steipete (Group Score: 54.4 | Individual: 35.7)
Cluster: 2 tweets | Engagement: 2552 (Avg: 641) | Type: Tech
First project I was involved in shipping! 🚢 Free token, API credits and security scanner for hard-working open source maintainers.
See 1 related tweet
- @badlogicgames: RT @steipete: First project I was involved in shipping! 🚢 Free token, API credits and security sca...
14. TheEconomist (Group Score: 54.2 | Individual: 29.3)
Cluster: 3 tweets | Engagement: 563 (Avg: 145) | Type: Tech
Dario Amodei, Anthropic’s boss, says he will challenge Donald Trump’s ban in court. Last week the Trump administration banned federal agencies from using Anthropic’s AI tools after the firm insisted that its main model not be used for mass surveillance or autonomous weapons.
@zannymb asks Mr Amodei about the power struggle and the difficulty of preventing an AI race to the bottom. Watch the interview on Friday at 6pm London time: https://t.co/Orf4IDq2WO
See 2 related tweets
- @rohanpaul_ai: Anthropic CEO Dario Amodei is challenging the Trump administration's ban on their AI in federal agen...
- @rohanpaul_ai: RT @rohanpaul_ai: Anthropic CEO Dario Amodei is challenging the Trump administration's ban on their ...
15. tbpn (Group Score: 48.2 | Individual: 28.8)
Cluster: 2 tweets | Engagement: 35 (Avg: 126) | Type: Tech
.@danprimack says whichever company IPOs first — OpenAI or Anthropic — will make huge headlines, but it comes with disadvantages too.
“Even though we know the losses are huge, the market’s going to be shocked when they actually see how large those losses are.” https://t.co/J1CbK5mH2l
See 1 related tweet
- @jasonlk: "OpenAI's latest round is ... 4x the size of the largest IPO. Ever.
More than all of U.S. venture ...
16. allenholub (Group Score: 47.5 | Individual: 32.5)
Cluster: 2 tweets | Engagement: 40 (Avg: 71) | Type: Tech
No, AI is not coming for your developer job, regardless of the nonsense the hype mongers are pushing. (I'm talking just about developer jobs—it obviously impacts other areas.) That's not to say the layoffs aren't real, but rather that the corporations are hiding normal corporate behavior behind a smokescreen of "AI makes us vastly more efficient!" Those layoffs are a hedge against the current economic downturn. They are downsizing to game the next quarter's earnings numbers. Don't expect the layoffs to stop, but they have nothing to do with "efficiency gains, because AI!" The problems are tariffs, isolationism, and the fact that concentrated wealth is not being reinvested in the economy.
For example [from https://t.co/4F5A5S6yyt]:
"Asked about the cuts on an October earnings call, Amazon Chief Executive Andy Jassy told analysts that the decision was 'not really financially driven and it's not even really AI-driven. Not right now, at least.'"
"A recent survey of global executives published in the Harvard Business Review found that although AI has been cited as the reason for some layoffs, those cuts are almost entirely anticipatory: executives expect big efficiency gains that have not yet been realized."
Execs are making decisions based on magical thinking. "Soon" is always six months away. Many of these companies will not survive while they chase the end of that particular rainbow.
AI just doesn't yield the imagined gains, at least not if you consider the entire product lifecycle and you want a reliable, secure, extensible, and scalable product that does what customers need. I'm not saying an LLM isn't a useful tool, only that it's not the panacea these execs imagine. The only way to speed up an entire system is to address the entire system, especially any bottlenecks. The bottleneck is only rarely the coding. Spotify, which has gone full AI to write its code, hasn't laid anybody off because it understands these issues.
There is one place where an AI-related layoff might be justified. Working with AI requires skills that some developers seem unwilling or unable to acquire. Coding alone, no matter how good you are, is no longer sufficient. You need to understand systems thinking and software architecture. You need strong written and verbal communication skills, as well as the social skills to make that communication effective. You need to talk to your customers and have empathy for their problems. Focusing on the technical side alone is no longer sufficient (though still necessary). The "programmer" job description has changed, and you need to adapt to remain employable. That's always been the case in our profession, of course, but the shifts wrought by AI seem particularly challenging.
See 1 related tweet
- @alexcooldev: Seeing waves of layoffs from big companies because of AI.
But here’s the part people don’t talk abo...
17. RoundtableSpace (Group Score: 46.5 | Individual: 23.2)
Cluster: 2 tweets | Engagement: 71 (Avg: 428) | Type: Tech
KEY UPDATES FOR GPT-5.4:
- Native computer-use support
- Up to 1M token context (Codex & API)
- Best-in-class agentic coding
- Scalable tool search across large ecosystems
- Efficient reasoning for long, tool-heavy workflows https://t.co/QhVpXhST2t
See 1 related tweet
- @OpenAIDevs: Working with GPT-5.4 in the API?
We’ve updated our prompting guide with patterns for reliable agent...
18. DeepLearningAI (Group Score: 46.2 | Individual: 46.2)
Cluster: 1 tweets | Engagement: 880 (Avg: 196) | Type: Tech
There’s a lot of noise around “learning AI.”
New tools, new frameworks, and new model releases appear constantly. It can feel like you’re always behind!
But when you look at how many modern AI systems are actually built today, a clear pattern emerges.
Many real-world AI applications involve some combination of:
• agents that can take actions
• access to external data
• evaluation loops to improve reliability
• alignment with human intent
• interaction with real interfaces and tools
These five skills map closely to that workflow.
For those who want to dive deeper, these courses explore each of these areas.
Agentic AI, with Andrew Ng https://t.co/KZsGkjGneB
Retrieval Augmented Generation, with Zain Hasan https://t.co/9OSkiMl8dR
Evaluating AI Agents, in collaboration with Arize AI https://t.co/38ugdWDaxO
Reinforcement Learning from Human Feedback, with Google Cloud https://t.co/FwPpDgOfO6
Building AI Browser Agents, in collaboration with AGI Inc https://t.co/5ztl1MeC7k
Know someone learning AI in 2026? Share this roadmap with them.
19. BrianRoemmele (Group Score: 46.2 | Individual: 19.2)
Cluster: 3 tweets | Engagement: 34 (Avg: 249) | Type: Tech
The "experts" on AI in the US have no clue what using grandpa's SaaS-licensing monetization of AI will do to long-term US stability.
All US AI companies must build large, dedicated ecosystems and open-source AI models.
Tools like "Claws" tied to open-source AI are already being used 100x more in China, and soon they will pull thousands of US developers away from the old SaaS model to full open source.
I want to be wrong about this.
See 2 related tweets
- @BrianRoemmele: I saw something that will be open source and unfortunately it will move 1000s of developers to build...
- @BrianRoemmele: RT @BrianRoemmele: The “experts” on AI in the US have not a clue what using grandpa’s monetization S...
20. xicilion (Group Score: 46.2 | Individual: 46.2)
Cluster: 1 tweets | Engagement: 499 (Avg: 50) | Type: Tech
This afternoon I visited a medical-aesthetics company in Nanning, and the conversation far exceeded my expectations. The two founders led their entire executive team in learning vibe coding and, working from front-line needs, ground out a complete AI medical-aesthetics service management platform covering employee training and evaluation, customer sentiment monitoring, customer-service quality monitoring, and more. When front-line executives hit a problem, they don't go to a tech team; they just vibe-code it themselves. Seriously impressive.