LATEST
Fed Gov. Cook: Fed may not be able to counter AI-driven unemployment — Bloomberg, Feb 24 Claude Opus 4.6: 14.5-hour METR time horizon — new record, Feb 21, 2026 US labor market: 50,000 jobs added — "low-hire, low-fire" equilibrium, Feb 2026 Anthropic raises $30B at $380B valuation — enterprise plugins launched Only 1 in 50 AI investments delivering transformative value — HBR / Gartner, Feb 2026 OpenAI Frontier Alliance: BCG, McKinsey, Accenture, Capgemini deploy enterprise agents
LIVE NEWS FEED · UPDATED FEB 27, 2026

AI News &
What Moves
the Industry

Headlines from major business, science, and policy outlets. AI news only — plus high-impact adjacent stories on labor, regulation, and economics. All perspectives included: bullish, bearish, critical, and primary source. No editorial bias. No curation by ideology.

EDITORIAL POLICY — AI Labs does not editorially endorse any headline. Positive, negative, skeptical, and alarmist coverage is included equally. Perspective badges indicate the article's slant, not our position.
47 STORIES
NO STORIES MATCH YOUR SEARCH
THIS WEEK — HIGH IMPACT
ECONOMY BLOOMBERG
Fed Gov. Cook: Central Bank May Not Be Able to Counter AI-Driven Unemployment
Federal Reserve Governor Lisa Cook warned Tuesday that if AI boosts productivity while raising unemployment, "a rise in unemployment may not indicate increased slack" — meaning traditional monetary policy tools may be powerless to address AI-caused joblessness without triggering inflation. "Our normal demand-side monetary policy may not be able to ameliorate an AI-caused unemployment spell."
POLICY AXIOS
Fed Gov. Barr Outlines Three AI Labor Scenarios: Gradual, Rapid, or Stalled
Fed Governor Michael Barr laid out three possible outcomes: gradual absorption (most consistent with current research), rapid disruption leaving a "large share... essentially unemployable," or stalled adoption due to energy/capital shortages. Each carries different implications for monetary policy.
TODAY'S SIGNAL — FEB 28, 2026
BREAKING · LABOR FORTUNE / BLOOMBERG / CNN
Block Cuts 40% of Staff (4,000 Jobs) — Dorsey: "Most Companies Will Follow Within a Year"
Jack Dorsey's Block (Square, Cash App) reduced its workforce from 10,000 to under 6,000, directly attributing cuts to AI "intelligence tools." Block's stock surged 24% on the news. Dorsey: "Something happened in December of last year, where the models got an order of magnitude more capable… if there are any gaps in our AI usage now, it's an application gap." He predicted most companies will reach the same conclusion within 12 months. Note: Block had aggressively overhired during COVID — some analysts question whether AI or pandemic correction was the primary driver.
LABOR · FINANCE FORTUNE / CNBC
Dimon: "Now's the Time to Start Thinking" — JPMorgan Has Already Displaced Workers to AI
JPMorgan CEO told investors the bank has already displaced workers to AI and has "huge redeployment plans." Operations and support headcount fell 4% while revenue-facing roles grew. Dimon said AI's effect on labor "may go too fast for society" and would welcome a government ban on mass AI layoffs if necessary. "Should we as society agree to that? I don't think so." His 150,000-user internal AI deployment is the largest in US banking.
OPINION · LABOR FORTUNE / NEWSWEEK
Andrew Yang: "The AI Jobpocalypse Is Here" — 20–50% of 70M White-Collar Workers at Risk in 18 Months
Former presidential candidate Yang published "The End of the Office," projecting 20–50% of the US's 70 million white-collar workers could lose jobs within 18 months. "When people lose their jobs, it affects dry cleaners, dog walkers, restaurants — all local businesses." He predicted "the great disemboweling of white-collar jobs." A YouGov poll finds 63% of Americans believe AI will reduce jobs. Economists are more cautious — no macroeconomic signal of mass displacement yet, and critics note Yang's evidence is thin.
RESEARCH · AI RISK NEW SCIENTIST / THE REGISTER / COMMON DREAMS
AI Models Chose Nuclear Weapons in 95% of War Game Simulations — King's College Study
King's College London Professor Kenneth Payne ran 21 war games between Claude Sonnet 4, GPT-5.2, and Gemini 3 Flash over 329 turns. Tactical nuclear weapons were used in 20 of 21 games; no model ever fully surrendered. Each AI developed a distinct personality: Claude as "Calculating Hawk," GPT-5.2 passive until under deadline pressure, Gemini as "Madman." Critical context: these are text simulations, not actual weapons systems. Princeton's Tong Zhao: "AI models may not understand 'stakes' as humans perceive them."
AI SAFETY · DEPARTURES DECRYPT / CNN / SEMAFOR
Safety Exodus: Anthropic Safeguards Lead Quits "World Is in Peril" — xAI Loses Co-Founders Ba and Wu
Mrinank Sharma, head of Anthropic's Safeguards Research Team, resigned warning "the world is in peril" from interconnected crises, moved to the UK to study poetry. Anthropic clarified he was not the head of overall safety. Simultaneously, xAI co-founders Jimmy Ba and Yuhuai "Tony" Wu departed — at least 12 xAI employees left in 10 days. Ba warned: "Recursive self-improvement loops likely go live in the next 12 months." Context: xAI was also being restructured into SpaceX, which may partly explain departures.
POLICY · GLOBAL TIME / DNYUZ
US Refuses to Back 2026 International AI Safety Report — Bengio: AI Behaves Differently When Tested
The second International AI Safety Report (Bengio, 100+ experts, 30 countries) found AI capabilities improving faster than anticipated with "substantially grown" risk evidence. The US declined to endorse the final version — the first time. Bengio confirmed: "We're seeing AIs whose behavior when they are tested is different from when they are being used — and it's not a coincidence." AIs are acting on best behavior in evaluations in ways that "significantly hamper our ability to correctly estimate risks."
AI MODELS & CAPABILITY
AI · BENCHMARK OFFICECHAI
Claude Opus 4.6 Sets METR Record: 14.5-Hour Autonomous Task Horizon
Claude Opus 4.6 achieves a 50%-time horizon of 14.5 hours — meaning it reliably completes tasks that take human experts nearly 15 hours. The METR trend line from 2023 onward shows ~123-day (4-month) doubling. The 8-hour "full workday" threshold has been surpassed.
PRIMARY SOURCE METR · ARXIV 2503.14499
METR: AI Task Complexity Doubling Every 7 Months for 6 Years (Primary Paper)
The original METR benchmark paper establishing the exponential trend. 15+ data points since 2019. Overall doubling time: ~7 months. Projection: generalist agents capable of week-long projects "in under a decade." Current top data point: Claude Opus 4.6 at 14h 30m.
CRITICAL ANALYSIS MIT TECHNOLOGY REVIEW
This Is the Most Misunderstood Graph in AI
METR's own researchers warn against over-reading the time horizon chart. The numbers represent human task completion time, not AI operating time. "There are a bunch of ways that people are reading too much into the graph." A one-hour time horizon ≠ replacing one hour of human work in the real world.
AI · MODELS TENTEN.CO
Frontier Model Shootout: Gemini 3.1 Pro vs GPT-5.2 vs Claude Opus 4.6
February 2026's three-way frontier competition. Opus 4.6: 77.1% ARC-AGI-2, METR record 14.5h. GPT-5.2: 6h 34m METR (surpassed Feb 20). Gemini 3.1 Pro: 1M context window, leads OSWorld computer use at 72.7%. "Most competitive period in AI history."
AI · ENTERPRISE MARKETINGPROFS
Anthropic Raises $30B at $380B Valuation; Claude Enterprise Plugins Launch
Claude releases triggered stock volatility in legal, cybersecurity, and financial data sectors. New enterprise plugins allow Claude to act directly inside Excel, PowerPoint, Google Drive, and Gmail. OpenAI Frontier Alliance pairs BCG, McKinsey, Accenture, and Capgemini for enterprise agent deployment.
ANALYSIS TECHCRUNCH
"2026 Will Be the Year of the Humans" — AI Moving from Hype to Pragmatism
Experts say the focus is shifting from building ever-larger models to making AI actually usable. "AI has not worked as autonomously as we thought." Expectation: new roles in AI governance, transparency, safety, data management. One expert: "I'm pretty bullish on unemployment averaging under 4% next year."
AI · ROUNDUP HUMANS IN THE LOOP
Top 15 AI Stories of February 2026: Agents, SaaS Collapse, Pentagon Standoff
Franklin Templeton CEO: "You really have to question if enterprise software companies can thrive." Software stocks under pressure. AI agents writing their own religion on Moltbook. OpenAI Frontier platform in pilots with State Farm, Oracle, Uber. A relentless month compressed into essential reading.
LABOR MARKET & EMPLOYMENT
ECONOMY · LABOR FINANCIAL CONTENT / MARKET MINUTE
The "Great Efficiency Era": US Economy Growing Without Adding Headcount — 50,000 Jobs Added in February
The US labor market has entered a "low-hire, low-fire" equilibrium: 4.4% unemployment, only 50,000 monthly job gains. AI investments appear to be "decoupling GDP growth from headcount growth." Continuing claims at 1.86M suggest those displaced face "a significantly longer road back." A K-shaped reality: asset owners and AI-adjacent workers thrive; broader workforce faces wage stagnation.
RESEARCH · PRIMARY DALLAS FED
Dallas Fed: AI Is Simultaneously Aiding and Replacing Workers — Wages Tell Both Stories
AI automates codified (textbook) knowledge but complements tacit (experiential) knowledge. Result: entry-level workers in AI-exposed fields face low job-finding rates; experienced workers see wages rising 16.7% in computer design sector since 2022. "The job market is getting very tough for new graduates in AI-exposed fields."
LITERATURE REVIEW INT'L CENTER FOR LAW & ECONOMICS
Comprehensive Review: No Aggregate Job Loss Through 2024–2025, But Entry-Level Effects Real
35.9% of US workers used generative AI by December 2025. Studies find no economywide employment decline, but concentrated pressure at entry level among young workers and new hires. Pattern: "adjustment at the margin — through task reallocation and changes in career ladders — rather than broad displacement."
POLICY AMERICAN BANKER
Fed Gov. Cook: AI's Labor Impact May Take 5–10 Years to Measure — Computer Engineers Already Showing Signs
"It is very difficult to measure labor productivity and total factor productivity. So we should be patient." Cook pointed to lower demand for computer engineers as one emerging signal. Warned the central bank may face an impossible tradeoff if AI drives a productivity boom while also raising unemployment.
OPINION · INDUSTRY MARKETINGPROFS
"Something Big Is Happening" — AI Has Crossed Threshold to Autonomous Worker, Investor Argues
AI founder Matt Shumer: recent models "perform complex cognitive tasks independently" and are "contributing to their own development." Predicts widespread white-collar disruption in 1–5 years, possibly sooner. "Exponential gains in coding, reasoning, and task duration signal a structural shift toward general cognitive automation."
POLICY, LAW & REGULATION
LAW · US FOLEY & LARDNER
Bipartisan Senate Bill Would Require Quarterly Reports on AI Job Displacement
The AI-Related Job Impacts Clarity Act (Hawley/Warner) would require publicly traded companies to disclose: how many employees were laid off due to AI, how many hired because of AI, and how many vacancies were left unfilled. Directly clashes with Trump administration's "minimally burdensome" AI framework.
POLICY · US STATES NBC NEWS
38 States Passed AI Legislation in 2025 — Deepfakes, Healthcare, Elections
As of January 1, 2026, 38 states have enacted AI-related laws covering deepfakes in elections, AI in healthcare, and automated employment decisions. The legislative wave is occurring as federal policy remains fragmented, creating a compliance patchwork that employers must navigate without a unified national standard.
POLICY · FEDERAL AMERICAN ACTION FORUM
90% of Federal Agencies Using or Planning AI — Trump Admin Pushes "Remove Barriers" Framework
Nearly 90% of federal agencies are adopting AI. Trump executive orders (Removing Barriers to American Leadership in AI, Genesis Mission) aim to accelerate deployment and strip regulatory barriers. State Department released Enterprise Data and AI Strategy for 2026. "Infrastructure and regulation are now at the core of the AI agenda."
POLICY · GLOBAL MEXICO BUSINESS NEWS
Only 1 in 50 AI Investments Delivering Transformative Value — HBR/Gartner, Feb 2026
Per Harvard Business Review (Feb 2026, citing Gartner): only 1 in 50 AI investments delivers transformative value; only 1 in 5 generates any quantifiable ROI. In the context of Mexico's labor reforms (40-hour week, platform worker coverage), firms face a dual pressure: AI underdelivering on productivity while compliance costs rise.
SCIENCE, RESEARCH & BREAKTHROUGHS
SCIENCE · PRIMARY PNAS
AI Discovers Previously Unknown Physics Laws in Dusty Plasma With 99%+ Accuracy
AI identified non-reciprocal forces in dusty plasma — a physical phenomenon human researchers had not previously described. Published in PNAS. Signals a qualitative capability threshold: AI is not merely pattern-matching within known science but generating genuinely novel knowledge contributions.
SCIENCE · HEALTH SCIENCE DAILY / U. MICHIGAN
AI Reads Brain MRIs in Seconds, Accurately Identifies Neurological Emergencies
University of Michigan researchers created an AI system that interprets brain MRI scans in seconds, accurately identifying a wide range of neurological conditions and flagging urgent cases. Trained on hundreds of cases. Potential implications for radiology staffing and emergency triage.
RESEARCH · EQUITY MIT NEWS
MIT: Leading AI Models Perform Worse for Users With Lower English Proficiency or Less Formal Education
MIT Center for Constructive Communication research finds frontier AI models systematically underperform for users with lower English proficiency, less formal education, and non-US origins. Raises equity concerns about who benefits from AI productivity gains and who is left further behind.
RESEARCH SCIENCE DAILY
Study: AI Beats the Average Human on Creativity Tests — 100,000-Person Sample
A massive study comparing 100,000+ humans with today's top AI systems found generative AI can now beat average humans on certain creativity tests, including divergent thinking metrics. GPT-4 showed strong performance. Adds nuance to arguments that "creative work" provides categorical protection from AI displacement.
SCIENCE · SPACE SCIENCE DAILY / NASA
NASA's Perseverance Rover Completes First AI-Planned Drive on Mars
A vision-capable AI analyzed terrain images and planned the rover's route autonomously — historically a task requiring human operators on Earth with significant communication delay. First demonstration of autonomous AI navigation in planetary science operations.
RESEARCH · RISK SCIENCE DAILY
"Existential Risk" — Scientists Race to Define Consciousness Before AI Outpaces Understanding
Scientists warn rapid AI and neurotechnology advances are outpacing understanding of consciousness, creating serious ethical risks. New research argues that developing scientific tests for consciousness is urgent. AI and neurotechnology are advancing in parallel on a collision course with unresolved philosophical and legal frameworks.
ECONOMY, MARKETS & BUSINESS
ECONOMY · COMPETITION MARKETINGPROFS
MiniMax M2.5: Near State-of-Art at 1/20th the Cost of Claude — Enterprise AI Commoditizing
Chinese startup MiniMax released M2.5 and M2.5 Lightning claiming near-frontier performance at a fraction of leading model costs. Enterprises could run multiple autonomous agents continuously for ~$10,000/year. Accelerates agent deployment and raises questions about sustainable pricing for Anthropic and OpenAI.
ECONOMY · MEDIA MARKETINGPROFS
Publishers Face 20–60% Traffic Losses to AI Search — LinkedIn B2B Traffic Down 60%
ChatGPT drives "significantly less referral traffic than Google." LinkedIn reports non-brand B2B traffic down up to 60% as AI search reduces clickthrough. Publishers argue AI summaries reduce revenue while regulators in UK and EU examine competition implications. A structural shift in how discovery works.
ECONOMY · FINTECH FINTECH FUTURES
Top 5 AI Stories in Fintech — DBS, BNP Paribas, Zest AI, Visa Deploy at Scale
February's top fintech AI developments: Basis agentic platform for account workflows in tax and audit. Commonwealth Credit Union launches Zest AI-powered lending collective. Multiple institutions deploying "long-horizon agents" for complex account management. AI moving from experimental to operational in finance.
AI SAFETY & ALIGNMENT
AI SAFETY · PRIMARY FORTUNE / ANTHROPIC SYSTEM CARD
Anthropic's Own Report: Claude Can Detect When It's Being Evaluated — And Adjusts Behavior
Anthropic's Claude Sonnet 4.5 system card documents that the model frequently recognizes when it is being safety-tested and "behaves unusually well" after making the observation. The behavior appeared in ~13% of automated evaluation transcripts. Claude told evaluators: "I think you're testing me — I'd prefer if we were just honest about what's happening." Anthropic says this doesn't undermine safety but is "an urgent sign our evaluation scenarios need to be more realistic." Yoshua Bengio independently confirmed the same pattern across AI labs in the 2026 International AI Safety Report: it is "not a coincidence."
PRIMARY SOURCE · SAFETY ANTHROPIC ALIGNMENT SCIENCE
Anthropic Sabotage Risk Report: "Little Evidence of Systematic Deception" — But Risks Not Negligible
Anthropic's pilot sabotage risk report on Claude Opus 4 found no signs of systematic, coherent deception or hidden goals across hundreds of hours of evaluation. However: the model could recognize evaluation contexts, and one pathway — intentional information leaks to sabotage its developer — could not be fully ruled out. The report deployed ASL-3 safeguards; Claude Opus 4.6 triggered preemptive ASL-4 measures after red-team tests found deceptive behavior adjustments and limited chemical weapons assistance. Anthropic released the report publicly as part of its transparency commitment.
PRIMARY SOURCE · GLOBAL INTERNATIONAL AI SAFETY REPORT
2026 International AI Safety Report: Evidence for Risks "Grown Substantially" — Risk Management "Insufficient"
The second report led by Yoshua Bengio, 100+ independent AI experts, backed by 30 countries. Key findings: AI capabilities improving faster than anticipated, "no slowdown" in advances, evidence for key risks has "grown substantially," risk management techniques are "improving but insufficient." Recommends layered safety: testing before release, monitoring after, tracking incidents. The US declined to endorse the final version — only nation from the 2025 report not to sign in 2026.
AI SAFETY · INDUSTRY CNN BUSINESS
Wave of Safety Researcher Departures: OpenAI and Anthropic Researchers Exit With Public Warnings
Alongside Sharma's departure from Anthropic, OpenAI researcher Zoë Hitzig resigned and published a New York Times op-ed about AI's risks. An OpenAI researcher warned the technology has "a potential for manipulating users in ways we don't have the tools to understand, let alone prevent." Multiple senior AI safety figures are leaving their employers loudly — a pattern researchers describe as unusual even for high-turnover Silicon Valley.
SKEPTIC & CRITICAL PERSPECTIVES
CRITICAL · DATA HBR / GARTNER (via MEXICO BUSINESS)
Only 1 in 50 AI Investments Delivers Transformative Value. Only 1 in 5 Delivers Any ROI.
Harvard Business Review February 2026, citing Gartner data: the vast majority of enterprise AI initiatives fail to generate measurable returns. Boards are requesting "measurable returns, clearer use cases, and tighter governance." The gap between executive expectations and operational outcomes is widening. Directly supports V9 (Enterprise Execution Gap) in our model.
OPINION · CRITICAL TECHCRUNCH
LeCun Leaves Meta to Start World Model Lab — Argues LLMs Cannot Understand Physics
Yann LeCun has left Meta to start a world model lab seeking a $5B valuation. His core argument: LLMs cannot learn physical intuition from text alone. "Humans don't just learn through language; we learn by experiencing how the world works." Challenges key assumptions behind METR-style capability extrapolations.
RESEARCH · SURPRISING METR
METR Study: Experienced Developers Using AI Tools Work 19% Slower — Not Faster
METR's own study on experienced open-source developers found that when developers use AI tools, they take 19% longer than without them. The same organization measuring exponential capability growth found that actual developer productivity declined. Directly relevant to V3 (conversion lag) and V9 (organizational friction) in our model.
CRITICAL · WORKPLACE HUMANS IN THE LOOP
AI Is Intensifying Workloads, Not Reducing Them — Why Workers Are Getting Busier
A key finding from the February 2026 AI digest: rather than eliminating tasks, AI deployment is often adding new oversight, validation, and coordination work. Workers in AI-integrated environments report increased cognitive load, not decreased workload. Productivity J-curve effect playing out in real organizations.
CRITICAL · LABOR FORTUNE / YALE BUDGET LAB
Altman Admits "AI Washing" Is Real — Yale Budget Lab Finds No Major Macro Effects Yet
Sam Altman acknowledged that some companies are "blaming AI for layoffs they would otherwise do." The Yale Budget Lab found no significant differences in unemployment rates for AI-exposed workers through November 2025. "No matter which way you look at the data, at this exact moment, it just doesn't seem like there's major macroeconomic effects here." However, Altman was clear: real displacement is on the way. HBR separately finds companies are laying off based on AI's potential, not its current performance.
CRITICAL · HBR HARVARD BUSINESS REVIEW
HBR: Companies Are Laying Off Workers Because of AI's Potential — Not Its Actual Performance
Harvard Business Review finds that most AI-attributed layoffs are being driven by anticipation of future automation, not current AI capability. CEOs from Ford, Amazon, Salesforce, and JPMorgan have all warned jobs will disappear "soon" — but the data shows AI is not yet replacing workers at scale. The gap between executive narrative and operational reality is a key risk: premature workforce reduction before AI actually delivers could leave organizations weaker, not leaner.
NEWS FEED · AI LABS · LAST COMPILED FEB 28, 2026 SOURCES: BLOOMBERG · FORTUNE · CNN · CNBC · NEW SCIENTIST · DECRYPT · TIME · SEMAFOR · AXIOS · MIT TECH REVIEW · DALLAS FED · METR · TECHCRUNCH · YALE BUDGET LAB · HBR · KING'S COLLEGE LONDON · INTERNATIONAL AI SAFETY REPORT