Is the AI all-you-can-eat buffet over? GitHub Copilot's costs are too high to sustain; starting June 1st, billing will be usage-based.

Unable to bear the high cost of compute, GitHub announced that Copilot will end its "unlimited" mode on June 1st and switch to usage-based billing. The move has triggered strong backlash and a wave of cancellations among developers, highlighting the infrastructure challenges facing the AI industry.

The era of AI unlimited usage ends, GitHub Copilot changes its billing model

Once praised as the "all-you-can-eat" AI service, GitHub Copilot is now moving from an unlimited buffet to an à la carte model.

According to The Register, Microsoft-owned GitHub admitted in a recent announcement that it can no longer absorb the losses and has decided that, starting June 1, 2026, billing will shift from request-based to usage-based.

Under the original model, subscribers were charged per request regardless of task complexity, so computation-heavy prompts cost GitHub far more to serve than subscription revenue covered.

GitHub product manager Mario Rodriguez revealed that a simple chat turn and a multi-hour AI coding task could cost subscribers the same, with the company absorbing the rising inference costs, making the current model unsustainable.

Image source: GitHub Copilot announcement

From June 1st, GitHub Copilot introduces virtual billing units

Previously, GitHub Copilot offered unlimited AI assistance for a fixed monthly fee, earning it the nickname "AI all-you-can-eat." Compared with mainstream options like Codex, Cursor, and Claude Code, it has been a low-key, high-value choice for developers.

However, with the shift to usage-based billing, GitHub Copilot's charges will now be directly tied to token consumption.

Because different models have different rates, GitHub has designed a virtual unit called GitHub AI points, each worth US$0.01. Microsoft will convert a user's input, output, and cache tokens into points based on API rate tables.

Rodriguez stated that future GitHub Copilot subscription plans will include a fixed monthly amount of AI points, with optional additional purchases.
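The conversion described above can be sketched roughly as follows. Note that the per-token rates below are illustrative placeholders, not GitHub's published rate table; only the $0.01-per-point value comes from the announcement.

```python
# Hypothetical sketch of the "AI points" conversion: tokens -> USD -> points.
# Rates are assumed for a fictional model tier, NOT GitHub's actual rate table.

POINT_VALUE_USD = 0.01  # one GitHub AI point = one US cent (per the announcement)

# Assumed USD cost per million tokens for a hypothetical model
RATES_USD_PER_MTOK = {"input": 3.00, "output": 15.00, "cache_read": 0.30}

def tokens_to_points(input_tok: int, output_tok: int, cache_tok: int = 0) -> float:
    """Convert a request's token counts into AI points (1 point = $0.01)."""
    cost_usd = (
        input_tok * RATES_USD_PER_MTOK["input"]
        + output_tok * RATES_USD_PER_MTOK["output"]
        + cache_tok * RATES_USD_PER_MTOK["cache_read"]
    ) / 1_000_000
    return cost_usd / POINT_VALUE_USD

# A chat turn with 2,000 input tokens and 800 output tokens, under these
# assumed rates, costs $0.018, i.e. 1.8 points:
print(tokens_to_points(2_000, 800))
```

The practical consequence is that a long agentic coding session, which can burn millions of output tokens, would consume points orders of magnitude faster than a short chat, which is exactly the cost asymmetry the flat-rate model hid.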

Since usage-based billing is inherently unpredictable (users cannot know in advance how many tokens a given prompt will consume, and comparing costs across tools is harder still), GitHub plans to launch a billing preview feature in early May so users can estimate costs before the June transition.

Reddit community backlash, users threaten to cancel subscriptions

Unsurprisingly, the billing change for GitHub Copilot has sparked significant backlash on Reddit.

Some users responded that if billing tracks usage anyway, subscription-free alternatives like OpenRouter already exist, and that the new system effectively charges users the full API list price, erasing the value of a subscription.

Many annual subscribers feel shortchanged, estimating that costs for certain models could rise by dozens of times, and are threatening to cancel.

There is also a wave of calls to switch platforms, with many developers indicating they will switch to tools like Claude Code or Cursor, or even upgrade hardware to run open-source models like Alibaba’s Qwen 3.6 27B locally.

Image source: Reddit

OpenClaw sparks new wave, AI infrastructure overload

GitHub Copilot’s change reflects the broader infrastructure challenges faced by the AI industry.

In February this year, the open-source AI assistant OpenClaw, nicknamed "Lobster," drew widespread attention, prompting many developers to run AI agents around the clock on all kinds of tasks; improved model capabilities encouraged still more exploration of AI coding.

This has left AI companies that previously subsidized usage through subscriptions facing demand far beyond their inference infrastructure capacity; even giants like Anthropic and OpenAI have hit capacity limits. Claude Code recently fixed major bugs that had degraded output quality and increased latency (a phenomenon users call "dumbing down") and reset user quotas.

Until the industry finds a way to balance costs and user experience, the resource consumption driven by massive AI computational demands will continue to cause a “price correction effect” across the entire AI sector.

Further reading:
Claude Code really got dumber! Officially admits three major bugs, user subscription quotas fully reset

Legislators propose banning data centers, environmental groups criticize ecological disaster! Meanwhile, the First Lady is walking into the White House with an AI robot
