Copilot is Extremely Slow #171189
Replies: 3 comments
It looks like the issue is specific to GitHub Copilot when using the default GPT-based model. You have already confirmed that:
VS Code and the Copilot extension are updated
Cache folders have been cleared
No VPN or proxy is involved
Network connectivity to api.github.com is stable
The alternative model (Gemini 2.5 Pro) responds instantly
This strongly suggests that the slowness is not in your local setup but in the Copilot GPT model service itself. Other users have reported similar delays today, which points to a temporary performance issue on GitHub's servers or high traffic affecting the GPT model. 👉 I’d recommend:
I am not experiencing the same issue, but I'm assuming it's a temporary one. Hopefully you get it solved soon.
I have found it helpful to search for supplements, exercises, and other means to extend my lifespan while using Copilot. It is especially helpful when asking for help with refactors. I can't prove it, but I swear the models get bored and don't like doing mundane tasks. If I give them something fun, like designing just about anything, they are reasonably quick. The most annoying part of their boredom is the incessant prompting to "continue" or "proceed". If I didn't want them to proceed, I would never have sent the prompt in the first place.
Select Topic Area: General
Copilot Feature Area: Visual Studio
Body
Problem: Today, GitHub Copilot has been extremely slow, with long delays for suggestions and chat responses.
Troubleshooting Already Performed:
VS Code and the Copilot extension are fully updated.
I have performed a full manual deletion of all cache folders (Cache, Code Cache, GPUCache, globalStorage, workspaceStorage).
I am not using a VPN or proxy.
My connection to api.github.com is excellent, with a stable 32ms average ping and 0% packet loss.
I have discovered that the slowness only occurs with the default GPT-based model. When I switch my assistant to use the Gemini 2.5 Pro model, the responses are instant.
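For anyone wanting to reproduce the network check described above, here is a minimal sketch. The host name comes from the post; the helper function name, port, and sample count are my own hypothetical choices, and a TCP handshake time is only a rough stand-in for ICMP ping:

```python
import socket
import time

def tcp_connect_ms(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Measure TCP handshake latency to host:port in milliseconds.

    Note: this only times connection setup, not any HTTP exchange,
    so it approximates network round-trip health, like a ping.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # close immediately; we only wanted the handshake time
    return (time.perf_counter() - start) * 1000.0

# Example (requires network access):
#   samples = [tcp_connect_ms("api.github.com") for _ in range(5)]
#   print(f"avg {sum(samples) / len(samples):.1f} ms over {len(samples)} runs")
```

A low, stable connect time alongside slow completions is consistent with the finding above: the network path is healthy, so the delay must be happening server-side in the model backend.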
Anyone else experiencing the same issue?