github copilot language models is empty #282931

@huhayayo

Description

Type: Bug

The GitHub Copilot language model list is empty; when prompting, "Language model unavailable" is returned.

Extension version: 0.35.0
VS Code version: Code 1.107.0 (618725e, 2025-12-10T07:43:47.883Z)
OS version: Windows_NT x64 10.0.19045
Modes:
Remote OS version: Linux x64 6.8.0-86-generic
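Since the failures happen inside a Remote SSH session, it may help to verify that the remote host itself can complete a TLS handshake with the GitHub endpoints. This is a minimal stdlib sketch, not part of the extension; the `api.githubcopilot.com` hostname is an assumption based on public Copilot documentation, so adjust it for your network or proxy setup:

```python
import socket
import ssl

def check_tls(host, port=443, timeout=5):
    """Attempt a TCP connection plus TLS handshake; return 'ok' or the error."""
    try:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return f"ok: {tls.version()}"
    except OSError as exc:  # DNS failure, refused, reset, timeout...
        return f"failed: {exc}"

for host in ("api.github.com", "api.githubcopilot.com"):
    print(host, "->", check_tls(host))
```

If this prints `failed: ... reset by peer` for either host, the ECONNRESET in the logs below is likely a network-level issue (proxy, firewall, or TLS interception) on the remote host rather than an extension bug.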

Logs
Trace: [NES][DiagnosticsInlineCompletionProvider][Import] created
Trace: [NES][DiagnosticsInlineCompletionProvider][Async] created
Info: [code-referencing] Public code references are enabled.
Debug: Finished handling auth change event.
Error: TypeError: fetch failed
	at node:internal/deps/undici/undici:14900:13
	at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
	at async n._fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:9430)
	at async n.fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:9078)
	at async Z4n (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:12547)
	at async Qv.fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:15297)
	at async QO._fetchLatestModels (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:2513:13399)
  Error: read ECONNRESET
  	at TLSWrap.onStreamRead (node:internal/stream_base_commons:216:20): Failed to fetch model list
Error: TypeError: fetch failed
	at node:internal/deps/undici/undici:14900:13
	at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
	at async n._fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:9430)
	at async n.fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:9078)
	at async Z4n (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:12547)
	at async Qv.fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:15297)
	at async rW._doRefreshRemoteAgents (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:2066:9814)
  Error: read ECONNRESET
  	at TLSWrap.onStreamRead (node:internal/stream_base_commons:216:20): Failed to load remote copilot agents
Error: TypeError: fetch failed
	at node:internal/deps/undici/undici:14900:13
	at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
	at async n._fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:9430)
	at async n.fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:9078)
	at async Z4n (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:12547)
	at async Qv.fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:15297)
	at async tC._fetchModels (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:5764:4542)
  Error: read ECONNRESET
  	at TLSWrap.onStreamRead (node:internal/stream_base_commons:216:20): Failed to fetch models (21cd150a-8a74-4d5a-8e54-615a245ab13b)
Debug: Refetch model metadata: Skipped.
Trace: [BaseExperimentationService] User info changed, refreshed treatments
Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: true.
Info: BYOK: Copilot Chat known models list fetched successfully.
Debug: [RateLimit] REST rate limit remaining: 4964, user
Error: TypeError: fetch failed
	at node:internal/deps/undici/undici:14900:13
	at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
	at async n._fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:9430)
	at async n.fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:9078)
	at async Z4n (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:12547)
	at async Qv.fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:15297)
	at async Qv.fetchWithPagination (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:13757)
	at async dP.getAllOpenSessions (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:25902)
	at async /home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1879:4993
  Error: read ECONNRESET
  	at TLSWrap.onStreamRead (node:internal/stream_base_commons:216:20)
Debug: [context keys] Updating context keys.
Error: TypeError: fetch failed
	at node:internal/deps/undici/undici:14900:13
	at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
	at async n._fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:9430)
	at async n.fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:9078)
	at async Z4n (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:12547)
	at async Qv.fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:15297)
	at async tC._fetchModels (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:5764:4542)
	at async tC.getAllChatModels (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:5764:2353)
	at async uM.getAllChatEndpoints (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:5764:13748)
	at async a$._provideLanguageModelChatInfo (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1219:1491)
	at async kA.$provideLanguageModelChatInfo (file:///home/max/.vscode-server/cli/servers/Stable-618725e67565b290ba4da6fe2d29f8fa1d4e3622/server/out/vs/workbench/api/node/extensionHostProcess.js:118:30901)
  Error: read ECONNRESET
  	at TLSWrap.onStreamRead (node:internal/stream_base_commons:216:20): Failed to fetch models (9fd82e45-55e5-44d5-92b6-7f5c69590257)
Debug: Refetch model metadata: Skipped.
Error: Error: Unable to verify Ollama server version. Please ensure you have Ollama version 0.6.4 or higher installed. If you're running an older version, please upgrade from https://ollama.ai
	at GS._checkOllamaVersion (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:13037)
	at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
	at async GS.getAllModels (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:10977)
	at async GS.provideLanguageModelChatInformation (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:8846)
	at async kA.$provideLanguageModelChatInfo (file:///home/max/.vscode-server/cli/servers/Stable-618725e67565b290ba4da6fe2d29f8fa1d4e3622/server/out/vs/workbench/api/node/extensionHostProcess.js:118:30901): Error fetching available Ollama models
Error: TypeError: fetch failed
	at node:internal/deps/undici/undici:14900:13
	at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
	at async n._fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:9430)
	at async n.fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:9078)
	at async Z4n (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:12547)
	at async Qv.fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:15297)
	at async tC._fetchModels (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:5764:4542)
	at async tC.getAllChatModels (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:5764:2353)
	at async uM.getAllChatEndpoints (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:5764:13748)
	at async a$._provideLanguageModelChatInfo (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1219:1491)
	at async kA.$provideLanguageModelChatInfo (file:///home/max/.vscode-server/cli/servers/Stable-618725e67565b290ba4da6fe2d29f8fa1d4e3622/server/out/vs/workbench/api/node/extensionHostProcess.js:118:30901)
  Error: read ECONNRESET
  	at TLSWrap.onStreamRead (node:internal/stream_base_commons:216:20): Failed to fetch models (8da836f6-6f96-4df7-9bbd-3c2be557e53e)
Debug: Refetch model metadata: Skipped.
Error: Error: Unable to verify Ollama server version. Please ensure you have Ollama version 0.6.4 or higher installed. If you're running an older version, please upgrade from https://ollama.ai
	at GS._checkOllamaVersion (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:13037)
	at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
	at async GS.getAllModels (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:10977)
	at async GS.provideLanguageModelChatInformation (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:8846)
	at async kA.$provideLanguageModelChatInfo (file:///home/max/.vscode-server/cli/servers/Stable-618725e67565b290ba4da6fe2d29f8fa1d4e3622/server/out/vs/workbench/api/node/extensionHostProcess.js:118:30901): Error fetching available Ollama models
Error: TypeError: fetch failed
	at node:internal/deps/undici/undici:14900:13
	at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
	at async n._fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:9430)
	at async n.fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:9078)
	at async Z4n (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:12547)
	at async Qv.fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:15297)
	at async tC._fetchModels (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:5764:4542)
	at async tC.getAllChatModels (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:5764:2353)
	at async uM.getAllChatEndpoints (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:5764:13748)
	at async a$._provideLanguageModelChatInfo (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1219:1491)
	at async kA.$provideLanguageModelChatInfo (file:///home/max/.vscode-server/cli/servers/Stable-618725e67565b290ba4da6fe2d29f8fa1d4e3622/server/out/vs/workbench/api/node/extensionHostProcess.js:118:30901)
  Error: read ECONNRESET
  	at TLSWrap.onStreamRead (node:internal/stream_base_commons:216:20): Failed to fetch models (5cab7e33-b4e1-45ed-b21c-7a566676d6c5)
Debug: Refetch model metadata: Skipped.
Error: Error: Unable to verify Ollama server version. Please ensure you have Ollama version 0.6.4 or higher installed. If you're running an older version, please upgrade from https://ollama.ai
	at GS._checkOllamaVersion (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:13037)
	at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
	at async GS.getAllModels (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:10977)
	at async GS.provideLanguageModelChatInformation (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:8846)
	at async kA.$provideLanguageModelChatInfo (file:///home/max/.vscode-server/cli/servers/Stable-618725e67565b290ba4da6fe2d29f8fa1d4e3622/server/out/vs/workbench/api/node/extensionHostProcess.js:118:30901): Error fetching available Ollama models
Error: TypeError: fetch failed
	at node:internal/deps/undici/undici:14900:13
	at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
	at async n._fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:9430)
	at async n.fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:9078)
	at async Z4n (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:12547)
	at async Qv.fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:15297)
	at async tC._fetchModels (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:5764:4542)
	at async tC.getAllChatModels (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:5764:2353)
	at async uM.getAllChatEndpoints (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:5764:13748)
	at async a$._provideLanguageModelChatInfo (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1219:1491)
	at async kA.$provideLanguageModelChatInfo (file:///home/max/.vscode-server/cli/servers/Stable-618725e67565b290ba4da6fe2d29f8fa1d4e3622/server/out/vs/workbench/api/node/extensionHostProcess.js:118:30901)
  Error: read ECONNRESET
  	at TLSWrap.onStreamRead (node:internal/stream_base_commons:216:20): Failed to fetch models (fd1efc59-4cbb-461b-b8d3-3a100486d461)
Debug: Refetch model metadata: Skipped.
Error: Error: Unable to verify Ollama server version. Please ensure you have Ollama version 0.6.4 or higher installed. If you're running an older version, please upgrade from https://ollama.ai
	at GS._checkOllamaVersion (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:13037)
	at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
	at async GS.getAllModels (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:10977)
	at async GS.provideLanguageModelChatInformation (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:8846)
	at async kA.$provideLanguageModelChatInfo (file:///home/max/.vscode-server/cli/servers/Stable-618725e67565b290ba4da6fe2d29f8fa1d4e3622/server/out/vs/workbench/api/node/extensionHostProcess.js:118:30901): Error fetching available Ollama models
Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: false.
Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: true.
Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: false.
Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: true.
Trace: [NES][Triggerer][onDidChangeTextEditorSelection] created
Trace: [NES][Triggerer][onDidChangeTextEditorSelection][editorSwitch] document switch disabled
Trace: [NES][Triggerer][onDidChangeTextEditorSelection] Return: document not tracked - does not have recent changes
Trace: [Diagnostics] got diagnostics 
Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: false.
Error: Error: fetch failed
	at Qx.getAllModels (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:8606)
	at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
	at async Qx.provideLanguageModelChatInformation (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:8954)
	at async kA.$provideLanguageModelChatInfo (file:///home/max/.vscode-server/cli/servers/Stable-618725e67565b290ba4da6fe2d29f8fa1d4e3622/server/out/vs/workbench/api/node/extensionHostProcess.js:118:30901): Error fetching available OpenAI models
Error: TypeError: fetch failed
	at node:internal/deps/undici/undici:14900:13
	at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
	at async n._fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:9430)
	at async n.fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:9078)
	at async Z4n (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:12547)
	at async Qv.fetch (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:4451:15297)
	at async tC._fetchModels (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:5764:4542)
	at async tC.getAllChatModels (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:5764:2353)
	at async uM.getAllChatEndpoints (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:5764:13748)
	at async a$._provideLanguageModelChatInfo (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1219:1491)
	at async kA.$provideLanguageModelChatInfo (file:///home/max/.vscode-server/cli/servers/Stable-618725e67565b290ba4da6fe2d29f8fa1d4e3622/server/out/vs/workbench/api/node/extensionHostProcess.js:118:30901)
  Error: read ECONNRESET
  	at TLSWrap.onStreamRead (node:internal/stream_base_commons:216:20): Failed to fetch models (8f1b105a-77bb-4c5c-b655-9d7152b0c5e1)
Debug: Refetch model metadata: Skipped.
Error: Error: Unable to verify Ollama server version. Please ensure you have Ollama version 0.6.4 or higher installed. If you're running an older version, please upgrade from https://ollama.ai
	at GS._checkOllamaVersion (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:13037)
	at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
	at async GS.getAllModels (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:10977)
	at async GS.provideLanguageModelChatInformation (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:8846)
	at async kA.$provideLanguageModelChatInfo (file:///home/max/.vscode-server/cli/servers/Stable-618725e67565b290ba4da6fe2d29f8fa1d4e3622/server/out/vs/workbench/api/node/extensionHostProcess.js:118:30901): Error fetching available Ollama models
Error: Error: fetch failed
	at Qx.getAllModels (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:8606)
	at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
	at async Qx.provideLanguageModelChatInformation (/home/max/.vscode-server/extensions/github.copilot-chat-0.35.0/dist/extension.js:1220:8846)
	at async kA.$provideLanguageModelChatInfo (file:///home/max/.vscode-server/cli/servers/Stable-618725e67565b290ba4da6fe2d29f8fa1d4e3622/server/out/vs/workbench/api/node/extensionHostProcess.js:118:30901): Error fetching available OpenAI models
Trace: [Diagnostics] got diagnostics 
Debug: [context keys] Window state change. Needs offline check: false, active: false, focused: false.
Debug: [context keys] Window state change. Needs offline check: false, active: false, focused: true.
Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: true.
Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: false.
Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: true.
Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: false.
Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: true.
Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: false.
Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: true.
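The repeated "Unable to verify Ollama server version" errors above come from the extension's version check against the local Ollama server, which is failing for the same connectivity reasons. A quick way to see what that check sees, as a hedged stdlib sketch: the `/api/version` route comes from Ollama's public API documentation, and the default port 11434 is an assumption that does not hold if `OLLAMA_HOST` has been changed:

```python
import json
import urllib.request

def ollama_version(base="http://localhost:11434"):
    """GET /api/version on an Ollama server; return the version or an error note."""
    try:
        with urllib.request.urlopen(f"{base}/api/version", timeout=5) as resp:
            return json.load(resp).get("version", "unknown")
    except OSError as exc:  # connection refused, reset, DNS failure, timeout...
        return f"unreachable: {exc}"

print(ollama_version())
```

A reachable server should print a version string such as `0.6.4` or higher; `unreachable: ...` means the extension's version check would fail the same way the log shows.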
Request IDs

System Info
Item Value
CPUs 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz (8 x 2803)
GPU Status 2d_canvas: enabled
direct_rendering_display_compositor: disabled_off_ok
gpu_compositing: enabled
multiple_raster_threads: enabled_on
opengl: enabled_on
rasterization: enabled
raw_draw: disabled_off_ok
skia_graphite: disabled_off
trees_in_viz: disabled_off
video_decode: enabled
video_encode: enabled
webgl: enabled
webgl2: enabled
webgpu: enabled
webnn: disabled_off
Load (avg) undefined
Memory (System) 31.71GB (19.48GB free)
Process Argv --crash-reporter-id 7fc4acfb-0a5a-4d27-8ed5-ce4456a2feb1
Screen Reader no
VM 0%
Item Value
Remote SSH: SHDev
OS Linux x64 6.8.0-86-generic
CPUs AMD EPYC 9124 16-Core Processor (32 x 3686)
Memory (System) 62.49GB (56.19GB free)
VM 0%
A/B Experiments
vsliv368cf:30146710
pythonvspyt551:31249599
binariesv615:30325510
vscrp:30673768
nativeloc1:31344060
dwcopilot:31170013
dwoutputs:31242946
copilot_t_ci:31333650
e5gg6876:31282496
pythonrdcb7:31342333
aj953862:31281341
nes-set-on:31351930
6abeh943:31336334
cloudbuttont:31379625
todos-1:31405332
3efgi100_wstrepl:31403338
trigger-command-fix:31379601
use-responses-api:31390855
je187915:31401257
d5i5i512:31428709
ec5jj548:31422691
terminalsuggestenabled:31431119
cmp-ext-treat:31426748
cp_cls_c_966_ss:31426491
copilot6169-t2000-control:31431385
c0683394:31419495
478ah919:31426797
ge8j1254_inline_auto_hint_haiku:31431912
fa76a614:31426880
5j92g670_sonnet:31426787
rename_disabled:31433536
anthropic_thinking_t:31432745
406hc587_ask_agent:31428393
cp_cls_c_1081:31433293
copilot-nes-oct-trt:31432596

Metadata

Labels

info-needed: Issue requires more information from poster
