**`content/copilot/managing-copilot/managing-copilot-as-an-individual-subscriber/managing-your-copilot-plan/managing-copilot-policies-as-an-individual-subscriber.md`** (+1 −1)

```diff
@@ -54,7 +54,7 @@ You can choose whether your prompts and {% data variables.product.prodname_copil
 You can choose whether to allow the following AI models to be used as an alternative to {% data variables.product.prodname_copilot_short %}'s default model.
 
 * {% data variables.copilot.copilot_claude_sonnet %} - see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot)
-* {% data variables.copilot.copilot_gemini_flash %} - see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot)
+* {% data variables.copilot.copilot_gemini %} - see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-in-github-copilot)
 
 {% data reusables.user-settings.copilot-settings %}
 1. To the right of the model name, select the dropdown menu, then click **Enabled** or **Disabled**.
```
**`content/copilot/managing-copilot/managing-copilot-for-your-enterprise/managing-policies-and-features-for-copilot-in-your-enterprise.md`** (+1 −1)

```diff
@@ -93,7 +93,7 @@ Some features of {% data variables.product.prodname_copilot_short %} are availab
 By default, {% data variables.product.prodname_copilot_chat_short %} uses a base model. If you grant access to the alternative models, members of your enterprise can choose to use these models rather than the base model. The available alternative models are:
 
 * **{% data variables.copilot.copilot_claude_sonnet %}**. See [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot).
-* **{% data variables.copilot.copilot_gemini_flash %}**. See [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot).
+* **{% data variables.copilot.copilot_gemini %}**. See [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-in-github-copilot).
 * **OpenAI's models:**
   * **o1**: This model is focused on advanced reasoning and solving complex problems, in particular in math and science. It responds more slowly than the GPT-4o model. Each member of your enterprise can make 10 requests to this model per day.
   * **o3-mini**: This is the next generation of reasoning models, following from o1 and o1-mini. The o3-mini model outperforms o1 on coding benchmarks with response times that are comparable to o1-mini, providing improved quality at nearly the same latency. It is best suited for code generation and small context operations. Each member of your enterprise can make 50 requests to this model every 12 hours. {% ifversion copilot-enterprise %}
```
**`content/copilot/managing-copilot/managing-github-copilot-in-your-organization/managing-policies-for-copilot-in-your-organization.md`** (+1 −1)

```diff
@@ -37,7 +37,7 @@ Organization owners can set policies to govern how {% data variables.product.pro
 * Suggestions matching public code
 * Access to alternative models for {% data variables.product.prodname_copilot_short %}
   * Anthropic {% data variables.copilot.copilot_claude_sonnet %} in {% data variables.product.prodname_copilot_short %}
-  * Google {% data variables.copilot.copilot_gemini_flash %} in {% data variables.product.prodname_copilot_short %}
+  * Google {% data variables.copilot.copilot_gemini %} in {% data variables.product.prodname_copilot_short %}
   * OpenAI o1 and o3 models in {% data variables.product.prodname_copilot_short %}
 
 The policy settings selected by an organization owner determine the behavior of {% data variables.product.prodname_copilot %} for all organization members that have been granted access to {% data variables.product.prodname_copilot_short %} through the organization.
```
**`content/copilot/managing-copilot/monitoring-usage-and-entitlements/about-premium-requests.md`** (+11 −10)

```diff
@@ -39,16 +39,17 @@ The following {% data variables.product.prodname_copilot_short %} features can u
 
 Each model has a premium request multiplier, based on its complexity and resource usage. Your premium request allowance is deducted according to this multiplier.
```
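The deduction rule above is simple arithmetic: each request consumes the model's multiplier from the monthly allowance. A minimal sketch, assuming placeholder model names and multiplier values (not GitHub's published rates):

```python
# Hypothetical multipliers; real values are published in the
# about-premium-requests article and vary by model.
MULTIPLIERS = {
    "base-model": 0.0,       # assumption: base model consumes no premium requests
    "fast-model": 0.25,
    "reasoning-model": 1.0,
}

def deduct(allowance: float, model: str, requests: int = 1) -> float:
    """Return the remaining allowance after `requests` uses of `model`."""
    return allowance - requests * MULTIPLIERS[model]

remaining = deduct(300, "fast-model", requests=4)  # 300 - 4 * 0.25 = 299.0
```

So a model with a 0.25 multiplier consumes a quarter of a premium request per use, while a 1.0-multiplier model consumes one full premium request per use.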
**`content/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-chat.md`** (+3 −2)

```diff
@@ -32,14 +32,15 @@ The following models are currently available in the immersive mode of {% data va
 * {% data variables.copilot.copilot_claude_sonnet_35 %}
 * {% data variables.copilot.copilot_claude_sonnet_37 %}
 * {% data variables.copilot.copilot_gemini_flash %}
+* {% data variables.copilot.copilot_gemini_25_pro %}
 * {% data variables.copilot.copilot_gpt_o1 %}
 * {% data variables.copilot.copilot_gpt_o3_mini %}
 
 For more information about these models, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task).
 
 ### Limitations of AI models for {% data variables.product.prodname_copilot_chat_short %}
 
-* If you want to use the skills listed in the table above{% ifversion ghec %}, or knowledge bases{% endif %}, on the {% data variables.product.github %} website, only the GPT-4o, {% data variables.copilot.copilot_claude_sonnet %}, and {% data variables.copilot.copilot_gemini_flash %} models are supported.
+* If you want to use the skills listed in the table above{% ifversion ghec %}, or knowledge bases{% endif %}, on the {% data variables.product.github %} website, only the GPT-4o, {% data variables.copilot.copilot_claude_sonnet %}, and {% data variables.copilot.copilot_gemini %} models are supported.
 * Experimental pre-release versions of the models may not interact with all filters correctly, including the duplication detection filter.
 
 ## Changing your AI model
@@ -229,5 +230,5 @@ To use multi-model {% data variables.product.prodname_copilot_chat_short %}, you
```
**`content/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task.md`** (+41 −3)

```diff
@@ -29,7 +29,7 @@ You can click a model name in the list below to jump to a detailed overview of i
 * [{% data variables.copilot.copilot_claude_sonnet_35 %}](#claude-35-sonnet)
 * [{% data variables.copilot.copilot_claude_sonnet_37 %}](#claude-37-sonnet)
 * [{% data variables.copilot.copilot_gemini_flash %}](#gemini-20-flash)
-
+* [{% data variables.copilot.copilot_gemini_25_pro %}](#gemini-25-pro)
 > [!NOTE] Different models have different premium request multipliers, which can affect how much of your monthly usage allowance is consumed. For details, see [AUTOTITLE](/copilot/managing-copilot/monitoring-usage-and-entitlements/about-premium-requests).
 
 ## GPT-4o
@@ -278,8 +278,8 @@ The following table summarizes when an alternative model may be a better choice:
 
 {% data variables.copilot.copilot_gemini_flash %} is Google’s high-speed, multimodal model optimized for real-time, interactive applications that benefit from visual input and agentic reasoning. In {% data variables.product.prodname_copilot_chat_short %}, {% data variables.copilot.copilot_gemini_flash %} enables fast responses and cross-modal understanding.
 
-For more information about {% data variables.copilot.copilot_gemini_flash %}, see [Google's documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/gemini-v2).
-For more information on using Gemini in {% data variables.product.prodname_copilot_short %}, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot).
+For more information about {% data variables.copilot.copilot_gemini_flash %}, see [Google's documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-0-flash).
+For more information on using {% data variables.copilot.copilot_gemini %} in {% data variables.product.prodname_copilot_short %}, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-in-github-copilot).
 
 ### Use cases
 
@@ -314,6 +314,44 @@ The following table summarizes when an alternative model may be a better choice:
 {% endrowheaders %}
 
+## {% data variables.copilot.copilot_gemini_25_pro %}
+
+{% data variables.copilot.copilot_gemini_25_pro %} is Google's latest AI model, designed to handle complex tasks with advanced reasoning and coding capabilities. It also works well for heavy research workflows that require long-context understanding and analysis.
+
+For more information about {% data variables.copilot.copilot_gemini_25_pro %}, see [Google's documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-pro).
+For more information on using {% data variables.copilot.copilot_gemini %} in {% data variables.product.prodname_copilot_short %}, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-in-github-copilot).
+
+### Use cases
+
+{% data reusables.copilot.model-use-cases.gemini-25-pro %}
+
+### Strengths
+
+The following table summarizes the strengths of {% data variables.copilot.copilot_gemini_25_pro %}:
+
+{% rowheaders %}
+
+| Task | Description | Why {% data variables.copilot.copilot_gemini_25_pro %} is a good fit |
+|------|-------------|----------------------------------------------------------------------|
+| Complex code generation | Write full functions, classes, or multi-file logic. | Provides better structure, consistency, and fewer logic errors. |
+| Debugging complex systems | Isolate and fix performance bottlenecks or multi-file issues. | Provides step-by-step analysis and high reasoning accuracy. |
+| Scientific research | Analyze data and generate insights across scientific disciplines. | Supports complex analysis with heavy research capabilities. |
+| Cost-sensitive scenarios | Tasks where performance-to-cost ratio matters. | o3-mini or {% data variables.copilot.copilot_gemini_flash %} are more cost-effective for basic use cases. |
```