
Commit 53d1b0b

sunbrye, Copilot, thispaul, and am-stead authored
Copilot Chat users can use the Gemini 2.5 Pro model (Public Preview) (#55242)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Paul Loeb <90000203+thispaul@users.noreply.github.com>
Co-authored-by: Anne-Marie <102995847+am-stead@users.noreply.github.com>
1 parent c2d9ca8 · commit 53d1b0b


13 files changed: +129 -79 lines


content/copilot/managing-copilot/managing-copilot-as-an-individual-subscriber/managing-your-copilot-plan/managing-copilot-policies-as-an-individual-subscriber.md

+1 -1

@@ -54,7 +54,7 @@ You can choose whether your prompts and {% data variables.product.prodname_copil
 You can choose whether to allow the following AI models to be used as an alternative to {% data variables.product.prodname_copilot_short %}'s default model.

 * {% data variables.copilot.copilot_claude_sonnet %} - see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot)
-* {% data variables.copilot.copilot_gemini_flash %} - see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot)
+* {% data variables.copilot.copilot_gemini %} - see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-in-github-copilot)

 {% data reusables.user-settings.copilot-settings %}
 1. To the right of the model name, select the dropdown menu, then click **Enabled** or **Disabled**.

content/copilot/managing-copilot/managing-copilot-for-your-enterprise/managing-policies-and-features-for-copilot-in-your-enterprise.md

+1 -1

@@ -93,7 +93,7 @@ Some features of {% data variables.product.prodname_copilot_short %} are availab
 By default, {% data variables.product.prodname_copilot_chat_short %} uses a base model. If you grant access to the alternative models, members of your enterprise can choose to use these models rather than the base model. The available alternative models are:

 * **{% data variables.copilot.copilot_claude_sonnet %}**. See [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot).
-* **{% data variables.copilot.copilot_gemini_flash %}**. See [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot).
+* **{% data variables.copilot.copilot_gemini %}**. See [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-in-github-copilot).
 * **OpenAI's models:**
 * **o1**: This model is focused on advanced reasoning and solving complex problems, in particular in math and science. It responds more slowly than the GPT-4o model. Each member of your enterprise can make 10 requests to this model per day.
 * **o3-mini**: This is the next generation of reasoning models, following from o1 and o1-mini. The o3-mini model outperforms o1 on coding benchmarks with response times that are comparable to o1-mini, providing improved quality at nearly the same latency. It is best suited for code generation and small context operations. Each member of your enterprise can make 50 requests to this model every 12 hours. {% ifversion copilot-enterprise %}

content/copilot/managing-copilot/managing-github-copilot-in-your-organization/managing-policies-for-copilot-in-your-organization.md

+1 -1

@@ -37,7 +37,7 @@ Organization owners can set policies to govern how {% data variables.product.pro
 * Suggestions matching public code
 * Access to alternative models for {% data variables.product.prodname_copilot_short %}
 * Anthropic {% data variables.copilot.copilot_claude_sonnet %} in {% data variables.product.prodname_copilot_short %}
-* Google {% data variables.copilot.copilot_gemini_flash %} in {% data variables.product.prodname_copilot_short %}
+* Google {% data variables.copilot.copilot_gemini %} in {% data variables.product.prodname_copilot_short %}
 * OpenAI o1 and o3 models in {% data variables.product.prodname_copilot_short %}

 The policy settings selected by an organization owner determine the behavior of {% data variables.product.prodname_copilot %} for all organization members that have been granted access to {% data variables.product.prodname_copilot_short %} through the organization.

content/copilot/managing-copilot/monitoring-usage-and-entitlements/about-premium-requests.md

+11 -10

@@ -39,16 +39,17 @@ The following {% data variables.product.prodname_copilot_short %} features can u

 Each model has a premium request multiplier, based on its complexity and resource usage. Your premium request allowance is deducted according to this multiplier.

-| Model | Premium requests |
-|--------------------------------------|------------|
-| Base model (GPT-4o)[^1] | 0 (paid users), 1 ({% data variables.product.prodname_copilot_free_short %}) |
-| {% data variables.copilot.copilot_claude_sonnet_35 %} | 1 |
-| {% data variables.copilot.copilot_claude_sonnet_37 %} | 1 |
-| {% data variables.copilot.copilot_claude_sonnet_37 %} Thinking | 1.25 |
-| {% data variables.copilot.copilot_gemini_flash %} | 0.25 |
-| GPT-4.5 | 50 |
-| o1 | 10 |
-| o3-mini | 0.33 |
+| Model | Premium requests |
+|----------------------------------------------------------------|------------------------------------------------------------------------------|
+| Base model (GPT-4o)[^1] | 0 (paid users), 1 ({% data variables.product.prodname_copilot_free_short %}) |
+| {% data variables.copilot.copilot_claude_sonnet_35 %} | 1 |
+| {% data variables.copilot.copilot_claude_sonnet_37 %} | 1 |
+| {% data variables.copilot.copilot_claude_sonnet_37 %} Thinking | 1.25 |
+| {% data variables.copilot.copilot_gemini_flash %} | 0.25 |
+| {% data variables.copilot.copilot_gemini_25_pro %} | 1 |
+| GPT-4.5 | 50 |
+| o1 | 10 |
+| o3-mini | 0.33 |

 [^1]: Response times may vary during periods of high usage.
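To make the multiplier arithmetic in the table above concrete, here is a minimal Python sketch (the model keys and the helper function are hypothetical illustrations mirroring the table, not how GitHub actually meters usage) of how a multiplier scales the premium requests deducted from an allowance:

```python
# Illustrative premium request multipliers (hypothetical keys/values mirroring
# the table above; not GitHub's implementation).
MULTIPLIERS = {
    "base-gpt-4o": 0,        # 0 for paid plans; 1 on Copilot Free
    "claude-3.7-sonnet-thinking": 1.25,
    "gemini-2.0-flash": 0.25,
    "gemini-2.5-pro": 1,
    "gpt-4.5": 50,
    "o1": 10,
    "o3-mini": 0.33,
}


def premium_requests_used(model: str, requests: int) -> float:
    """Premium requests deducted for `requests` interactions with `model`."""
    return MULTIPLIERS[model] * requests


# Example: 8 chat requests to Gemini 2.5 Pro consume 8 premium requests,
# while 8 requests to Gemini 2.0 Flash consume only 2.
print(premium_requests_used("gemini-2.5-pro", 8))    # 8
print(premium_requests_used("gemini-2.0-flash", 8))  # 2.0
```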

content/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-chat.md

+3 -2

@@ -32,14 +32,15 @@ The following models are currently available in the immersive mode of {% data va
 * {% data variables.copilot.copilot_claude_sonnet_35 %}
 * {% data variables.copilot.copilot_claude_sonnet_37 %}
 * {% data variables.copilot.copilot_gemini_flash %}
+* {% data variables.copilot.copilot_gemini_25_pro %}
 * {% data variables.copilot.copilot_gpt_o1 %}
 * {% data variables.copilot.copilot_gpt_o3_mini %}

 For more information about these models, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task).

 ### Limitations of AI models for {% data variables.product.prodname_copilot_chat_short %}

-* If you want to use the skills listed in the table above{% ifversion ghec %}, or knowledge bases{% endif %}, on the {% data variables.product.github %} website, only the GPT-4o, {% data variables.copilot.copilot_claude_sonnet %}, and {% data variables.copilot.copilot_gemini_flash %} models are supported.
+* If you want to use the skills listed in the table above{% ifversion ghec %}, or knowledge bases{% endif %}, on the {% data variables.product.github %} website, only the GPT-4o, {% data variables.copilot.copilot_claude_sonnet %}, and {% data variables.copilot.copilot_gemini %} models are supported.
 * Experimental pre-release versions of the models may not interact with all filters correctly, including the duplication detection filter.

 ## Changing your AI model

@@ -229,5 +230,5 @@ To use multi-model {% data variables.product.prodname_copilot_chat_short %}, you

 * [AUTOTITLE](/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-code-completion)
 * [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot)
-* [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot)
+* [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-in-github-copilot)
 * [AUTOTITLE](/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task)

content/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task.md

+41 -3

@@ -29,7 +29,7 @@ You can click a model name in the list below to jump to a detailed overview of i
 * [{% data variables.copilot.copilot_claude_sonnet_35 %}](#claude-35-sonnet)
 * [{% data variables.copilot.copilot_claude_sonnet_37 %}](#claude-37-sonnet)
 * [{% data variables.copilot.copilot_gemini_flash %}](#gemini-20-flash)
-
+* [{% data variables.copilot.copilot_gemini_25_pro %}](#gemini-25-pro)
 > [!NOTE] Different models have different premium request multipliers, which can affect how much of your monthly usage allowance is consumed. For details, see [AUTOTITLE](/copilot/managing-copilot/monitoring-usage-and-entitlements/about-premium-requests).

 ## GPT-4o

@@ -278,8 +278,8 @@ The following table summarizes when an alternative model may be a better choice:

 {% data variables.copilot.copilot_gemini_flash %} is Google’s high-speed, multimodal model optimized for real-time, interactive applications that benefit from visual input and agentic reasoning. In {% data variables.product.prodname_copilot_chat_short %}, {% data variables.copilot.copilot_gemini_flash %} enables fast responses and cross-modal understanding.

-For more information about {% data variables.copilot.copilot_gemini_flash %}, see [Google's documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/gemini-v2).
-For more information on using Gemini in {% data variables.product.prodname_copilot_short %}, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot).
+For more information about {% data variables.copilot.copilot_gemini_flash %}, see [Google's documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-0-flash).
+For more information on using {% data variables.copilot.copilot_gemini %} in {% data variables.product.prodname_copilot_short %}, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-in-github-copilot).

 ### Use cases

@@ -314,6 +314,44 @@ The following table summarizes when an alternative model may be a better choice:

 {% endrowheaders %}

+## {% data variables.copilot.copilot_gemini_25_pro %}
+
+{% data variables.copilot.copilot_gemini_25_pro %} is Google's latest AI model, designed to handle complex tasks with advanced reasoning and coding capabilities. It also works well for heavy research workflows that require long-context understanding and analysis.
+
+For more information about {% data variables.copilot.copilot_gemini_25_pro %}, see [Google's documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-pro).
+For more information on using {% data variables.copilot.copilot_gemini %} in {% data variables.product.prodname_copilot_short %}, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-in-github-copilot).
+
+### Use cases
+
+{% data reusables.copilot.model-use-cases.gemini-25-pro %}
+
+### Strengths
+
+The following table summarizes the strengths of {% data variables.copilot.copilot_gemini_25_pro %}:
+
+{% rowheaders %}
+
+| Task | Description | Why {% data variables.copilot.copilot_gemini_25_pro %} is a good fit |
+|---------------------------|--------------------------------------------------------------------|------------------------------------------------------------------------|
+| Complex code generation | Write full functions, classes, or multi-file logic. | Provides better structure, consistency, and fewer logic errors. |
+| Debugging complex systems | Isolate and fix performance bottlenecks or multi-file issues. | Provides step-by-step analysis and high reasoning accuracy. |
+| Scientific research | Analyze data and generate insights across scientific disciplines. | Supports complex analysis with heavy researching capabilities. |
+| Long-context processing | Analyze extensive documents, datasets, or codebases. | Handles long-context inputs effectively. |
+
+{% endrowheaders %}
+
+### Alternative options
+
+The following table summarizes when an alternative model may be a better choice:
+
+{% rowheaders %}
+
+| Task | Description | Why another model may be better |
+|---------------------------|----------------------------------------------------|------------------------------------------------------------------------------------------------------------|
+| Cost-sensitive scenarios | Tasks where performance-to-cost ratio matters. | o3-mini or {% data variables.copilot.copilot_gemini_flash %} are more cost-effective for basic use cases. |
+
+{% endrowheaders %}
+
 ## Further reading

 * [AUTOTITLE](/copilot/using-github-copilot/ai-models/examples-for-ai-model-comparison)

content/copilot/using-github-copilot/ai-models/index.md

+1 -1

@@ -10,7 +10,7 @@ children:
 - /changing-the-ai-model-for-copilot-chat
 - /changing-the-ai-model-for-copilot-code-completion
 - /using-claude-sonnet-in-github-copilot
-- /using-gemini-flash-in-github-copilot
+- /using-gemini-in-github-copilot
 - /choosing-the-right-ai-model-for-your-task
 - /comparing-ai-models-using-different-tasks
 ---

content/copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot.md

-49
This file was deleted.
