docs/source/introduction/overview.rst (+70 -9)

@@ -30,37 +30,93 @@ ScrapGraphAI supports a wide range of AI models from various providers. Each mod
 OpenAI Models
 -------------
 - GPT-3.5 Turbo (16,385 tokens)
-- GPT-4 (8,192 tokens)
+- GPT-3.5 (4,096 tokens)
+- GPT-3.5 Turbo Instruct (4,096 tokens)
 - GPT-4 Turbo Preview (128,000 tokens)
-- GPT-4o (128000 tokens)
-- GTP-4o-mini (128000 tokens)
+- GPT-4 Vision Preview (128,000 tokens)
+- GPT-4 (8,192 tokens)
+- GPT-4 32k (32,768 tokens)
+- GPT-4o (128,000 tokens)
+- O1 Preview (128,000 tokens)
+- O1 Mini (128,000 tokens)
 
 Azure OpenAI Models
 -------------------
 - GPT-3.5 Turbo (16,385 tokens)
-- GPT-4 (8,192 tokens)
+- GPT-3.5 (4,096 tokens)
 - GPT-4 Turbo Preview (128,000 tokens)
-- GPT-4o (128000 tokens)
-- GTP-4o-mini (128000 tokens)
+- GPT-4 (8,192 tokens)
+- GPT-4 32k (32,768 tokens)
+- GPT-4o (128,000 tokens)
+- O1 Preview (128,000 tokens)
+- O1 Mini (128,000 tokens)
 
 Google AI Models
 ----------------
 - Gemini Pro (128,000 tokens)
+- Gemini 1.5 Flash (128,000 tokens)
 - Gemini 1.5 Pro (128,000 tokens)
+- Gemini 1.0 Pro (128,000 tokens)
 
 Anthropic Models
 ----------------
 - Claude Instant (100,000 tokens)
-- Claude 2 (200,000 tokens)
+- Claude 2 (9,000 tokens)
+- Claude 2.1 (200,000 tokens)
 - Claude 3 (200,000 tokens)
+- Claude 3.5 (200,000 tokens)
+- Claude 3 Opus (200,000 tokens)
+- Claude 3 Sonnet (200,000 tokens)
+- Claude 3 Haiku (200,000 tokens)
 
 Mistral AI Models
 -----------------
-- Mistral Large (128,000 tokens)
+- Mistral Large Latest (128,000 tokens)
+- Open Mistral Nemo (128,000 tokens)
+- Codestral Latest (32,000 tokens)
 - Open Mistral 7B (32,000 tokens)
 - Open Mixtral 8x7B (32,000 tokens)
+- Open Mixtral 8x22B (64,000 tokens)
+- Open Codestral Mamba (256,000 tokens)
 
-For a complete list of supported models and their token limits, please refer to the API documentation.
+Ollama Models
+-------------
+- Command-R (12,800 tokens)
+- CodeLlama (16,000 tokens)
+- DBRX (32,768 tokens)
+- DeepSeek Coder 33B (16,000 tokens)
+- Llama2 Series (4,096 tokens)
+- Llama3 Series (8,192-128,000 tokens)
+- Mistral Models (32,000-128,000 tokens)
+- Mixtral 8x22B Instruct (65,536 tokens)
+- Phi3 Series (12,800-128,000 tokens)
+- Qwen Series (32,000 tokens)
+
+Hugging Face Models
+-------------------
+- Grok-1 (8,192 tokens)
+- Meta Llama 3 Series (8,192 tokens)
+- Google Gemma Series (8,192 tokens)
+- Microsoft Phi Series (2,048-131,072 tokens)
+- GPT-2 Series (1,024 tokens)
+- DeepSeek V2 Series (131,072 tokens)
+
+Bedrock Models
+--------------
+- Claude 3 Series (200,000 tokens)
+- Llama2 & Llama3 Series (4,096-8,192 tokens)
+- Mistral Series (32,768 tokens)
+- Titan Embed Text (8,000 tokens)
+- Cohere Embed (512 tokens)
+
+Fireworks Models
+----------------
+- Llama V2 7B (4,096 tokens)
+- Mixtral 8x7B Instruct (4,096 tokens)
+- Llama 3.1 Series (131,072 tokens)
+- Mixtral MoE Series (65,536 tokens)
+
+For a complete and up-to-date list of supported models and their token limits, please refer to the API documentation.
 
 Understanding token limits is crucial for optimizing your scraping tasks. Larger token limits allow for processing more text in a single API call, which can be beneficial for scraping lengthy web pages or documents.
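As a rough illustration of how the token limits above feed into a scraping graph, here is a minimal sketch that configures a ``SmartScraperGraph`` once with a hosted OpenAI model and once with a local Ollama model. It assumes the ``graph_config`` keys shown in the project README (``model``, ``api_key``, ``base_url``, ``model_tokens``); the exact model-name strings, defaults, and available keys can vary between ScrapGraphAI releases, and the API key, prompt, and URL below are placeholders.

.. code-block:: python

   # Sketch only: config keys and model names follow the ScrapGraphAI README
   # and may vary between library versions.
   from scrapegraphai.graphs import SmartScraperGraph

   # Hosted model: GPT-4o advertises a 128,000-token context window (see the
   # list above), so a long page can often be handled in a single call.
   openai_config = {
       "llm": {
           "api_key": "YOUR_OPENAI_API_KEY",  # placeholder
           "model": "openai/gpt-4o",
       },
       "verbose": True,
   }

   # Local model via Ollama: a smaller context window (e.g. 8,192 tokens for a
   # Llama3 8B build) means long pages are split into more chunks; model_tokens
   # overrides the context size the library assumes for the model.
   ollama_config = {
       "llm": {
           "model": "ollama/llama3",
           "base_url": "http://localhost:11434",  # default local Ollama endpoint
           "model_tokens": 8192,
       },
       "verbose": True,
   }

   graph = SmartScraperGraph(
       prompt="List the article titles on this page",  # illustrative prompt
       source="https://example.com",                   # placeholder URL
       config=openai_config,                           # or ollama_config
   )
   print(graph.run())

The practical upshot of the limits listed above: the larger the model's context window, the fewer chunks a page has to be split into before it is passed to the model.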