Model HQ

Intel Supported Models

Comprehensive list of 228 AI models optimized for Intel processors using the OpenVINO runtime

Intel Optimization Features
Key benefits of Intel-optimized models

Performance Benefits

  • Optimized for Intel CPU, GPU, and NPU architectures
  • Enhanced inference speed with the OpenVINO runtime
  • Reduced memory footprint and power consumption
  • Hardware-specific optimizations for the latest Intel chips

Supported Hardware

  • Intel Core processors (Meteor Lake, Lunar Lake, Arrow Lake)
  • Intel Xeon processors
  • Intel Arc GPUs
  • Intel Neural Processing Units (NPUs)
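
A quick way to see which of these device types OpenVINO can reach on a given machine is to query its runtime directly. This is a minimal sketch assuming the `openvino` Python package is installed; it is not a Model HQ API.

```python
# Sketch: enumerate the inference devices OpenVINO can see on this machine.
# Assumes the `openvino` package; falls back to an empty list when absent.
try:
    from openvino import Core
    devices = Core().available_devices  # e.g. ['CPU', 'GPU', 'NPU']
except ImportError:
    devices = []
print("OpenVINO devices:", devices)
```

On a machine with an Intel NPU, `NPU` appears in the list alongside `CPU` (and `GPU` for integrated or Arc graphics).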
Complete catalog of 228 models optimized for Intel processors, grouped by model family. Each model is listed with its parameter count in billions (NA where a parameter count does not apply).

Embedding Models
industry-bert-contracts-ov (NA)
industry-bert-insurance-ov (NA)
industry-bert-asset-management-ov (NA)
industry-bert-sec-ov (NA)
industry-bert-loans-ov (NA)
all-mini-lm-l6-v2-ov (NA)
all-mpnet-base-v2-ov (NA)
paraphrase-multilingual-MiniLM-L12-v2-ov (NA)
gte-small-ov (NA)
gte-base-ov (NA)
gte-large-ov (NA)
bge-small-en-v1.5-ov (NA)
bge-base-en-v1.5-ov (NA)
bge-large-en-v1.5-ov (NA)
protectai-prompt-injection-ov (NA)
malicious-url-detector-ov (NA)
xlm-roberta-language-detector-ov (NA)
valurank-bias-ov (NA)
unitary-toxic-roberta-ov (NA)
jina-reranker-v1-tiny-en-ov (NA)
jina-reranker-v1-turbo-en-ov (NA)
jina-reranker-tiny-onnx (NA)
jina-reranker-turbo-onnx (NA)
protectai-prompt-injection-onnx (NA)
valurank-bias-onnx (NA)
unitary-toxic-roberta-onnx (NA)

Qwen Models
qwen2-vl-2b-instruct-ov (2B)
qwen2-vl-7b-instruct-ov (7B)
bling-qwen-500m-ov (0.5B)
bling-qwen-1.5b-ov (1.5B)
dragon-qwen-7b-ov (7B)
slim-extract-qwen-0.5b-ov (0.5B)
slim-extract-qwen-1.5b-ov (1.5B)
qwen2-7b-instruct-ov (7B)
qwen2-1.5b-instruct-ov (1.5B)
qwen2-0.5b-chat-ov (0.5B)
qwen2.5-1.5b-instruct-ov (1.5B)
qwen3-8b-ov (8B)
qwen3-1.7b-ov (1.7B)
qwen3-4b-ov (4B)
qwen3-14b-ov (14B)
qwen2.5-0.5b-instruct-ov (0.5B)
qwen2.5-3b-instruct-ov (3B)
qwen2.5-14b-instruct-ov (14B)
qwen2.5-32b-instruct-ov (32B)
qwen2.5-72b-instruct-ov (72B)
qwen2.5-coder-7b-instruct-ov (7B)
bling-qwen-mini-tool (1.5B)
bling-qwen-0.5b-gguf (0.5B)
dragon-qwen-7b-gguf (7B)
qwen2-7B-instruct-gguf (7B)
qwen3-1.7b-gguf (1.7B)
qwen3-4b-instruct-gguf (4B)
qwen3-8b-gguf (8B)
qwen3-14b-gguf (14B)
qwen2-1.5b-instruct-gguf (1.5B)
qwen2-0.5b-instruct-gguf (0.5B)
slim-extract-qwen-1.5b-gguf (1.5B)
slim-extract-qwen-nano-gguf (0.5B)
qwen-2.5-7b-coder-gguf (7B)
qwen-2.5-14b-instruct-gguf (14B)
deepseek-qwen-14b-gguf (14B)
deepseek-qwen-7b-gguf (7B)
qwen2.5-32b-gguf (32B)

Llama-Based Models
llama-11b-vision-instruct-ov (11B)
bling-tiny-llama-ov (1.1B)
bling-tiny-llama-npu-ov (1.1B)
dragon-llama2-ov (7B)
llama-2-chat-ov (7B)
llama-2-13b-chat-ov (13B)
tiny-llama-chat-ov (1.1B)
llama-3.1-instruct-ov (8B)
llama-3.1-8b-instruct-npu-ov (8B)
nvidia-llama3-chatqa-1.5-8b-ov (8B)
dolphin-2.9.4-llama3.1-8b-ov (8B)
llama-3.2-3b-instruct-ov (3B)
llama-3.2-3b-instruct-npu-ov (3B)
llama-3.2-1b-instruct-ov (1.1B)
llama-3.2-1b-instruct-npu-ov (1.1B)
bling-tiny-llama-onnx (1.1B)
llama-3.2-3b-onnx-qnn (3B)
llama-2-chat-onnx (7B)
llama-3.1-instruct-onnx (8B)
llama-3.2-1b-instruct-onnx (1.3B)
llama-3.2-3b-instruct-onnx (3B)
dragon-llama-3.1-gguf (8B)
dragon-llama-answer-tool (7B)
llama-3.1-instruct-gguf (8B)
llama-2-7b-chat-gguf (7B)
llama-3-8b-instruct-gguf (8B)
tiny-llama-chat-gguf (1.1B)
llama-3.2-1b-instruct-gguf (1.3B)
llama-3.2-3b-instruct-gguf (3B)

Phi Models
bling-phi-3-ov (3.8B)
slim-xsum-phi-3-ov (3.8B)
slim-boolean-phi-3-ov (3.8B)
slim-sa-ner-phi-3-ov (3.8B)
slim-summary-phi-3-ov (3.8B)
slim-sql-phi-3-ov (3.8B)
slim-extract-phi-3-ov (3.8B)
phi-3-ov (3.8B)
phi-3-npu-ov (3.8B)
phi-4-ov (14B)
phi-4-mini-ov (4B)
phi-4-mini-npu-ov (4B)
phi-4-npu-ov (14B)
bling-phi-3-onnx (3.8B)
phi-3-onnx (3.8B)
phi-3.5-onnx-qnn (NA)
phi-3-vision-onnx (3.8B)
slim-summary-phi-3-onnx (3.8B)
slim-extract-phi-3-onnx (3.8B)
slim-boolean-phi-3-onnx (3.8B)
bling-phi-3-gguf (3.8B)
bling-phi-3.5-gguf (3.8B)
phi-3.5-gguf (3.8B)
phi-4-gguf (14B)
phi-4-mini-gguf (4B)
phi-4-mini-reasoning-gguf (4B)
phi-3-gguf (3.8B)
slim-extract-phi-3-gguf (3.8B)
slim-xsum-phi-3-gguf (3.8B)
slim-boolean-phi-3-gguf (3.8B)
slim-sa-ner-phi-3-gguf (3.8B)
slim-q-gen-phi-3-tool (3.8B)
slim-qa-gen-phi-3-tool (3.8B)

Mistral Models
dragon-mistral-ov (7.3B)
dragon-mistral-0.3-ov (7.3B)
mistral-7b-instruct-v0.3-ov (7.3B)
mistral-7b-v0.3-npu-ov (7.3B)
mistral-small-instruct-2409-ov (22B)
mistral-nemo-instruct-2407-ov (12B)
mistral-7b-instruct-v0.2-ov (7.3B)
zephyr-mistral-7b-chat-ov (7.3B)
teknium-open-hermes-2.5-mistral-ov (7.3B)
dolphin-2.9.3-mistral-7b-32k-ov (7.3B)
dragon-mistral-0.3-onnx (7.3B)
mistral-7b-instruct-v0.3-onnx (7.3B)
dragon-mistral-0.3-gguf (7.3B)
mistral-3.2-24b-gguf (24B)
openhermes-2.5-mistral-7b-gguf (7.3B)
zephyr-7b-beta-gguf (7.3B)
starling-lm-7b-alpha-gguf (7B)
dragon-mistral-answer-tool (7.3B)
mistral-7b-instruct-v0.3-gguf (7.3B)

Yi Models
dragon-yi-6b-ov (5.8B)
dragon-yi-9b-ov (8.8B)
yi-9b-chat-ov (8.8B)
yi-9b-npu-ov (8.8B)
yi-6b-1.5v-chat-ov (5.8B)
dragon-yi-9b-gguf (8.8B)
dragon-yi-answer-tool (5.8B)

DRAGON Models
dragon-qwen-7b-ov (7B)
dragon-llama2-ov (7B)
dragon-mistral-ov (7.3B)
dragon-mistral-0.3-ov (7.3B)
dragon-yi-6b-ov (5.8B)
dragon-yi-9b-ov (8.8B)
dragon-mistral-0.3-onnx (7.3B)
dragon-llama-3.1-gguf (8B)
dragon-mistral-0.3-gguf (7.3B)
dragon-yi-9b-gguf (8.8B)
dragon-qwen-7b-gguf (7B)
dragon-yi-answer-tool (5.8B)
dragon-llama-answer-tool (7B)
dragon-mistral-answer-tool (7.3B)

Slim Models
slim-sentiment-ov (1.1B)
slim-sentiment-npu-ov (1.1B)
slim-xsum-phi-3-ov (3.8B)
slim-boolean-phi-3-ov (3.8B)
slim-sa-ner-phi-3-ov (3.8B)
slim-summary-phi-3-ov (3.8B)
slim-sql-qwen-base-ov (2B)
slim-sql-phi-3-ov (3.8B)
slim-extract-phi-3-ov (3.8B)
slim-extract-tiny-ov (1.1B)
slim-extract-tiny-npu-ov (1.1B)
slim-extract-qwen-0.5b-ov (0.5B)
slim-extract-qwen-1.5b-ov (1.5B)
slim-summary-tiny-ov (1.1B)
slim-summary-tiny-npu-ov (1.1B)
slim-sql-ov (1.1B)
slim-sql-npu-ov (1.1B)
slim-emotions-ov (1.1B)
slim-emotions-npu-ov (1.1B)
slim-topics-ov (1.1B)
slim-topics-npu-ov (1.1B)
slim-ner-ov (1.1B)
slim-ner-npu-ov (1.1B)
slim-intent-ov (1.1B)
slim-category-ov (1.1B)
slim-intent-npu-ov (1.1B)
slim-tags-ov (1.1B)
slim-tags-npu-ov (1.1B)
slim-ratings-ov (1.1B)
slim-ratings-npu-ov (1.1B)
slim-q-gen-tiny-ov (1.1B)
slim-qa-gen-tiny-ov (1.1B)
slim-sentiment-onnx (1.1B)
slim-extract-tiny-onnx (1.1B)
slim-summary-tiny-onnx (1.1B)
slim-sql-onnx (1.1B)
slim-emotions-onnx (1.1B)
slim-topics-onnx (1.1B)
slim-ner-onnx (1.1B)
slim-intent-onnx (1.1B)
slim-tags-onnx (1.1B)
slim-ratings-onnx (1.1B)
slim-summary-phi-3-onnx (3.8B)
slim-extract-phi-3-onnx (3.8B)
slim-boolean-phi-3-onnx (3.8B)
slim-ner-tool (1.1B)
slim-sentiment-tool (1.1B)
slim-emotions-tool (1.1B)
slim-ratings-tool (1.1B)
slim-intent-tool (1.1B)
slim-nli-tool (1.1B)
slim-topics-tool (1.1B)
slim-tags-tool (1.1B)
slim-sql-tool (1.1B)
bling-answer-tool (1.1B)
slim-category-tool (1.1B)
slim-xsum-tool (1.1B)
slim-extract-tool (2.8B)
slim-extract-phi-3-gguf (3.8B)
slim-extract-qwen-1.5b-gguf (1.5B)
slim-extract-qwen-nano-gguf (0.5B)
slim-extract-tiny-tool (1.1B)
slim-summary-tiny-tool (1.1B)
slim-summary-phi-3-gguf (3.8B)
slim-xsum-phi-3-gguf (3.8B)
slim-boolean-tool (2.8B)
slim-boolean-phi-3-gguf (3.8B)
slim-sa-ner-phi-3-gguf (3.8B)
slim-sa-ner-tool (2.8B)
slim-tags-3b-tool (2.8B)
slim-summary-tool (2.8B)
slim-q-gen-phi-3-tool (3.8B)
slim-q-gen-tiny-tool (1.1B)
slim-qa-gen-tiny-tool (1.1B)
slim-qa-gen-phi-3-tool (3.8B)

StableLM Models
stablelm-zephyr-3b-ov (2.8B)
stablelm-2-zephyr-1_6b-ov (1.6B)
stablelm-2-12b-chat-ov (12B)
bling-stablelm-3b-gguf (2.8B)

Gemma Models
gemma-7b-it-ov (7B)
codegemma-7b-it-ov (7B)
gemma-2b-it-ov (2B)
gemma-2b-it-onnx (2B)
gemma-3-4b-gguf (4B)
gemma-3-12b-gguf (12B)
gemma-2-9b-instruct-gguf (9B)
gemma-2-27b-instruct-gguf (27B)

Specialized Models
intel-neural-chat-7b-v3-2-ov (7B)
tiny-dolphin-2.8-1.1b-ov (1.1B)
dreamgen-wizardlm-2-7b-ov (7B)
openchat-3.6-8b-20240522-ov (8B)
mathstral-7b-ov (7.3B)
whisper-cpp-base-english (NA)

Multimodal Models
speech-t5-tts-ov (NA)
lcm-dreamshaper-ov (NA)

Getting Started with Intel Models

To use Intel-optimized models in Model HQ:

  1. Ensure you have an Intel processor with OpenVINO support
  2. Select models with the "-ov" suffix from the Models section
  3. The system will automatically use Intel optimizations when available
  4. Monitor performance improvements in the system metrics
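
Step 2 above amounts to a suffix check on the model name. A minimal sketch of that selection logic (the sample catalog entries here are drawn from the tables above; this is illustrative, not Model HQ's internal code):

```python
# Sketch: pick out Intel/OpenVINO builds ("-ov" suffix) from a model list.
catalog = [
    "bling-tiny-llama-ov",
    "phi-4-mini-ov",
    "llama-3.2-3b-instruct-gguf",
    "slim-sentiment-onnx",
]

# OpenVINO-optimized builds carry the "-ov" suffix.
openvino_models = [name for name in catalog if name.endswith("-ov")]
print(openvino_models)  # -> ['bling-tiny-llama-ov', 'phi-4-mini-ov']
```

The same pattern distinguishes the other build types in the catalog: "-onnx" for ONNX Runtime builds and "-gguf" for GGUF builds.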

Technical Support

For Intel-specific optimization questions or issues, contact our technical support team at support@aibloks.com.

🚀 Performance Tip

For optimal performance on Intel hardware, choose models with the highest parameter count that your system can support. Intel's optimization ensures efficient resource utilization.
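
One way to act on this tip is to estimate memory needs from the parameter counts in the catalog above. The sizing rule below is a rough community rule of thumb (roughly 0.6 GB per billion parameters for 4-bit quantized weights, plus about 1 GB of runtime overhead), not an official Model HQ formula:

```python
# Rough sketch: pick the largest catalog model that fits a RAM budget.
# Assumption (rule of thumb, not a Model HQ spec): ~0.6 GB per billion
# parameters at 4-bit quantization, plus ~1 GB of runtime overhead.
def fits_in_memory(params_billions: float, ram_gb: float) -> bool:
    estimated_gb = params_billions * 0.6 + 1.0
    return estimated_gb <= ram_gb

candidates = {
    "qwen2.5-14b-instruct-ov": 14,
    "phi-4-mini-ov": 4,
    "bling-tiny-llama-ov": 1.1,
}
viable = {n: p for n, p in candidates.items() if fits_in_memory(p, ram_gb=8)}
largest = max(viable, key=viable.get)
print(largest)  # -> 'phi-4-mini-ov' on an 8 GB machine
```

Under these assumptions, a 14B model needs roughly 9.4 GB and is excluded at an 8 GB budget, so the 4B model is the largest viable choice.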
