# Foundational Models

PropulsionAI offers a range of powerful foundational models that can be used as the starting point for your AI projects. These models have been pre-trained on vast datasets and are ready to be fine-tuned for your specific needs. Alternatively, if the pre-trained model meets your requirements, you can deploy it directly without any further fine-tuning.

### **Available Foundational Models**

1. **Meta Llama Series**
   * [**meta-llama/Meta-Llama-3.1-8B-Instruct**](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)\
     The latest model in the series, versatile and designed for a wide range of instruction-following tasks, featuring 8 billion parameters.
   * [**meta-llama/Meta-Llama-3-8B-Instruct**](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)\
     Another robust option in the Meta Llama series, optimized for instruction-following tasks with 8 billion parameters.
   * [**meta-llama/Llama-2-7b-chat**](https://huggingface.co/meta-llama/Llama-2-7b-chat)\
     A strong conversational model with 7 billion parameters, ideal for chat-based applications.
2. **Mistral AI Series**
   * [**mistralai/Mistral-7B-Instruct-v0.3**](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)\
     A highly capable model with 7 billion parameters, optimized for instruction-following across various domains, and notably strong at tasks such as function calling.
   * [**mistralai/Mistral-Nemo-Instruct-2407**](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)\
     An advanced model in the Mistral AI series, tailored for complex instructions and diverse applications.
3. **Google Gemma Series**
   * [**google/gemma-2-2b-it**](https://huggingface.co/google/gemma-2-2b-it)\
     A compact and efficient model with 2 billion parameters, suitable for lightweight applications and tasks.
   * [**google/gemma-2-9b-it**](https://huggingface.co/google/gemma-2-9b-it)\
     A more powerful model in the Gemma series, featuring 9 billion parameters for handling more complex tasks.
4. **Microsoft Phi Series**
   * [**microsoft/Phi-3-mini-4k-instruct**](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)\
     A compact model with 3.8 billion parameters and a 4k-token context window, designed for high-efficiency instruction-following in resource-constrained applications.
5. **Qwen Series**
   * [**Qwen/Qwen2-0.5B-Instruct**](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct)\
     A small but highly efficient model with 0.5 billion parameters, suitable for focused instruction-following tasks that require less computational power.

### **Using Foundational Models**

These foundational models provide you with a solid base for your AI projects. You can choose to:

* **Fine-tune**: Customize the model to better suit your specific data and requirements by fine-tuning it with your datasets.
* **Deploy Directly**: If the pre-trained model fits your use case, deploy it directly without any additional fine-tuning. This can save time and resources, especially for applications where the model's general capabilities are sufficient.

Whether you’re looking to fine-tune a model to achieve specific results or deploy a foundational model as-is, PropulsionAI provides the tools and flexibility to meet your needs.

### **Requesting a Model**

If you need a specific model that isn’t currently available in our foundational models, we’re here to help! You can request the addition of new models to the PropulsionAI platform by either:

* **Emailing us at** <support@propulsionhq.com> with the details of the model you need.
* **Sending a message on our** [**Discord channel**](https://discord.gg/J4RF7phwYN), where our support team and community are available to assist you.

We’re committed to continuously expanding our offerings to meet your needs, so don’t hesitate to reach out with your requests!

{% content-ref url="/pages/bxeVJKpKUGGjCWuKogPB" %}
[Deploying Models](/quick-start/deploying-models.md)
{% endcontent-ref %}

{% content-ref url="/pages/ZYsQF2uCnySIxNe8aBgg" %}
[Fine-tuning a Model](/quick-start/fine-tuning-a-model.md)
{% endcontent-ref %}


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.propulsionhq.com/introduction/foundational-models.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
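
As a minimal sketch, the query above can be performed from Python using only the standard library. The endpoint URL is the one documented above; the helper names (`build_ask_url`, `ask_docs`) are illustrative, not part of any PropulsionAI SDK:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Documentation endpoint documented on this page.
BASE_URL = "https://docs.propulsionhq.com/introduction/foundational-models.md"

def build_ask_url(question: str) -> str:
    """Return the query URL with the question URL-encoded as the `ask` parameter."""
    return f"{BASE_URL}?{urlencode({'ask': question})}"

def ask_docs(question: str) -> str:
    """Perform the GET request and return the response body as text."""
    with urlopen(build_ask_url(question)) as resp:
        return resp.read().decode("utf-8")

# Example (performs a live HTTP request):
# print(ask_docs("Which foundational models support function calling?"))
```

Keep each question specific and self-contained, as noted above; the encoding step matters because spaces and punctuation in the question must be escaped to form a valid URL.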
