Foundational Models
An introduction to the foundational models available on PropulsionAI and their use cases.
PropulsionAI offers a range of powerful foundational models that can be used as the starting point for your AI projects. These models have been pre-trained on vast datasets and are ready to be fine-tuned for your specific needs. Alternatively, if the pre-trained model meets your requirements, you can deploy it directly without any further fine-tuning.
Available Foundational Models
Meta Llama Series
meta-llama/Meta-Llama-3.1-8B-Instruct The latest model in the series: a versatile 8-billion-parameter model designed for a wide range of instruction-following tasks.
meta-llama/Meta-Llama-3-8B-Instruct Another robust option in the Meta Llama series, with 8 billion parameters, optimized for instruction-following tasks.
meta-llama/Llama-2-7b-chat A strong conversational model with 7 billion parameters, ideal for chat-based applications.
Mistral AI Series
mistralai/Mistral-7B-Instruct-v0.3 A highly capable model with 7 billion parameters, optimized for instruction-following tasks across various domains; it performs notably well on tasks such as function calling.
mistralai/Mistral-Nemo-Instruct-2407 An advanced model in the Mistral AI series, tailored for complex instructions and diverse applications.
Google Gemma Series
google/gemma-2-2b-it A compact and efficient model with 2 billion parameters, suitable for lightweight applications and tasks.
google/gemma-2-9b-it A more powerful model in the Gemma series, featuring 9 billion parameters for handling more complex tasks.
Microsoft Phi Series
microsoft/Phi-3-mini-4k-instruct A specialized small model with a 4K-token context window, designed for high-efficiency instruction-following in compact applications.
Qwen Series
Qwen/Qwen2-0.5B-Instruct A small but highly efficient model with 0.5 billion parameters, suitable for focused instruction-following tasks that require less computational power.
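When choosing among the models above, a common first filter is parameter count, since it roughly tracks both capability and serving cost. The sketch below is illustrative only: the `models_within_budget` helper is not part of any PropulsionAI SDK, and the parameter counts for Mistral-Nemo and Phi-3-mini are assumptions not stated on this page.

```python
# Illustrative helper for shortlisting the models listed above by size.
# Model IDs and most parameter counts come from this page; entries marked
# "assumption" are not stated here. This is not a PropulsionAI API.

FOUNDATIONAL_MODELS = {
    "meta-llama/Meta-Llama-3.1-8B-Instruct": 8.0,  # billions of parameters
    "meta-llama/Meta-Llama-3-8B-Instruct": 8.0,
    "meta-llama/Llama-2-7b-chat": 7.0,
    "mistralai/Mistral-7B-Instruct-v0.3": 7.0,
    "mistralai/Mistral-Nemo-Instruct-2407": 12.0,  # assumption: ~12B
    "google/gemma-2-2b-it": 2.0,
    "google/gemma-2-9b-it": 9.0,
    "microsoft/Phi-3-mini-4k-instruct": 3.8,       # assumption: ~3.8B
    "Qwen/Qwen2-0.5B-Instruct": 0.5,
}

def models_within_budget(max_params_b: float) -> list[str]:
    """Return model IDs at or under a parameter budget, largest first."""
    fits = [(p, m) for m, p in FOUNDATIONAL_MODELS.items() if p <= max_params_b]
    return [m for p, m in sorted(fits, reverse=True)]

print(models_within_budget(4.0))
```

For example, with a 4-billion-parameter budget the helper surfaces the Phi-3, Gemma 2B, and Qwen models, which are the natural candidates for lightweight deployments.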
Using Foundational Models
These foundational models provide you with a solid base for your AI projects. You can choose to:
Fine-tune: Customize the model to better suit your specific data and requirements by fine-tuning it with your datasets.
Deploy Directly: If the pre-trained model fits your use case, deploy it directly without any additional fine-tuning. This can save time and resources, especially for applications where the model's general capabilities are sufficient.
Whether you’re looking to fine-tune a model to achieve specific results or deploy a foundational model as-is, PropulsionAI provides the tools and flexibility to meet your needs.
Requesting a Model
If you need a specific model that isn’t currently available in our foundational models, we’re here to help! You can request the addition of new models to the PropulsionAI platform by either:
Emailing us at support@propulsionhq.com with the details of the model you need.
Sending a message on our Discord channel, where our support team and community are available to assist you.
We’re committed to continuously expanding our offerings to meet your needs, so don’t hesitate to reach out with your requests!