## Overview
KaibanJS supports integration with a variety of additional LLM providers and services, allowing you to expand your AI capabilities beyond the built-in options.
## What are Custom Integrations?
Custom integrations in KaibanJS allow you to use language models that aren't pre-integrated into the framework. These integrations require some additional setup but offer greater flexibility and access to specialized models.
## Available Custom Integrations
KaibanJS supports custom integrations with the following providers (a configuration sketch follows the list):
- Ollama: Run open-source models locally.
- Cohere: Access Cohere's suite of language models.
- Azure OpenAI: Use OpenAI models through Azure's cloud platform.
- Cloudflare: Integrate with Cloudflare's AI services.
- Groq: Utilize Groq's high-performance inference engines.
- Other Integrations: Explore additional options for specialized needs.
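Most of these providers ship a LangChain.js chat-model package whose instance can be handed to an agent through `llmInstance`. Here is a minimal sketch for two of them; the package names, option names, and model IDs follow common LangChain.js conventions and are assumptions to verify against each provider's integration page:

```js
import { ChatCohere } from "@langchain/cohere";
import { ChatGroq } from "@langchain/groq";

// Cohere: hosted models, authenticated with an API key.
// Model name is illustrative — pick one from Cohere's current lineup.
const cohereLLM = new ChatCohere({
  apiKey: process.env.COHERE_API_KEY,
  model: "command-r-plus",
});

// Groq: high-throughput hosted inference, also key-based.
const groqLLM = new ChatGroq({
  apiKey: process.env.GROQ_API_KEY,
  model: "llama-3.1-8b-instant",
});
```

Either instance can then be passed as `llmInstance` when constructing an agent, as shown under Getting Started below.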
## Key Benefits
- Flexibility: Choose from a wider range of model providers.
- Local Deployment: Options for running models on your own infrastructure (see the Ollama sketch after this list).
- Specialized Models: Access to models optimized for specific tasks or industries.
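The local-deployment path typically means Ollama. A minimal sketch, assuming the `@langchain/ollama` package and an Ollama server running on its default port (the model name is illustrative):

```js
import { ChatOllama } from "@langchain/ollama";

// Runs entirely on your own machine: no API key, and no data leaves your network.
// Assumes `ollama serve` is running locally and the model has already been pulled.
const localLLM = new ChatOllama({
  baseUrl: "http://localhost:11434", // Ollama's default endpoint
  model: "llama3.1",
  temperature: 0.7,
});
```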
## Getting Started
To use a custom integration, you'll typically need to import the specific LLM package and configure it before passing it to your agent:
```js
import { SomeLLM } from "some-llm-package";

const customLLM = new SomeLLM({
  // LLM-specific configuration
});

const agent = new Agent({
  name: 'Custom AI Assistant',
  role: 'Specialized Helper',
  llmInstance: customLLM
});
```
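Once configured, the agent behaves like any other KaibanJS agent and can be wired into a team. As a sketch of the surrounding setup (the task description and team name are illustrative):

```js
import { Agent, Task, Team } from "kaibanjs";

// An agent backed by a custom LLM slots into a team
// exactly like one using a built-in provider.
const task = new Task({
  description: "Summarize the provided article in three bullet points.",
  expectedOutput: "A concise three-bullet summary.",
  agent,
});

const team = new Team({
  name: "Research Team",
  agents: [agent],
  tasks: [task],
});

team.start();
```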
Explore the individual integration pages for detailed setup instructions and configuration options for each supported LLM.
Is there something unclear or quirky in the docs? Maybe you have a suggestion or spotted an issue? Help us refine and enhance our documentation by submitting an issue on GitHub. We’re all ears!