Other LLM Integrations
KaibanJS, through its integration with LangChain, supports a wide variety of language models beyond the main providers. This section provides an overview of additional LLM integrations you can use with your KaibanJS agents.
Available Integrations
KaibanJS supports the following additional language model integrations through LangChain:
- Alibaba Tongyi: Supports the Alibaba Qwen (Tongyi Qianwen) family of models.
- Arcjet Redact: Redacts sensitive information from text before it is sent to a model.
- Baidu Qianfan: Provides access to Baidu's language models.
- Deep Infra: Offers serverless access to a range of hosted open-source models.
- Fireworks: AI inference platform for running large language models.
- Friendli: Serves generative AI models with a focus on performance and cost efficiency.
- Llama CPP: (Node.js only) Enables use of Llama models.
- Minimax: Provides access to Minimax's large language models.
- Moonshot: Supports the Moonshot AI family of models.
- PremAI: Offers access to PremAI models.
- Tencent Hunyuan: Supports the Tencent Hunyuan family of models.
- Together AI: Provides an API to query 50+ open-source models.
- WebLLM: (Web environments only) Enables browser-based LLM usage.
- YandexGPT: Supports calling YandexGPT chat models.
- ZhipuAI: Supports the Zhipu AI family of models.
Integration Process
The general process for integrating these models with KaibanJS is similar to other custom integrations:
1. Install the necessary LangChain package for the specific integration.
2. Import the appropriate chat model class from LangChain.
3. Configure the model with the required parameters.
4. Use the configured model instance in your KaibanJS agent.
Here's a generic example of how you might integrate one of these models:
```js
import { SomeSpecificChatModel } from "@langchain/some-specific-package";
import { Agent } from 'kaibanjs';

// Configure the model with its integration-specific options.
const customModel = new SomeSpecificChatModel({
  apiKey: process.env.SOME_API_KEY,
  // Other necessary parameters...
});

// Hand the configured model instance to a KaibanJS agent.
const agent = new Agent({
  name: 'Custom Model Agent',
  role: 'Assistant',
  goal: 'Provide assistance using a custom language model.',
  background: 'AI Assistant powered by a specific LLM integration',
  llmInstance: customModel
});
```
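To make this concrete, here is a sketch using the Together AI integration. The package path, class name, and constructor fields reflect the LangChain documentation at the time of writing and may change, and the model ID shown is just one example of the models Together AI hosts:

```js
import { ChatTogetherAI } from "@langchain/community/chat_models/togetherai";
import { Agent } from 'kaibanjs';

// Together AI can also read TOGETHER_AI_API_KEY from the environment;
// passing it explicitly keeps the configuration visible.
const togetherModel = new ChatTogetherAI({
  apiKey: process.env.TOGETHER_AI_API_KEY,
  model: "mistralai/Mixtral-8x7B-Instruct-v0.1", // example hosted model
  temperature: 0.7,
});

const agent = new Agent({
  name: 'Together Model Agent',
  role: 'Assistant',
  goal: 'Provide assistance using an open-source model hosted on Together AI.',
  background: 'AI Assistant powered by the Together AI integration',
  llmInstance: togetherModel
});
```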
Features and Compatibility
Different integrations offer varying levels of support for advanced features. Here's a general overview:
- Streaming: Most integrations support streaming responses (see the sketch below).
- JSON Mode: Some integrations support structured JSON outputs.
- Tool Calling: Many integrations support function/tool calling capabilities.
- Multimodal: Some integrations support processing multiple types of data (text, images, etc.).
Refer to the specific LangChain documentation for each integration to understand its exact capabilities and configuration options.
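When an integration does support streaming, LangChain chat models expose it through a common `.stream()` method. A minimal sketch, assuming `customModel` is the instance configured earlier and the code runs in an ES module (or other async) context:

```js
// Stream a response incrementally instead of waiting for the full message.
const stream = await customModel.stream(
  "Summarize the KaibanJS project in one sentence."
);

for await (const chunk of stream) {
  // Each chunk is a partial message; `content` carries the newly
  // generated text.
  process.stdout.write(chunk.content);
}
```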
Best Practices
- Documentation: Always refer to the latest LangChain documentation for the most up-to-date integration instructions.
- API Keys: Securely manage API keys and other sensitive information using environment variables.
- Error Handling: Implement robust error handling, as different integrations may surface errors differently (see the sketch after this list).
- Testing: Thoroughly test the integration, especially when using less common or region-specific models.
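As an illustration of the API key and error handling advice above, here is a minimal sketch reusing `customModel` and `SOME_API_KEY` from the generic example; the exact error shape varies by provider, so adapt the handling to what your integration actually throws:

```js
// Fail fast at startup if the key is missing, rather than at the
// first model call.
if (!process.env.SOME_API_KEY) {
  throw new Error('SOME_API_KEY is not set; add it to your environment.');
}

try {
  const response = await customModel.invoke('Hello!');
  console.log(response.content);
} catch (error) {
  // Integrations surface failures differently (HTTP status codes, wrapped
  // provider errors, plain messages). Log enough detail to distinguish
  // auth failures from rate limits before deciding whether to retry.
  console.error('Custom model call failed:', error);
}
```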
Limitations
- Some integrations may have limited documentation or community support.
- Certain integrations might be region-specific or have unique licensing requirements.
- Performance and capabilities can vary significantly between different integrations.
Further Resources
For the full list of supported integrations and their configuration options, see the LangChain chat model integration documentation at https://js.langchain.com/docs/integrations/chat/.
Is there something unclear or quirky in the docs? Maybe you have a suggestion or spotted an issue? Help us refine and enhance our documentation by submitting an issue on GitHub. We're all ears!