# Integration with Other AI Models

## Overview
UniAuto MCP Server is compatible with various AI models that support the Model Context Protocol (MCP). This guide covers how to integrate UniAuto with different AI assistants beyond Claude.
## Supported Models
UniAuto has been tested and works with the following AI models:
- Anthropic Claude (all versions)
- OpenAI GPT-4 and derivatives
- Cohere Command models
- Any model accessible via Smithery.ai
## Integration Methods
There are two primary ways to integrate UniAuto with AI models:

- **Direct MCP Integration**: for models that natively support MCP
- **Smithery.ai Bridge**: for models that don’t have native MCP support
## OpenAI GPT-4 & ChatGPT Integration

### Prerequisites
- OpenAI API key or ChatGPT Plus subscription
- UniAuto MCP Server installed and running
- Smithery.ai account (recommended)
### Setup with Smithery.ai

1. Install the Smithery CLI:

   ```bash
   npm install -g @smithery/cli
   ```

2. Connect UniAuto to Smithery:

   ```bash
   smithery connect uniauto-mcp-server
   ```

3. Connect OpenAI models:

   ```bash
   smithery connect --assistant openai
   ```

4. Configure your OpenAI API key:

   ```bash
   smithery config set openai.api_key YOUR_API_KEY
   ```
### Usage with GPT-4

Once connected, you can use Smithery’s integration to let GPT-4 control UniAuto:

```bash
smithery chat --model gpt-4
```

Then ask GPT-4 to automate tasks:

```
Can you help me test the login form on example.com?
```

GPT-4 will use UniAuto to execute the automation.
### ChatGPT Plus with Custom GPT

If you have ChatGPT Plus, you can create a Custom GPT that uses the UniAuto tool:

1. Go to ChatGPT and create a new GPT
2. In the Configure tab, add an Action
3. Set the Authentication to “None”
4. For the API schema, use the UniAuto MCP manifest URL
5. Save and publish your GPT

Now you can chat with your Custom GPT and ask it to perform automation tasks.
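If your UniAuto build does not serve an OpenAPI schema directly, a minimal hand-written one for the invoke endpoint might look like the sketch below. The server URL and field names are assumptions based on the HTTP API described later in this guide; also note that Custom GPT Actions are called from OpenAI’s servers, so a local instance would need to be exposed via a tunnel or public URL.

```yaml
openapi: 3.1.0
info:
  title: UniAuto MCP Server
  version: 1.0.0
servers:
  - url: http://localhost:3000
paths:
  /api/mcp/invoke:
    post:
      operationId: invokeAction
      summary: Execute an automation command
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                action:
                  type: string
                parameters:
                  type: object
      responses:
        '200':
          description: Result of the automation command
```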
## Cohere Command Integration

### Prerequisites
- Cohere API key
- UniAuto MCP Server running
- Smithery.ai account
### Setup

1. Connect Cohere to Smithery:

   ```bash
   smithery connect --assistant cohere
   ```

2. Configure your Cohere API key:

   ```bash
   smithery config set cohere.api_key YOUR_API_KEY
   ```
### Usage

Start a conversation with a Cohere model through Smithery:

```bash
smithery chat --model command
```
Then request automation tasks as with other models.
## Local Models Integration

### Supported Local Models
UniAuto can work with local models that support function calling:
- LM Studio models
- Ollama models
- LocalAI models
### Setup with Local Models

1. Install and configure your local model server
2. Expose an API endpoint compatible with OpenAI’s API
3. Configure Smithery to use your local endpoint:

   ```bash
   smithery config set local.api_url http://localhost:YOUR_PORT
   smithery config set local.model YOUR_MODEL_NAME
   ```

4. Start a chat session:

   ```bash
   smithery chat --provider local
   ```
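"OpenAI-compatible" in step 2 means the server accepts requests in OpenAI's chat-completions format (LM Studio, Ollama, and LocalAI all expose such an endpoint, typically at `/v1/chat/completions`). As a rough sketch, the request body you would send looks like this; the helper below only assembles the payload, and the model name is a placeholder:

```javascript
// Assemble a request body for an OpenAI-compatible /v1/chat/completions
// endpoint. `tools` is optional and follows OpenAI's function-calling
// format; omit it for models without tool support.
function buildChatRequest(model, messages, tools) {
  const body = { model, messages };
  if (tools && tools.length > 0) {
    body.tools = tools;
  }
  return body;
}
```

You would POST the returned object as JSON to your local endpoint, substituting whatever model name your server reports.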
## Custom Integration for Any Model

For models without direct support, you can create a custom integration by using the UniAuto HTTP API directly. The key endpoints are:

- `GET /api/mcp/manifest`: returns the tool capabilities
- `POST /api/mcp/invoke`: executes automation commands
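A custom integration for a function-calling model typically translates the manifest's action list into the tool-definition format the model expects. The manifest shape assumed below (`actions` entries with `name`, `description`, and `parameters`) is an illustration only; inspect the actual `/api/mcp/manifest` response and adjust the field names accordingly.

```javascript
// Convert MCP manifest actions into OpenAI-style "tool" definitions.
// NOTE: the manifest shape ({ actions: [{ name, description, parameters }] })
// is an assumption for illustration -- check the real manifest response.
function manifestToTools(manifest) {
  return (manifest.actions || []).map((action) => ({
    type: 'function',
    function: {
      name: action.name,
      description: action.description || '',
      // Parameter schemas are assumed to already be JSON Schema objects.
      parameters: action.parameters || { type: 'object', properties: {} },
    },
  }));
}
```

The model can then emit tool calls whose name and arguments map directly onto `/api/mcp/invoke` requests.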
### Example Custom Integration

```javascript
// Send an automation command to the UniAuto MCP server.
async function invokeUniAuto(action, parameters) {
  const response = await fetch('http://localhost:3000/api/mcp/invoke', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ action, parameters })
  });
  if (!response.ok) {
    throw new Error(`UniAuto request failed: ${response.status}`);
  }
  return await response.json();
}

// Example usage
invokeUniAuto('navigate', { url: 'https://example.com' });
```
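Real automation flows usually chain several commands, so a thin runner that executes steps in order and stops on the first failure is useful. The runner takes the invoke function as a parameter, and the error shape it checks (`{ error: ... }` in the response) is an assumption about how UniAuto reports failures:

```javascript
// Run automation steps sequentially through an invoke function (such as
// the invokeUniAuto helper above). Stops at the first step whose result
// contains an `error` field -- that error shape is an assumption about
// UniAuto's response format.
async function runSteps(invoke, steps) {
  const results = [];
  for (const step of steps) {
    const result = await invoke(step.action, step.parameters);
    results.push({ action: step.action, result });
    if (result && result.error) {
      break; // don't continue a flow whose previous step failed
    }
  }
  return results;
}
```

For example, `runSteps(invokeUniAuto, [{ action: 'navigate', parameters: { url: 'https://example.com' } }, { action: 'click', parameters: { selector: '#login' } }])` would navigate and then click, skipping the click if navigation fails.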
## Model-Specific Considerations

Different AI models have different strengths when it comes to test automation:

- **Claude models**: Excellent at understanding complex workflows and adapting to UI changes
- **GPT-4**: Strong at generating complex test cases and handling edge cases
- **Specialized models**: May perform better for domain-specific testing
## Troubleshooting Cross-Model Issues

### Common Problems
- **Model doesn’t recognize UniAuto commands**
  - Ensure the model has function calling capabilities
  - Verify that the MCP manifest is being properly loaded
- **Model generates invalid parameters**
  - Some models may not follow the parameter schema strictly
  - Use schema validation on your end before sending commands to UniAuto
- **Different reasoning capabilities**
  - Models vary in their ability to reason about UI and test strategies
  - You may need to adjust your prompts based on the model
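The schema-validation advice above can be sketched as a small pre-flight check on model-generated parameters. A production integration should use a full JSON Schema validator (e.g. Ajv); this hand-rolled version only checks required keys and primitive types, and the schema shape is assumed to match what the manifest advertises:

```javascript
// Minimal pre-flight validation of model-generated parameters against a
// JSON-Schema-like object before sending them to UniAuto. Only covers
// required keys and primitive types ('string', 'number', 'boolean',
// 'object') -- use a real validator such as Ajv for full coverage.
function validateParameters(schema, parameters) {
  const errors = [];
  for (const key of schema.required || []) {
    if (!(key in parameters)) {
      errors.push(`missing required parameter: ${key}`);
    }
  }
  for (const [key, value] of Object.entries(parameters)) {
    const prop = (schema.properties || {})[key];
    if (!prop) {
      errors.push(`unexpected parameter: ${key}`);
    } else if (prop.type && typeof value !== prop.type) {
      errors.push(`parameter ${key} should be of type ${prop.type}`);
    }
  }
  return { valid: errors.length === 0, errors };
}
```

Rejecting a malformed command locally gives the model a chance to retry with corrected arguments instead of failing inside the automation run.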
### Best Practices

- **Start simple**: Begin with basic automation tasks to ensure everything is working
- **Use standardized prompts**: Create template prompts that work well across different models
- **Leverage self-healing**: UniAuto’s self-healing features help overcome differences in selector generation
- **Test model-specific edge cases**: Some models may handle certain automation scenarios better than others
## Resources