A Go-based proxy server that provides OpenAI-compatible API endpoints for any AI website using browser automation.
This project creates a bridge between OpenAI's API format and various AI websites' web interfaces. It uses ChromeDP for browser automation to interact with any AI website and provides a REST API that mimics OpenAI's chat completions endpoint.
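Because the API mimics OpenAI's chat completions endpoint, any plain HTTP client can call it. A minimal Go sketch (assuming the proxy is running on its default port 2048 and that an instance named chatgpt is configured; adjust the model string to your own setup):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Standard OpenAI-style chat completion request; the model field is
	// "instance-name/model-name" and must match an instance in runner/main.yaml.
	body, _ := json.Marshal(map[string]any{
		"model": "chatgpt/gpt-4",
		"messages": []map[string]string{
			{"role": "user", "content": "Hello, how are you?"},
		},
	})

	resp, err := http.Post("http://localhost:2048/v1/chat/completions",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The proxy returns the answer in OpenAI's chat completion response format.
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```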
- OpenAI-Compatible API: Supports the /v1/chat/completions endpoint
- Browser Automation: Uses ChromeDP with the Fingerprint Chromium browser for web automation
- Request Queue: Implements a queue system to handle requests sequentially
- Configurable Workflows: YAML-based configuration for different automation workflows
- Multi-AI Service Support: Supports ChatGPT, Gemini AI Studio, Grok, and more
- Multi-Instance Support: Can manage multiple AI service instances simultaneously
- Screenshot API: Built-in screenshot functionality for debugging
- Authentication Management: Automatic cookie and session management
Currently supports the following AI services:
- ChatGPT (https://chatgpt.com/)
- Gemini AI Studio (https://aistudio.google.com/)
- Grok (https://grok.com/)
Each service has a dedicated adapter to handle its specific response format and interaction patterns.
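As a rough illustration of the pattern, an adapter exposes a name and converts the service's raw response into OpenAI-compatible content. The interface below is a hypothetical sketch, not the actual definition in internal/adapter/adapter.go:

```go
package adapter

// Adapter is an illustrative sketch of a per-service adapter; the real
// interface in internal/adapter/adapter.go may differ.
type Adapter interface {
	// Name returns the adapter identifier used in runner/main.yaml,
	// e.g. "chatgpt", "gemini-aistudio", or "grok".
	Name() string
	// Convert turns a raw response captured from the service's sniff-url
	// into OpenAI-compatible chat completion content.
	Convert(raw []byte) (content string, err error)
}
```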
The application consists of several key components:
- API Server (internal/api/): Gin-based HTTP server providing OpenAI-compatible endpoints
- Browser Manager (internal/browser/chrome/): Manages ChromeDP browser instances and contexts
- Runner System (internal/runner/): Executes YAML-defined workflows for browser automation
- Method Library (internal/method/): Collection of automation methods (click, input, etc.; see the sketch after this list)
- Adapter System (internal/adapter/): Handles response format conversion for different AI services
- Configuration (internal/config/): Application configuration management
- Utils (internal/utils/): Common utility functions
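As a loose illustration of how a method library like this can be wired up, the sketch below maps step names to ChromeDP-backed functions. The Method type, registry variable, and signatures are hypothetical, not the project's actual code in internal/method/:

```go
package method

import (
	"context"

	"github.com/chromedp/chromedp"
)

// Method is an illustrative signature for a single automation step;
// the real definitions live in internal/method/ and may differ.
type Method func(ctx context.Context, args ...string) error

// registry maps step names that a YAML workflow might reference to their
// ChromeDP-backed implementations.
var registry = map[string]Method{
	"click": func(ctx context.Context, args ...string) error {
		return chromedp.Run(ctx, chromedp.Click(args[0], chromedp.ByQuery))
	},
	"input": func(ctx context.Context, args ...string) error {
		return chromedp.Run(ctx, chromedp.SendKeys(args[0], args[1], chromedp.ByQuery))
	},
}
```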
- Client sends an OpenAI-format request to /v1/chat/completions
- The request is queued in the request queue system (see the sketch after this list)
- Runner executes the appropriate YAML workflow to interact with the AI service
- Browser automation performs the necessary actions (input text, click buttons, etc.)
- Adapter intercepts and processes the response from the AI service
- Response is formatted and returned to the client
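The sequential queue mentioned above can be pictured as a single worker draining a channel, so only one browser interaction runs at a time. The sketch below is illustrative only; the real implementation lives in internal/api/queue.go and differs in detail:

```go
package api

// job pairs a prompt with a channel for returning the result, so HTTP
// handlers can wait while a single worker drives the browser sequentially.
type job struct {
	prompt string
	result chan string
}

type queue struct{ jobs chan job }

func newQueue() *queue {
	q := &queue{jobs: make(chan job, 64)}
	go func() {
		for j := range q.jobs {
			// In the real system this step runs the YAML workflow
			// against the browser; here it is just a placeholder.
			j.result <- "response for: " + j.prompt
		}
	}()
	return q
}

// Submit enqueues a prompt and blocks until the worker produces an answer.
func (q *queue) Submit(prompt string) string {
	j := job{prompt: prompt, result: make(chan string, 1)}
	q.jobs <- j
	return <-j.result
}
```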
- Go 1.24 or later
- Fingerprint Chromium browser
- Clone the repository:

git clone https://github.com/luispater/anyAIProxyAPI.git
cd anyAIProxyAPI

- Install dependencies:

go mod download

- Configure the application by editing runner/main.yaml
The main configuration file is runner/main.yaml:
version: "1"
debug: true
browser:
  fingerprint-chromium-path: "/Applications/Chromium.app/Contents/MacOS/Chromium"
  args:
    - "--fingerprint=1000"
    - "--timezone=America/Los_Angeles"
    - "--remote-debugging-port=9222"
    - "--lang=en-US"
    - "--accept-lang=en-US"
  user-data-dir: "/anyAIProxyAPI/user-data-dir"
  proxy-url: "http://user:pass@192.168.1.1:8080/" # proxy url for browser, if instance-alone is false, this proxy setting will be ignored
api-port: "2048"
headless: false
instance-alone: true # if true, each instance will have its own browser instance
logfile: "any-ai-proxy.log"
tokens: # Global tokens for API validation (optional)
  - "global-token-1"
  - "global-token-2"
instance:
  - name: "gemini-aistudio"
    adapter: "gemini-aistudio"
    proxy-url: "socks5://user:pass@192.168.1.1:1080/" # proxy url for each instance browser, if instance-alone is true, this proxy setting will be used
    url: "https://aistudio.google.com/prompts/new_chat"
    sniff-url:
      - "https://alkalimakersuite-pa.clients6.google.com/$rpc/google.internal.alkali.applications.makersuite.v1.MakerSuiteService/GenerateContent"
    auth:
      file: "auth/gemini-aistudio.json"
      check: "ms-settings-menu"
    runner: # must be init, chat_completions, context_canceled
      init: "init-system" # init runner
      chat_completions: "chat_completions" # chat_completions runner
      context_canceled: "context-canceled" # context canceled (client disconnect) runner
    tokens: # Instance-specific tokens for API validation (optional)
      - "gemini-token-3"
      - "gemini-token-4"
  - name: "chatgpt"
    adapter: "chatgpt"
    proxy-url: "" # proxy url for each instance browser, if this setting is empty, the browser will be directly connected to the internet
    url: "https://chatgpt.com/"
    sniff-url:
      - "https://chatgpt.com/backend-api/conversation"
    auth:
      file: "auth/chatgpt.json"
      check: 'div[id="sidebar-header"]'
    runner:
      init: "init"
      chat_completions: "chat_completions"
      context_canceled: "context-canceled"
  - name: "grok"
    adapter: "grok"
    proxy-url: ""
    url: "https://grok.com/"
    sniff-url:
      - "https://grok.com/rest/app-chat/conversations/new"
    auth:
      file: "auth/grok.json"
      check: 'a[href="/chat#private"]'
    runner:
      init: "init-system"
      chat_completions: "chat_completions"
      context_canceled: "context-canceled"

Configuration options:

- debug: Enable debug mode for detailed logging
- browser: Browser executable settings
  - fingerprint-chromium-path: Path to the Fingerprint Chromium browser
  - args: Browser launch arguments
  - user-data-dir: User data directory
  - proxy-url: Proxy URL for the browser; if instance-alone is false, this setting is ignored
- api-port: Port for the API server
- headless: Run the browser in headless mode
- instance-alone: If true, each instance runs its own browser instance
- tokens: Global tokens for API validation (optional)
- instance: Array of AI service instances to manage; each instance has its own configuration
  - name: Instance name
  - adapter: Adapter name (corresponds to different AI services)
  - proxy-url: Proxy URL for the instance browser; if instance-alone is false, this setting is ignored
  - url: AI service URL
  - sniff-url: URL patterns for intercepting responses
  - auth: Authentication configuration
    - file: File to store authentication information
    - check: CSS selector used to check login status
  - runner: Runner configuration; all runner files must be defined in a directory named after the instance
  - tokens: Instance-specific tokens for API validation (optional)
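For orientation, a configuration like the one above could be loaded into Go structs along the following lines. The struct layout and Load function are a simplified sketch, not the project's actual definitions in internal/config/:

```go
package config

import (
	"os"

	"gopkg.in/yaml.v3"
)

// Config mirrors the main.yaml layout shown above; this is a trimmed
// sketch, not the project's exact struct definitions.
type Config struct {
	Version       string   `yaml:"version"`
	Debug         bool     `yaml:"debug"`
	APIPort       string   `yaml:"api-port"`
	Headless      bool     `yaml:"headless"`
	InstanceAlone bool     `yaml:"instance-alone"`
	Logfile       string   `yaml:"logfile"`
	Tokens        []string `yaml:"tokens"`
	Browser       struct {
		Path        string   `yaml:"fingerprint-chromium-path"`
		Args        []string `yaml:"args"`
		UserDataDir string   `yaml:"user-data-dir"`
		ProxyURL    string   `yaml:"proxy-url"`
	} `yaml:"browser"`
	Instance []struct {
		Name     string   `yaml:"name"`
		Adapter  string   `yaml:"adapter"`
		ProxyURL string   `yaml:"proxy-url"`
		URL      string   `yaml:"url"`
		SniffURL []string `yaml:"sniff-url"`
	} `yaml:"instance"`
}

// Load reads and parses a main.yaml-style configuration file.
func Load(path string) (*Config, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var cfg Config
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}
```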
For details on the runner file syntax, please refer to runner.md
go run main.go

The server will start on the configured port (default: 2048).
Authentication information can be uploaded via http://localhost:2048/v1/auth/upload
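The same upload can be done programmatically. A minimal Go sketch (assuming the default port 2048 and an instance named gemini-aistudio; the auth string format follows the example further below):

```go
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	// The "auth" field is a JSON string holding the exported cookies and
	// local storage for the target site.
	auth := `{"cookies":[],"local_storage":{"key":"value"}}`
	payload, _ := json.Marshal(map[string]string{
		"name": "gemini-aistudio", // instance name from runner/main.yaml
		"auth": auth,
	})

	resp, err := http.Post("http://localhost:2048/v1/auth/upload",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("upload status:", resp.Status)
}
```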
curl -X POST http://localhost:2048/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "instance-name/model-name",
"messages": [
{
"role": "user",
"content": "Hello, how are you?"
}
]
}'

GET http://localhost:2048/screenshot?instance=instance-name

curl -X POST http://localhost:2048/v1/auth/upload \
-H "Content-Type: application/json" \
-d '{
"name": "instance-name",
"auth": "{\"cookies\":[],\"local_storage\":{\"key\":\"value\"}}"
}'

GET http://localhost:2048/

curl -X POST http://localhost:2048/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "chatgpt/gpt-4",
"messages": [
{
"role": "user",
"content": "Explain the basic principles of quantum computing"
}
]
}'

curl -X POST http://localhost:2048/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gemini/gemini-pro",
"messages": [
{
"role": "user",
"content": "Write a Python quicksort algorithm"
}
]
}'

curl -X POST http://localhost:2048/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "grok/grok3",
"messages": [
{
"role": "user",
"content": "What are the latest developments in AI?"
}
]
}'

The application uses a YAML-based workflow system to define browser automation sequences. Workflows are stored in the runner/ directory and define step-by-step instructions for interacting with AI services.
Each AI service instance has its own workflow directory:
- runner/instance-name/ - Workflows for the corresponding AI website
Each directory contains the following core workflow files:
- init.yaml or init-system.yaml - Initialization workflow
- chat_completions.yaml - Chat completion workflow
- context-canceled.yaml - Context cancellation workflow
For detailed information about the runner system, see runner.md.
├── main.go # Application entry point
├── go.mod # Go module file
├── go.sum # Go dependency checksum file
├── LICENSE # MIT license
├── README.md # Project documentation
├── runner.md # Runner system documentation
├── internal/ # Internal packages
│ ├── adapter/ # AI website adapters
│ │ ├── adapter.go # Adapter interface
│ │ ├── chatgpt.go # ChatGPT adapter
│ │ ├── gemini-aistudio.go # Gemini AI Studio adapter
│ │ └── grok.go # Grok adapter
│ ├── api/ # HTTP API server
│ │ ├── server.go # Server main
│ │ ├── handlers.go # API handlers
│ │ ├── queue.go # Request queue
│ │ └── processor.go # Chat processor
│ ├── browser/ # Browser management
│ │ └── chrome/ # ChromeDP manager
│ ├── config/ # Configuration handling
│ ├── html/ # HTML content
│ ├── method/ # Automation methods
│ ├── proxy/ # Proxy server
│ ├── runner/ # Workflow execution engine
│ └── utils/ # Utility functions
├── runner/ # Workflow configurations
│ ├── main.yaml # Main configuration file
│ └── instance-name/ # Instance workflows
└── auth/ # Authentication files
go build -o any-ai-proxy main.go

go test ./...

- Go 1.24+: Main programming language
- ChromeDP: Browser automation library (see the sketch after this list)
- Gin: HTTP web framework
- YAML: Configuration file format
- Logrus: Structured logging library
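For orientation, basic ChromeDP usage looks roughly like this; it is a standalone illustration of the library, not the project's own automation code:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/chromedp/chromedp"
)

func main() {
	// Create a browser context and bound its lifetime.
	ctx, cancel := chromedp.NewContext(context.Background())
	defer cancel()
	ctx, cancel = context.WithTimeout(ctx, 30*time.Second)
	defer cancel()

	// Navigate to a page and read its title.
	var title string
	if err := chromedp.Run(ctx,
		chromedp.Navigate("https://example.com/"),
		chromedp.Title(&title),
	); err != nil {
		log.Fatal(err)
	}
	log.Println("page title:", title)
}
```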
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License. Refer to the LICENSE file for details.
This project was inspired by AIStudioProxyAPI
This project is intended for educational and research purposes. Please ensure you comply with the terms of service of any AI website you access with this software.
Q: How do I add support for a new AI service?
A: You need to create a new adapter (in internal/adapter/) and corresponding workflow configurations (in the runner/ directory).

Q: What should I do if the browser fails to start?
A: Check that the configured Fingerprint Chromium path is correct and that the browser executable exists.

Q: How do I enable detailed logging for debugging?
A: Set debug: true in runner/main.yaml, which enables detailed debug logging.

Q: Which platforms are supported?
A: macOS, Linux, and Windows are supported, but each requires the corresponding platform's Fingerprint Chromium browser.