AI connectors

Bonitasoft AI connectors let you integrate large language models from providers such as OpenAI, Anthropic, Google Gemini, Mistral, Azure AI Foundry, Ollama, DeepSeek, Groq, and Cohere into your business processes. These connectors support three powerful use cases:

  • Generate text content from custom prompts

  • Classify documents into predefined categories

  • Extract structured data from unstructured text or images

By securely sending data from your processes to LLMs over HTTPS, you can enhance automation, improve decision-making, and streamline information handling, all without writing custom code.

The Bonita AI connectors are available from Bonita Community 10.2 (2024.3) onward.

AI connectors flow diagram

Supported providers: OpenAI | Anthropic | Gemini | Mistral | Azure | Ollama | DeepSeek | Groq | Cohere

Getting started

To use a connector, add it as an extension dependency to your Bonita project. Choose the one that matches your AI provider:

  • OpenAI — GPT-4o and other GPT models

  • Anthropic — Claude models with vision support

  • Google Gemini — Fast and cost-effective, large context

  • Mistral AI — EU-hosted, GDPR-friendly

  • Azure AI Foundry — Enterprise compliance with Azure AD

  • Ollama — Local/on-premise LLMs, no API key needed

  • DeepSeek — Cost-effective AI with chain-of-thought reasoning (R1)

  • Groq — Ultra-fast LPU inference, 10-100x faster than GPU providers

  • Cohere — Enterprise RAG with citations and multilingual support

Image documents are not yet supported by the Mistral connector due to a limitation of the underlying library.

Connection configuration (shared parameters)

All AI connectors share these connection parameters regardless of the provider. See each provider page for provider-specific details.

apiKey (optional; default: changeMe)

The AI provider API key. The parameter is optional for testing purposes but required with official endpoints.

The connector looks for the API key value in this order:

  • A system environment variable named AI_API_KEY

  • A JVM property named AI_API_KEY (-DAI_API_KEY=xxx)

  • The apiKey connector parameter

If none of these provides a value, the placeholder default changeMe is used.

url (optional; default: the official provider endpoint)

The AI provider endpoint URL. This parameter lets you point to an alternate endpoint for tests or custom deployments. Defaults to the official provider endpoint if not specified.

requestTimeout (optional; default: null)

The request timeout in milliseconds for AI provider calls.

chatModelName (optional)

The model to use for chat. See each provider page for details. Default per provider:

  • OpenAI: gpt-4o

  • Anthropic: claude-sonnet-4-6

  • Gemini: gemini-2.0-flash

  • MistralAI: pixtral-12b-2409

  • Azure: (your deployment name)

  • Ollama: llama3.1

  • DeepSeek: deepseek-chat

  • Groq: llama-3.3-70b-versatile

  • Cohere: command-r-plus

modelTemperature (optional; default: null)

The temperature to use for the model. Higher values produce more creative responses. Must be between 0 and 1. Leave blank if the selected model does not support this parameter; if it is not set, no temperature is applied in the chat context.
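The API-key lookup order can be sketched in plain Java. This is an illustration of the documented precedence only, not the connector's actual implementation:

```java
// Sketch of the documented API-key precedence (illustrative only;
// the real connector's internals may differ).
public class ApiKeyResolver {

    public static String resolveApiKey(String apiKeyParameter) {
        // 1. System environment variable AI_API_KEY
        String env = System.getenv("AI_API_KEY");
        if (env != null && !env.isBlank()) return env;
        // 2. JVM property AI_API_KEY (-DAI_API_KEY=xxx)
        String prop = System.getProperty("AI_API_KEY");
        if (prop != null && !prop.isBlank()) return prop;
        // 3. The connector's apiKey parameter
        if (apiKeyParameter != null && !apiKeyParameter.isBlank()) return apiKeyParameter;
        // 4. Placeholder default
        return "changeMe";
    }

    public static void main(String[] args) {
        System.out.println(resolveApiKey(args.length > 0 ? args[0] : null));
    }
}
```

Because the environment variable and JVM property take precedence, you can keep the apiKey parameter empty in process definitions and inject the real key at deployment time.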

Operations overview

AI connectors support three operations. See each provider page for detailed input/output parameter tables and JSON examples.

Ask

Takes a user prompt, sends it to the AI provider, and returns the response. The prompt text can ask questions about a provided process document. Supports an optional JSON schema for structured output.
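For instance, a structured-output schema for an Ask call might look like the following. The field names here are illustrative, not prescribed by the connector:

```json
{
  "type": "object",
  "properties": {
    "summary": { "type": "string" },
    "sentiment": { "type": "string", "enum": ["positive", "neutral", "negative"] }
  },
  "required": ["summary", "sentiment"]
}
```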

Classify

Classifies a process document into one of your predefined categories. Returns a JSON object with category and confidence fields.
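As an illustration, a classification result has this shape (the category name and score below are hypothetical):

```json
{
  "category": "invoice",
  "confidence": 0.92
}
```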

Extract

Extracts structured data from a document using field names or a JSON schema. You must provide at least one of the fieldsToExtract or outputJsonSchema parameters.

When using a JSON schema, you must list every field you want in the JSON response in the schema's required property.
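As a sketch, a schema extracting two fields (the field names are illustrative) would list both in required:

```json
{
  "type": "object",
  "properties": {
    "customerName": { "type": "string" },
    "invoiceDate": { "type": "string" }
  },
  "required": ["customerName", "invoiceDate"]
}
```

Fields omitted from required may be left out of the response entirely.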

Choosing the right provider

Each entry lists a selection criterion, the recommended provider, and why:

  • Best general-purpose reasoning — OpenAI (gpt-4o) or Anthropic (claude-sonnet-4-6): strong at complex analysis, extraction, and generation

  • Cost-sensitive batch processing — DeepSeek (deepseek-chat) or Gemini (gemini-2.0-flash): DeepSeek V3 is 10-70x cheaper than GPT-4o with comparable quality

  • Data sovereignty / on-premises — Ollama (local): data never leaves your infrastructure

  • Enterprise compliance (Azure AD) — Azure AI Foundry: integrates with existing Azure security policies

  • Fast classification and routing — Groq (llama-3.3-70b-versatile) or Gemini (gemini-2.0-flash): Groq delivers ~500 tokens/sec, ideal for real-time classification

  • Document understanding with images — OpenAI (gpt-4o) or Gemini (gemini-1.5-pro): native multimodal support

  • Complex legal and compliance analysis — Anthropic (claude-sonnet-4-6): excellent instruction following and nuanced reasoning

  • EU data residency — Mistral AI: European hosting, GDPR-compliant infrastructure

  • Multi-language content — OpenAI (gpt-4o) or Anthropic (claude-sonnet-4-6): best multilingual capabilities

  • Auditable AI decisions with reasoning — DeepSeek (deepseek-reasoner): chain-of-thought reasoning with visible thinking process

  • Ultra-low latency inference — Groq (llama-3.1-8b-instant): sub-200ms responses for real-time user-facing features

  • RAG with citations and grounding — Cohere (command-r-plus): grounded answers with automatic source citations

Model comparison

Each entry lists the provider's default model, context window, strengths, and cost tier:

  • OpenAI — gpt-4o; 128K-token context; versatile, multimodal, strong coding; medium cost

  • Anthropic — claude-sonnet-4-6; 200K-token context; instruction following, long documents, safety; medium cost

  • Google Gemini — gemini-2.0-flash; 1M-token context; speed, large context, multimodal; low cost

  • Mistral AI — pixtral-12b-2409; 128K-token context; European hosting, efficiency, open-weights option; low-medium cost

  • Azure AI Foundry — (deployment-based); 128K-token context; enterprise compliance, Azure AD integration; medium-high cost

  • Ollama — llama3.1; 128K-token context; free, on-premises, full data control; free (hardware cost)

  • DeepSeek — deepseek-chat; 64K-token context; cost-effective, chain-of-thought reasoning (R1); very low cost

  • Groq — llama-3.3-70b-versatile; 128K-token context; ultra-fast inference (~500 tok/sec), LPU hardware; low cost (free tier available)

  • Cohere — command-r-plus; 128K-token context; RAG with citations, multilingual, enterprise-grade; medium cost