
AI Integration

The wiring that makes the AI useful.

AI integration services for Australian businesses. We connect Claude, GPT-4, and open-source LLMs to your CRM, ERP, helpdesk, document store, and internal APIs — REST, GraphQL, database, MCP, webhook, message queue, and RPA when there is no other surface. The integration layer you own outright, no per-execution fees.

61%

Of leaders rank integration as their top blocker

600+

Native N8N integrations

MCP

Tool discovery at runtime

2 wk

First integration live

Why integration is the bottleneck, not the model

Every AI proof-of-concept demo looks magical. The model summarises a document, drafts an email, answers a question. Then the team tries to put it into production and discovers the model needs to read from the CRM, write to the helpdesk, look up something in the ERP, post to Slack, and respect the per-user permissions of whoever asked. Suddenly the project is an integration project with an AI step in the middle — and the integration is the hard part.

McKinsey's 2024 State of AI report measured the bottleneck directly: "61% of leaders rank integration with existing systems as their top blocker to enterprise AI value capture, ahead of model capability and ahead of skills" (McKinsey & Company, The State of AI in Early 2024). The model has been the easy bit since GPT-4. The wiring is what we ship.

Integration patterns we ship most often

  1. CRM read + write. Salesforce, HubSpot, Pipedrive, Zoho, monday.com — AI reads opportunity history, drafts updates, writes notes back with full audit trail.
  2. ERP integration. NetSuite, SAP, MYOB Acumatica, Xero, MYOB AccountRight, QuickBooks — AI reads invoices, drafts journal entries, looks up the product master, posts approved transactions.
  3. Helpdesk integration. Zendesk, Freshdesk, Intercom, HubSpot Service, Help Scout, Front — AI drafts ticket responses inside the helpdesk, escalates with full context.
  4. Document store retrieval. Google Drive, SharePoint, Notion, Confluence, Dropbox, Box — AI retrieves relevant chunks at query time with inline citations.
  5. Communication channels. Slack, Microsoft Teams, Telegram, WhatsApp Business, Outlook — AI sends messages, reads channels, drafts emails, schedules meetings.
  6. MCP toolboxes. Custom MCP servers for client-specific systems so the AI can discover and call tools dynamically without prompt-time hard-coding.

The non-negotiables of every integration

Per-user permissions enforced at query time

We never use a service account that bypasses permissions. The AI can only access data the asking user can see, enforced at the integration layer.

Idempotent write actions

Every write call uses an idempotency key. Retries are safe. The AI cannot accidentally double-create a record by re-running a workflow.
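A minimal sketch of the idempotency pattern, with hypothetical names and an in-memory dict standing in for a persistent idempotency store: the key is derived deterministically from the action and payload, so a retried call replays the original result instead of creating a second record.

```python
import hashlib
import json

# In-memory stand-in for a persistent idempotency store (e.g. a DB table).
_seen: dict[str, dict] = {}

def idempotency_key(action: str, payload: dict) -> str:
    """Deterministic key: the same action + payload always hashes the same."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(f"{action}:{canonical}".encode()).hexdigest()

def create_record(action: str, payload: dict) -> dict:
    """Safe to retry: a duplicate call returns the original result."""
    key = idempotency_key(action, payload)
    if key in _seen:
        return _seen[key]                        # replay, no second record
    result = {"id": len(_seen) + 1, **payload}   # pretend write to the vendor
    _seen[key] = result
    return result
```

In production the store is a database table with a unique constraint on the key, and many vendor APIs (e.g. via an `Idempotency-Key` header) enforce the same guarantee server-side.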

Correlation IDs everywhere

Every AI request gets a correlation ID that flows through every downstream call. End-to-end trace for any user query in seconds.

Transient vs permanent failure handling

5xx, timeout, rate-limit → exponential backoff with jitter. 404, 403, validation → quarantine to poison-message table with alert. Never blindly retry permanent failures.
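The split above can be sketched as follows (status sets and names are illustrative): transient statuses retry with exponential backoff and full jitter, permanent statuses go straight to a poison-message store and raise.

```python
import random
import time

class PermanentFailure(Exception):
    pass

class RetriesExhausted(Exception):
    pass

TRANSIENT = {408, 429, 500, 502, 503, 504}   # retry with backoff
PERMANENT = {400, 401, 403, 404, 422}        # quarantine, never retry

poison_table: list[dict] = []   # stand-in for a poison-message table

def call_with_retry(call, max_attempts: int = 7, base: float = 0.5):
    """call() returns (status, body). Retries transient failures with
    exponential backoff and full jitter; quarantines permanent ones."""
    for attempt in range(max_attempts):
        status, body = call()
        if status < 400:
            return body
        if status in PERMANENT:
            poison_table.append({"status": status, "body": body})
            raise PermanentFailure(f"HTTP {status} quarantined")
        # full jitter: random delay up to the exponential cap
        time.sleep(random.uniform(0, base * 2 ** attempt))
    raise RetriesExhausted(f"gave up after {max_attempts} attempts")
```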

Rate-limit headroom monitoring

Vendor APIs all have quotas. We monitor headroom continuously and surface it before the AI starts getting throttled.

Vendor API change resilience

Every adapter has a contract test that runs daily against the live API. Schema changes detected before they break production.
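The core of such a contract test, as a sketch with hypothetical field names: fetch one live sample record and assert it still carries the fields and types the adapter depends on, so a vendor schema change shows up as a failing daily check rather than a production incident.

```python
# Fields this (hypothetical) contact adapter depends on, with expected types.
EXPECTED_CONTACT_FIELDS = {"id": str, "email": str, "updated_at": str}

def contract_violations(sample: dict, expected=EXPECTED_CONTACT_FIELDS) -> list[str]:
    """Compare one live sample record against the adapter's expectations."""
    problems = []
    for field, expected_type in expected.items():
        if field not in sample:
            problems.append(f"missing field: {field}")
        elif not isinstance(sample[field], expected_type):
            problems.append(f"{field} changed type to {type(sample[field]).__name__}")
    return problems
```

A scheduler runs this daily against the live (or sandbox) API and alerts on any non-empty result.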

Engagement timeline

  1. Week 1 — Inventory + pattern selection. Map every system in scope. Choose the integration pattern per system. Identify access blockers (credentials, sandbox environments, vendor support tickets) and start unblocking immediately.
  2. Week 2 — First integration live. Build the adapter for the highest-priority system. Wire authentication. Test against sandbox. Deploy to production behind a feature flag.
  3. Weeks 3–6 — Programme delivery. Two systems integrated per fortnight on average. Adapters added, AI tools exposed, per-system testing.
  4. Week 7 — End-to-end testing. Cross-system workflows tested end-to-end. Per-user permission enforcement verified on edge cases. Load tested if relevant.
  5. Week 8+ — Hand-off + maintenance. Documentation, runbooks, monitoring dashboards. We exit if you want, or stay on a managed retainer for adapter updates as vendors change their APIs.

Pricing

Single-system integration (one source or destination): $3,500–$9,000 AUD
Multi-system programme (5–10 systems): $20,000–$80,000 AUD
Ongoing infrastructure (hosting + monitoring): $100–$800 AUD/mo
Managed services (breakage triage + adapter updates): $1,500 AUD/mo

All prices ex GST. No per-call or per-execution fees. You pay only for underlying LLM tokens consumed and own the integration layer outright.

Who this is for

AI integration services deliver the strongest ROI when (a) you already know what AI capability you want to ship (assistant, workflow, customer service, document processing) and (b) the bottleneck is connecting it to your existing stack. Typical fit: 50–500 person Australian businesses already running multi-system stacks (CRM + ERP + helpdesk + document store) with a clear use case waiting on the integration.

Poor fit: businesses still in the AI exploration phase with no specific use case in mind (you do not need integration yet — you need scoping), or single-system businesses where Zapier or built-in integrations are already enough.

Frequently Asked Questions

What does "AI integration services" actually mean?

AI integration services means connecting large language models to the rest of your business technology — your CRM, your ERP, your helpdesk, your document store, your internal APIs, your databases — so the AI can read context, take actions, and write back results. The model on its own knows nothing about your business; integration is what makes it useful. According to McKinsey's 2024 State of AI report, "the most-cited barrier to enterprise AI value capture is not model capability but integration with existing systems — 61% of leaders rank it as their top blocker" (McKinsey & Company, The State of AI in Early 2024). Solving that blocker is the entire point of this service.

Which integration protocols and patterns do you support?

All of: standard REST APIs (the default for 90% of integrations), GraphQL, gRPC, SOAP (when we have to), webhooks for event-driven flows, message queues (RabbitMQ, SQS, Kafka), database direct (PostgreSQL, MySQL, SQL Server, MongoDB), and the newer Anthropic Model Context Protocol (MCP) for tool-use that the AI can discover and call dynamically. We also build custom adapters for proprietary protocols and legacy systems where the only integration surface is screen-scraping or RPA. The specific pattern is chosen per system — REST first, message queue when async is required, MCP when the AI needs to discover capabilities at runtime.

What is MCP and why does it matter for AI integration?

Model Context Protocol (MCP) is an open protocol Anthropic published in late 2024 that lets an LLM dynamically discover and call tools at runtime. Instead of hard-coding every API call into the prompt, you expose your business systems as MCP servers and the AI decides at conversation time which tools to call and in which order. The advantage for integration work is that you can add a new system to the AI's toolbox by deploying an MCP server, with no change to the AI's code or prompt. We are currently shipping MCP integrations for Salesforce, HubSpot, Notion, Linear, Slack, and most major databases — and building custom MCP servers for client-specific systems.
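The discovery idea can be illustrated with a toy registry. This is not the real MCP SDK (real servers speak JSON-RPC via Anthropic's SDKs); it only shows the shape of the pattern: tools register themselves with a name and description, and the model lists what is available at runtime instead of having every call hard-coded.

```python
# Toy tool registry — illustrative only, not the MCP SDK.
TOOLS: dict[str, dict] = {}

def tool(name: str, description: str):
    """Decorator that registers a function as a discoverable tool."""
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("crm_lookup", "Fetch an opportunity summary by ID from the CRM")
def crm_lookup(opportunity_id: str) -> dict:
    return {"id": opportunity_id, "stage": "negotiation"}   # stubbed data

def list_tools() -> list[dict]:
    """The discovery step: what the model sees before choosing what to call."""
    return [{"name": n, "description": t["description"]} for n, t in TOOLS.items()]
```

Adding a new system is then just another registration (another MCP server, in the real protocol), with no change to the model's prompt.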

How do you handle authentication and per-user permissions?

Every integration uses your existing identity provider — Microsoft Entra ID (formerly Azure AD), Google Workspace, Okta, or Auth0 — for SSO. Per-user permissions are enforced at query time, not just at integration setup: when user A asks the AI a question, the AI can only access data user A already has permission to see in the underlying system. We never use a service account that bypasses permissions. For B2C surfaces (customer chat) we use scoped tokens tied to the customer's session. For agent-to-agent integrations we use OAuth client credentials or signed JWTs depending on the surface.
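A minimal sketch of query-time enforcement, with hypothetical records: the permission filter runs per request using the asking user's identity, and the model is only ever handed the pre-filtered slice.

```python
# Hypothetical source-system records with per-row ownership.
RECORDS = [
    {"id": 1, "owner": "alice", "body": "Q3 pipeline forecast"},
    {"id": 2, "owner": "bob",   "body": "salary review notes"},
]

def visible_to(user: str) -> list[dict]:
    """Only rows this user could already see in the source system."""
    return [r for r in RECORDS if r["owner"] == user]

def build_ai_context(user: str) -> list[str]:
    # The filter runs at query time, per request — the model never sees
    # anything outside this slice.
    return [r["body"] for r in visible_to(user)]
```

In a real deployment the filter is the source system's own ACL check, invoked with the user's delegated token rather than replicated in our code.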

How do you handle errors when an integration call fails?

Three patterns. First, transient failures (5xx, timeout, rate limit) automatically retry with exponential backoff and full jitter — typically up to seven attempts. Second, permanent failures (404, 403, validation error) are immediately quarantined to a poison-message table and surfaced via a Telegram or Slack alert; we never blindly retry permanent failures. Third, every integration call is logged with a correlation ID so we can trace a single user request through every downstream system. The result is integrations that degrade gracefully under partial outage rather than fail catastrophically.

What if our internal system has no API at all?

Three options in order of preference. First, if there is a database, we read directly from it (with appropriate read-replica setup to protect production load). Second, if there is no API but there is a file export (CSV / XML / Excel), we add a scheduled ingestion pipeline. Third, if there is genuinely no integration surface, we build an RPA bridge using Playwright or a headless browser to drive the system's UI. The RPA approach is brittle and we recommend it only as a last resort while you advocate internally for a real API.
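The second option (file export) is often a few lines of scheduled ingestion. A sketch with hypothetical column names: parse the nightly CSV export into rows the retrieval layer can index.

```python
import csv
import io

def ingest_export(raw_csv: str) -> list[dict]:
    """Parse one scheduled CSV export into indexable row dicts."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    return [dict(row) for row in reader]
```

The same function slots behind a cron job or an N8N schedule trigger, reading the exported file instead of a string.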

How long does an AI integration project take?

A single-system integration (one CRM, one helpdesk, one ERP) typically takes one to three weeks depending on complexity. A whole-stack integration covering five to ten systems typically runs six to twelve weeks. We work in shippable two-week increments — you have working AI-to-system integration within the first month, regardless of programme size. The slowest projects are bottlenecked on access (credentials, sandbox environments, vendor support tickets) rather than on the integration code itself.

What does AI integration cost in Australia?

A single-system integration (one source or destination) typically costs $3,500 — $9,000 AUD ex GST as a one-off project. A multi-system integration programme (five to ten systems) typically ranges from $20,000 to $80,000 AUD. Ongoing infrastructure (hosting the integration layer, monitoring, alerting) sits at $100 — $800 AUD per month. Optional managed-services tier at $1,500 AUD per month for monitoring, breakage triage, and adapter updates as vendor APIs change. No per-call fees — you pay only for the underlying LLM tokens consumed.

Why hire Iverel rather than use Zapier, Make.com, or built-in integrations?

No-code platforms (Zapier, Make.com, Workato) are excellent for simple A-to-B data flows between mainstream SaaS products. They struggle when you need (a) AI in the middle of the flow making decisions, (b) integration with non-mainstream or internal systems, (c) per-user permission enforcement, (d) volume above a few thousand executions per month (per-task pricing becomes prohibitive), or (e) audit-grade logging and rollback. Built-in integrations from each SaaS vendor work but lock you into that vendor's ecosystem. We build integrations on N8N (self-hosted, portable JSON) and custom code where required — no per-execution fees, full control of the routing logic, and you own the integration layer outright.

Tell us your stack, we'll scope the integration

Book a free 30-minute scoping call. List the systems the AI needs to read from and write to. We'll identify the integration pattern per system, flag any access blockers, and give you a written cost and timeline before you commit to anything.

Book a Free Scoping Call →