LizziAI

What AI models does LizziAI use under the hood?

LizziAI is a multi-model orchestration layer that routes each task to the best-suited model from OpenAI, Anthropic, and other providers. It is not a wrapper around a single LLM; it picks the right model for the job and learns which model performs best for your specific use cases.

Models currently in the rotation

  • Anthropic Claude: long-context reasoning, nuanced client communications, complex summarization
  • OpenAI GPT: structured output, function calling, code generation
  • Specialized embedding models: semantic search across your knowledge base and conversation history
  • Whisper-class speech-to-text: meeting and call transcription
  • Vision models: image analysis for screenshots, attachments, and brand asset processing

Why a multi-model approach

  1. Best tool for the job: no single model wins on every task. Claude excels at long-context drafting; GPT excels at structured outputs. We route accordingly.
  2. Resilience: if one provider has an outage, traffic fails over to alternatives.
  3. Cost efficiency: cheap, fast models handle routine tasks; expensive frontier models are reserved for high-value reasoning.
  4. Future-proofing: when a new model class ships (multimodal agents, longer context, etc.), we can adopt it without you rewriting anything.
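The failover behavior in point 2 can be sketched in a few lines. This is a hypothetical illustration, not LizziAI's implementation: the provider names and the `ProviderUnavailable` exception are made up, and the stub callables stand in for real API clients.

```python
class ProviderUnavailable(Exception):
    """Raised by a provider client when its API is unreachable."""


def call_with_failover(prompt, providers):
    """Try each provider in preference order; fall through on outages."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except ProviderUnavailable as exc:
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")


# Stubs simulating one provider being down and one healthy:
def anthropic_call(prompt):
    raise ProviderUnavailable("simulated outage")


def openai_call(prompt):
    return f"response to: {prompt}"


name, resp = call_with_failover(
    "draft a reply", [("anthropic", anthropic_call), ("openai", openai_call)]
)
print(name)  # "openai" — the second provider served the request
```

In practice the preference order itself would come from the router described below, so failover and cost-based routing share one candidate list.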

How model selection works

LizziAI uses an internal router that considers task type, content length, latency requirements, and prior performance on similar tasks within your tenant. Over time, the router learns which models perform best for your specific business; a law firm and a fitness studio will end up with different preferred models for client communications because the optimal model genuinely differs.
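The routing logic described above can be sketched as a simple scoring function. Everything here is an assumption for illustration: the weights, model names, context limits, and latency figures are invented, and LizziAI's actual router is internal and more sophisticated. The `history` dict stands in for the learned per-tenant performance record.

```python
def route(task_type, content_tokens, latency_budget_ms, history):
    """Pick a model by combining static task fit with learned per-tenant scores."""
    candidates = {
        "claude": {"fit": {"drafting": 0.9, "structured": 0.6}, "max_ctx": 200_000, "latency_ms": 900},
        "gpt": {"fit": {"drafting": 0.7, "structured": 0.9}, "max_ctx": 128_000, "latency_ms": 600},
    }
    best, best_score = None, float("-inf")
    for model, spec in candidates.items():
        # Hard constraints first: context window and latency budget.
        if content_tokens > spec["max_ctx"] or spec["latency_ms"] > latency_budget_ms:
            continue
        # Learned component: rolling success rate on similar past tasks
        # within this tenant (defaults to a neutral prior of 0.5).
        learned = history.get((model, task_type), 0.5)
        score = 0.6 * spec["fit"].get(task_type, 0.3) + 0.4 * learned
        if score > best_score:
            best, best_score = model, score
    return best


# A tenant whose history favors GPT for structured-output tasks:
history = {("gpt", "structured"): 0.92, ("claude", "structured"): 0.55}
print(route("structured", 4_000, 1_000, history))  # "gpt"
```

The learned term is what makes two tenants diverge: with an empty history both fall back to the static fit scores, but as per-tenant outcomes accumulate, the same task type can route to different models for different businesses.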

Privacy and data handling

We use enterprise API tiers with all providers, which means your data is never used to train their public models. Conversations are isolated to your tenant. We do not log prompt content beyond what is needed for your own audit trail and improvement of your LizziAI instance.

What about open-source or self-hosted models?

For Enterprise+ customers with regulatory requirements, we can deploy with self-hosted open-source models (Llama-class, Mistral) for tasks where data residency is non-negotiable. Talk to us if your compliance team needs that conversation.

Last updated April 21, 2026

Ready to see LizziAI in action?

Request access and we’ll walk you through how the platform solves your specific workflow.

Request Access →