Altar.AI

Unified AI adapter foundation for Elixir - Protocol-based abstractions for multiple AI providers

Features

  • Protocol-Based Architecture - Uses protocols instead of behaviours for maximum flexibility
  • Runtime Capability Detection - Introspect what each adapter supports at runtime
  • Composite Adapters - Automatic fallback chains across multiple providers
  • Framework Agnostic - No dependencies on FlowStone, Synapse, or other frameworks
  • Unified Telemetry - Standard telemetry events for monitoring and debugging
  • Comprehensive Testing - Mock adapters and test utilities included

Supported Providers

  • Gemini - Google Gemini AI (via gemini_ex)
  • Claude - Anthropic Claude (via claude_agent_sdk)
  • Codex - OpenAI models (via codex_sdk)
  • OpenAI - OpenAI chat + embeddings (via openai_ex)
  • Fallback - Heuristic fallback (no external API required)
  • Mock - Configurable mock for testing

All SDK dependencies are optional - Altar.AI works with whatever you have installed.

Installation

Add altar_ai to your list of dependencies in mix.exs:

def deps do
  [
    {:altar_ai, "~> 0.1.0"},
    # Optional: Add the AI SDKs you want to use
    # {:gemini_ex, "~> 0.1.0"},
    # {:claude_agent_sdk, "~> 0.1.0"},
    # {:codex_sdk, "~> 0.1.0"},
    # {:openai_ex, "~> 0.9.18"}
  ]
end

Quick Start

Basic Usage

# Create an adapter
adapter = Altar.AI.Adapters.Gemini.new(api_key: "your-api-key")

# Generate text
{:ok, response} = Altar.AI.generate(adapter, "Explain Elixir protocols")
IO.puts(response.content)

# Check what the adapter can do
Altar.AI.capabilities(adapter)
#=> %{generate: true, stream: true, embed: true, batch_embed: true, ...}
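
Streaming is reported as a capability above; the sketch below shows one way it might be consumed. Both the Altar.AI.stream/2 call and the chunk shape are assumptions made for illustration, not confirmed API - check the module documentation for the actual streaming interface.

# Streaming sketch: Altar.AI.stream/2 and the chunk fields are assumed here,
# not confirmed API; consult the docs for the real streaming call.
{:ok, stream} = Altar.AI.stream(adapter, "Explain Elixir protocols")
Enum.each(stream, fn chunk -> IO.write(chunk.content) end)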

Examples

See examples/basic_generation.exs for a runnable script that exercises generation, embeddings, classification, and streaming using the Mock adapter.

Composite Adapters with Fallbacks

# Create a composite that tries multiple providers
composite = Altar.AI.Adapters.Composite.new([
  Altar.AI.Adapters.Gemini.new(),
  Altar.AI.Adapters.Claude.new(),
  Altar.AI.Adapters.Fallback.new()  # Always succeeds
])

# Or use the default chain (auto-detects available SDKs)
composite = Altar.AI.Adapters.Composite.default()

# Now generate with automatic fallback
{:ok, response} = Altar.AI.generate(composite, "Hello, world!")

Embeddings

adapter = Altar.AI.Adapters.Gemini.new()

# Single embedding
{:ok, vector} = Altar.AI.embed(adapter, "semantic search query")
length(vector)  #=> 768 (or model-specific dimension)

# Batch embeddings
{:ok, vectors} = Altar.AI.batch_embed(adapter, ["query 1", "query 2", "query 3"])

Classification

# Use fallback adapter for simple keyword-based classification
fallback = Altar.AI.Adapters.Fallback.new()

{:ok, classification} = Altar.AI.classify(
  fallback,
  "I love this product!",
  ["positive", "negative", "neutral"]
)

classification.label       #=> "positive"
classification.confidence  #=> 0.8
classification.all_scores  #=> %{"positive" => 0.8, "negative" => 0.2, "neutral" => 0.2}

Code Generation

adapter = Altar.AI.Adapters.Codex.new()

# Generate code
{:ok, code_result} = Altar.AI.generate_code(
  adapter,
  "Create a fibonacci function in Elixir",
  language: "elixir"
)

IO.puts(code_result.code)

# Explain code
{:ok, explanation} = Altar.AI.explain_code(
  adapter,
  "def fib(0), do: 0\ndef fib(1), do: 1\ndef fib(n), do: fib(n-1) + fib(n-2)"
)

IO.puts(explanation)

Architecture

Altar.AI uses protocols instead of behaviours, providing several advantages:

  1. Runtime Dispatch - Protocols dispatch on adapter structs, allowing cleaner composite implementations
  2. Capability Detection - Easy runtime introspection of what each adapter supports
  3. Flexibility - Adapters only implement the protocols they support (see the custom adapter sketch below)

Core Protocols

  • Altar.AI.Generator - Text generation and streaming
  • Altar.AI.Embedder - Vector embeddings
  • Altar.AI.Classifier - Text classification
  • Altar.AI.CodeGenerator - Code generation and explanation
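
Adding a provider means implementing whichever of these protocols it supports. Below is a minimal sketch of a custom adapter implementing Altar.AI.Generator; the generate/3 callback shape and the Altar.AI.Response fields are assumptions based on the examples in this README, and the real protocol may require additional callbacks (for example, streaming).

defmodule MyApp.Adapters.Echo do
  # Hypothetical adapter struct; the fields are illustrative.
  defstruct model: "echo-1"

  def new(opts \\ []), do: struct(__MODULE__, opts)
end

defimpl Altar.AI.Generator, for: MyApp.Adapters.Echo do
  # Assumed callback shape: generate(adapter, prompt, opts) -> {:ok, %Altar.AI.Response{}} | {:error, term}
  def generate(adapter, prompt, _opts) do
    {:ok, %Altar.AI.Response{content: "Echo: #{prompt}", provider: :echo, model: adapter.model}}
  end
end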

Capability Detection

adapter = Altar.AI.Adapters.Gemini.new()

# Check specific capability
Altar.AI.supports?(adapter, :embed)  #=> true
Altar.AI.supports?(adapter, :classify)  #=> false

# Get all capabilities
Altar.AI.capabilities(adapter)
#=> %{
#=>   generate: true,
#=>   stream: true,
#=>   embed: true,
#=>   batch_embed: true,
#=>   classify: false,
#=>   generate_code: false,
#=>   explain_code: false
#=> }

# Human-readable description
Altar.AI.Capabilities.describe(adapter)
#=> "Gemini: text generation, streaming, embeddings, batch embeddings"

Testing

Altar.AI provides a Mock adapter for testing:

# Create a mock adapter
mock = Altar.AI.Adapters.Mock.new()

# Configure responses
mock = Altar.AI.Adapters.Mock.with_response(
  mock,
  :generate,
  {:ok, %Altar.AI.Response{content: "Test response", provider: :mock, model: "test"}}
)

# Use in tests
{:ok, response} = Altar.AI.generate(mock, "any prompt")
assert response.content == "Test response"

# Or use custom functions
mock = Altar.AI.Adapters.Mock.with_response(
  mock,
  :generate,
  fn prompt -> {:ok, %Altar.AI.Response{content: "Echo: #{prompt}"}} end
)

Telemetry

All operations emit telemetry events under [:altar, :ai]:

:telemetry.attach(
  "my-handler",
  [:altar, :ai, :generate, :stop],
  fn event, measurements, metadata, _config ->
    IO.inspect({event, measurements, metadata})
  end,
  nil
)

# Events:
# [:altar, :ai, :generate, :start]
# [:altar, :ai, :generate, :stop]
# [:altar, :ai, :generate, :exception]
# [:altar, :ai, :embed, :start]
# [:altar, :ai, :embed, :stop]
# ... and more
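
To watch several of these events with one handler, the standard :telemetry.attach_many/4 works as well (a sketch; pick whichever events you care about):

:telemetry.attach_many(
  "altar-ai-observer",
  [
    [:altar, :ai, :generate, :stop],
    [:altar, :ai, :generate, :exception],
    [:altar, :ai, :embed, :stop]
  ],
  fn event, measurements, metadata, _config ->
    IO.inspect({event, measurements, metadata})
  end,
  nil
)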

Hexagonal Architecture

Altar.AI follows the Hexagonal (Ports & Adapters) architecture:

  • Ports - Protocols define the interface (Generator, Embedder, etc.)
  • Adapters - Concrete implementations for each provider (Gemini, Claude, Codex)
  • Core - Framework-agnostic types and logic

This makes it easy to:

  • Swap providers without changing application code (as shown in the sketch below)
  • Add new providers by implementing protocols
  • Test with mock adapters
  • Build composite adapters with fallback chains
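
For example, provider swapping can be reduced to a configuration change. A sketch, assuming an application-level config key; the :my_app name, the :ai_adapter key, and the wrapper module are illustrative, not part of Altar.AI:

# In config/runtime.exs (illustrative):
#   import Config
#   config :my_app, :ai_adapter, Altar.AI.Adapters.Composite.default()

defmodule MyApp.AI do
  # Resolve the adapter from application config so providers can be swapped
  # without touching calling code.
  def adapter do
    Application.get_env(:my_app, :ai_adapter) || Altar.AI.Adapters.Fallback.new()
  end

  def generate(prompt), do: Altar.AI.generate(adapter(), prompt)
end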

License

MIT License - see LICENSE for details

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Acknowledgments

  • Inspired by the adapter pattern in Ecto and other Elixir libraries
  • Built for use with FlowStone and Synapse
