Unified AI adapter foundation for Elixir - Protocol-based abstractions for multiple AI providers
- Protocol-Based Architecture - Uses protocols instead of behaviours for maximum flexibility
- Runtime Capability Detection - Introspect what each adapter supports at runtime
- Composite Adapters - Automatic fallback chains across multiple providers
- Framework Agnostic - No dependencies on FlowStone, Synapse, or other frameworks
- Unified Telemetry - Standard telemetry events for monitoring and debugging
- Comprehensive Testing - Mock adapters and test utilities included
Supported providers:

- Gemini - Google Gemini AI (via `gemini_ex`)
- Claude - Anthropic Claude (via `claude_agent_sdk`)
- Codex - OpenAI models (via `codex_sdk`)
- OpenAI - OpenAI chat + embeddings (via `openai_ex`)
- Fallback - Heuristic fallback (no external API required)
- Mock - Configurable mock for testing
All SDK dependencies are optional; Altar.AI works with whichever SDKs you have installed.
Add `altar_ai` to your list of dependencies in `mix.exs`:
```elixir
def deps do
  [
    {:altar_ai, "~> 0.1.0"},
    # Optional: Add the AI SDKs you want to use
    # {:gemini_ex, "~> 0.1.0"},
    # {:claude_agent_sdk, "~> 0.1.0"},
    # {:codex_sdk, "~> 0.1.0"},
    # {:openai_ex, "~> 0.9.18"}
  ]
end
```
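Because the SDKs are optional, you can guard adapter construction on whether an SDK is compiled in. A sketch (treating `Gemini` as gemini_ex's top-level module is an assumption; adjust to the SDK you actually installed):

```elixir
# Fall back to the heuristic adapter when the optional SDK is absent.
adapter =
  if Code.ensure_loaded?(Gemini) do
    Altar.AI.Adapters.Gemini.new()
  else
    Altar.AI.Adapters.Fallback.new()
  end
```

In practice, `Altar.AI.Adapters.Composite.default()` (shown below) performs this kind of detection across the whole chain for you.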
Basic usage:

```elixir
# Create an adapter
adapter = Altar.AI.Adapters.Gemini.new(api_key: "your-api-key")
# Generate text
{:ok, response} = Altar.AI.generate(adapter, "Explain Elixir protocols")
IO.puts(response.content)
# Check what the adapter can do
Altar.AI.capabilities(adapter)
#=> %{generate: true, stream: true, embed: true, batch_embed: true, ...}
```

See `examples/basic_generation.exs` for a runnable script that exercises generation, embeddings, classification, and streaming using the Mock adapter.
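Streaming is part of the Generator protocol, but its exact entry point isn't shown here, so the following is a hypothetical sketch: it assumes an `Altar.AI.stream/2` that returns an enumerable of chunks with a `content` field (verify against the protocol docs):

```elixir
# Hypothetical: Altar.AI.stream/2 and chunk.content are assumed shapes,
# not confirmed API. Prints chunks as they arrive.
{:ok, chunks} = Altar.AI.stream(adapter, "Explain Elixir protocols")

chunks
|> Stream.each(fn chunk -> IO.write(chunk.content) end)
|> Stream.run()
```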
Composite adapters provide automatic fallback across providers:

```elixir
# Create a composite that tries multiple providers
composite = Altar.AI.Adapters.Composite.new([
  Altar.AI.Adapters.Gemini.new(),
  Altar.AI.Adapters.Claude.new(),
  Altar.AI.Adapters.Fallback.new()  # Always succeeds
])

# Or use the default chain (auto-detects available SDKs)
composite = Altar.AI.Adapters.Composite.default()

# Now generate with automatic fallback
{:ok, response} = Altar.AI.generate(composite, "Hello, world!")
```

Embeddings work through the same adapter interface:

```elixir
adapter = Altar.AI.Adapters.Gemini.new()
# Single embedding
{:ok, vector} = Altar.AI.embed(adapter, "semantic search query")
length(vector) #=> 768 (or model-specific dimension)

# Batch embeddings
{:ok, vectors} = Altar.AI.batch_embed(adapter, ["query 1", "query 2", "query 3"])
```
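Because embedding vectors are plain lists of floats, similarity scoring needs no extra dependencies. A minimal cosine-similarity sketch, reusing the adapter bound above (the `Similarity` module name is illustrative):

```elixir
# Cosine similarity between two embedding vectors (plain Elixir).
defmodule Similarity do
  def cosine(a, b) do
    dot = Enum.zip(a, b) |> Enum.map(fn {x, y} -> x * y end) |> Enum.sum()
    norm = fn v -> :math.sqrt(Enum.reduce(v, 0.0, fn x, acc -> acc + x * x end)) end
    dot / (norm.(a) * norm.(b))
  end
end

{:ok, query} = Altar.AI.embed(adapter, "how do I deploy an Elixir app?")
{:ok, doc} = Altar.AI.embed(adapter, "Deploying Elixir releases to production")
Similarity.cosine(query, doc) #=> closer to 1.0 means more semantically similar
```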
The Fallback adapter provides simple keyword-based classification with no external API:

```elixir
# Use fallback adapter for simple keyword-based classification
fallback = Altar.AI.Adapters.Fallback.new()

{:ok, classification} = Altar.AI.classify(
  fallback,
  "I love this product!",
  ["positive", "negative", "neutral"]
)

classification.label       #=> "positive"
classification.confidence  #=> 0.8
classification.all_scores  #=> %{"positive" => 0.8, "negative" => 0.2, "neutral" => 0.2}
```

Code generation and explanation go through the CodeGenerator protocol:

```elixir
adapter = Altar.AI.Adapters.Codex.new()
# Generate code
{:ok, code_result} = Altar.AI.generate_code(
  adapter,
  "Create a fibonacci function in Elixir",
  language: "elixir"
)
IO.puts(code_result.code)

# Explain code
{:ok, explanation} = Altar.AI.explain_code(
  adapter,
  "def fib(0), do: 0\ndef fib(1), do: 1\ndef fib(n), do: fib(n-1) + fib(n-2)"
)
IO.puts(explanation)
```

Altar.AI uses protocols instead of behaviours, providing several advantages:
- Runtime Dispatch - Protocols dispatch on adapter structs, allowing cleaner composite implementations
- Capability Detection - Easy runtime introspection of what each adapter supports
- Flexibility - Adapters only implement the protocols they support
The core protocols are:

- `Altar.AI.Generator` - Text generation and streaming
- `Altar.AI.Embedder` - Vector embeddings
- `Altar.AI.Classifier` - Text classification
- `Altar.AI.CodeGenerator` - Code generation and explanation
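To make the dispatch concrete, here is what a Generator-style port can look like as a plain Elixir protocol. This is a shape sketch, not the library's actual definition; names and arities may differ:

```elixir
# Shape sketch only: the real Altar.AI.Generator may differ.
# Dispatch happens on the adapter struct passed as the first argument.
defprotocol MyApp.AI.Generator do
  @doc "Generate a response for the given prompt"
  def generate(adapter, prompt, opts)
end
```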
Capabilities can be introspected at runtime:

```elixir
adapter = Altar.AI.Adapters.Gemini.new()
# Check specific capability
Altar.AI.supports?(adapter, :embed) #=> true
Altar.AI.supports?(adapter, :classify) #=> false
# Get all capabilities
Altar.AI.capabilities(adapter)
#=> %{
#=> generate: true,
#=> stream: true,
#=> embed: true,
#=> batch_embed: true,
#=> classify: false,
#=> generate_code: false,
#=> explain_code: false
#=> }
# Human-readable description
Altar.AI.Capabilities.describe(adapter)
#=> "Gemini: text generation, streaming, embeddings, batch embeddings"Altar.AI provides a Mock adapter for testing:
Altar.AI provides a Mock adapter for testing:

```elixir
# Create a mock adapter
mock = Altar.AI.Adapters.Mock.new()
# Configure responses
mock = Altar.AI.Adapters.Mock.with_response(
  mock,
  :generate,
  {:ok, %Altar.AI.Response{content: "Test response", provider: :mock, model: "test"}}
)
# Use in tests
{:ok, response} = Altar.AI.generate(mock, "any prompt")
assert response.content == "Test response"
# Or use custom functions
mock = Altar.AI.Adapters.Mock.with_response(
  mock,
  :generate,
  fn prompt -> {:ok, %Altar.AI.Response{content: "Echo: #{prompt}"}} end
)
```
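In an ExUnit test, configuration and assertion compose naturally (the test module here is illustrative; the Mock calls are the ones shown above):

```elixir
defmodule MyApp.AITest do
  use ExUnit.Case, async: true

  test "returns the configured mock response" do
    mock =
      Altar.AI.Adapters.Mock.new()
      |> Altar.AI.Adapters.Mock.with_response(
        :generate,
        {:ok, %Altar.AI.Response{content: "Test response", provider: :mock, model: "test"}}
      )

    assert {:ok, response} = Altar.AI.generate(mock, "any prompt")
    assert response.content == "Test response"
  end
end
```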
All operations emit telemetry events under `[:altar, :ai]`:

```elixir
:telemetry.attach(
  "my-handler",
  [:altar, :ai, :generate, :stop],
  fn event, measurements, metadata, _config ->
    IO.inspect({event, measurements, metadata})
  end,
  nil
)
# Events:
# [:altar, :ai, :generate, :start]
# [:altar, :ai, :generate, :stop]
# [:altar, :ai, :generate, :exception]
# [:altar, :ai, :embed, :start]
# [:altar, :ai, :embed, :stop]
# ... and more
```
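If these events follow the standard `:telemetry.span/3` convention, the `:stop` measurements carry a `:duration` in native time units. That is an assumption worth verifying against the emitted measurements, but if it holds, a latency logger is a few lines:

```elixir
# Assumes measurements include :duration (native units) and metadata
# includes :provider; both are unverified assumptions about this library.
:telemetry.attach(
  "log-generate-latency",
  [:altar, :ai, :generate, :stop],
  fn _event, %{duration: duration}, metadata, _config ->
    ms = System.convert_time_unit(duration, :native, :millisecond)
    IO.puts("generate via #{inspect(metadata[:provider])} took #{ms} ms")
  end,
  nil
)
```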
Altar.AI follows the Hexagonal (Ports & Adapters) architecture:

- Ports - Protocols define the interface (`Generator`, `Embedder`, etc.)
- Adapters - Concrete implementations for each provider (`Gemini`, `Claude`, `Codex`)
- Core - Framework-agnostic types and logic
This makes it easy to:
- Swap providers without changing application code
- Add new providers by implementing protocols (see the sketch below)
- Test with mock adapters
- Build composite adapters with fallback chains
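For instance, a new provider is just a struct plus protocol implementations. A hypothetical sketch (the `generate/3` shape is an assumption about the Generator protocol; implement whatever the real protocol requires):

```elixir
# Hypothetical local adapter that echoes prompts back; useful only
# as a shape for real provider adapters.
defmodule MyApp.Adapters.Echo do
  defstruct prefix: "echo"
  def new(opts \\ []), do: struct(__MODULE__, opts)
end

defimpl Altar.AI.Generator, for: MyApp.Adapters.Echo do
  # Function name/arity assumed; match the actual protocol definition.
  def generate(%{prefix: prefix}, prompt, _opts) do
    {:ok, %Altar.AI.Response{content: "#{prefix}: #{prompt}", provider: :echo, model: "none"}}
  end
end
```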
MIT License - see LICENSE for details
Contributions are welcome! Please feel free to submit a Pull Request.