Quick Start Guide¶
Get started with ullm in 5 minutes!
Installation¶
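Install ullm from PyPI (this matches the fix shown under Common Issues below; the `[aws]` extra is only needed for Bedrock):

```shell
pip install ullm

# Optional: AWS Bedrock support (installs boto3)
pip install "ullm[aws]"
```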
Set API Key¶
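ullm reads provider credentials from environment variables. For OpenAI that is `OPENAI_API_KEY` (as noted under Common Issues below):

```shell
export OPENAI_API_KEY=sk-...
```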
Your First Request¶
import ullm

response = ullm.completion(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
That's it! You've made your first LLM request with ullm.
Try Different Providers¶
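Switching providers is just a matter of changing the `provider/model` prefix in the model string. A minimal sketch of the convention — every model name below other than `openai/gpt-4o-mini` is an assumed example, not a verified identifier:

```python
# ullm selects the backend from the "provider/" prefix of the model string.
# Model names other than "openai/gpt-4o-mini" are illustrative assumptions.
models = [
    "openai/gpt-4o-mini",
    "anthropic/claude-3-5-sonnet-20240620",  # assumed name
    "bedrock/anthropic.claude-3-haiku",      # assumed name; needs ullm[aws]
]
for m in models:
    provider, name = m.split("/", 1)
    print(f"{provider} -> {name}")
```

Each provider expects its own credentials to be configured (e.g. `OPENAI_API_KEY` for OpenAI, boto3/AWS credentials for Bedrock — see Common Issues below).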
Add Streaming¶
for chunk in ullm.completion(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a poem"}],
    stream=True
):
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
Use Async¶
import asyncio

import ullm

async def main():
    response = await ullm.acompletion(
        model="openai/gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(response.choices[0].message.content)

asyncio.run(main())
Control Parameters¶
response = ullm.completion(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Be creative!"}],
    temperature=0.9,  # More random (0-2)
    max_tokens=500,   # Limit response length
    num_retries=3,    # Retry on failure
    timeout=60.0      # Timeout in seconds
)
Handle Errors¶
try:
    response = ullm.completion(
        model="openai/gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}]
    )
except ullm.AuthenticationError:
    print("Invalid API key")
except ullm.RateLimitError:
    print("Rate limit exceeded - retries exhausted")
except ullm.APIError as e:
    print(f"API error: {e}")
Next Steps¶
Now that you're up and running:
- Basic Usage - Learn the fundamentals
- User Guide - Explore all features
- API Reference - Detailed API docs
- Examples - See more examples
Common Issues¶
ModuleNotFoundError: No module named 'ullm'
Make sure ullm is installed: pip install ullm
AuthenticationError: Invalid API key
Set your API key: export OPENAI_API_KEY=sk-...
ImportError: boto3 not found (for Bedrock)
Install AWS extras: pip install ullm[aws]
Different results than litellm?
ullm is designed as a drop-in replacement but may have slight differences in token counting or streaming chunk formatting.