AuroraCore is the foundational library of AuroraToolkit, a suite of tools designed to simplify integrating AI features into your projects. The package provides robust support for AI-driven workflows, including task orchestration, workflow management, and seamless integration with large language models (LLMs) such as OpenAI, Anthropic, and Ollama. Its modular architecture lets developers customize, extend, and integrate with external services with minimal effort.
Whether you are building a complex AI-driven application or integrating modular components into an existing workflow, AuroraCore provides the tools and flexibility to bring your ideas to life.
The AuroraToolkit core packages are organized into several modules for flexibility and maintainability:
AuroraCore: The foundational library, providing the core framework for workflows, task orchestration, and utility functions.
AuroraLLM: A package dedicated to managing large language models (LLMs) and powering AI-driven workflows. It includes multi-model management, domain routing, and token handling.
AuroraTaskLibrary: A collection of pre-built tasks designed to jump-start development and integrate seamlessly with workflows. These tasks cover common AI and utility operations.
AuroraExamples: A standalone package showcasing real-world implementations of workflows, LLM integration, and tasks. The examples serve as a reference for best practices and as a quick-start guide.
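In code, each package is pulled in with its own import. Assuming each product exposes a module of the same name, that looks like:
import AuroraCore        // workflows, task orchestration, utilities
import AuroraLLM         // LLM management, domain routing, context handling
import AuroraTaskLibrary // pre-built tasks for common AI and utility operations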
To integrate AuroraCore into your project using Swift Package Manager, add the following line to your Package.swift file:
.package(url: "https://github.com/AuroraToolkit/AuroraCore.git", from: "0.1.0")
Then add the modules you need as dependencies of your target. For example:
.target(
    name: "YourTarget",
    dependencies: [
        .product(name: "AuroraCore", package: "AuroraCore"),
        .product(name: "AuroraLLM", package: "AuroraCore"),
        .product(name: "AuroraTaskLibrary", package: "AuroraCore")
    ]
),
You can include only the modules your project needs, keeping it lightweight and focused.
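Putting the two snippets together, a complete Package.swift might look roughly like the following. This is only a sketch: the package name, platforms, and tools version are placeholders to adapt to your project.
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "YourProject",                    // placeholder project name
    platforms: [.macOS(.v14), .iOS(.v17)],  // assumed platforms; match your own targets
    dependencies: [
        .package(url: "https://github.com/AuroraToolkit/AuroraCore.git", from: "0.1.0")
    ],
    targets: [
        .target(
            name: "YourTarget",
            dependencies: [
                .product(name: "AuroraCore", package: "AuroraCore"),
                .product(name: "AuroraLLM", package: "AuroraCore"),
                .product(name: "AuroraTaskLibrary", package: "AuroraCore")
            ]
        )
    ]
)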
// Example: manage conversation context with ContextController and summarize it.
import AuroraLLM

let contextController = ContextController(maxTokenLimit: 4096)
contextController.addItem(content: "This is a new item.")
let summary = contextController.summarizeContext()
// Example: declare a simple two-task workflow and run it.
import AuroraCore

let workflow = Workflow(name: "Example Workflow", description: "This is a sample workflow") {
    Workflow.Task(name: "Task_1", description: "This is the first task.")
    Workflow.Task(name: "Task_2", description: "This is the second task.") { inputs in
        // Perform some task-specific logic
        return ["result": "Task 2 completed."]
    }
}

await workflow.start()
print("Workflow completed. Result: \(workflow.outputs["Task_2.result"] as? String ?? "No result")")
// Example: register an LLM service and send a basic request through LLMManager.
import AuroraLLM

let llmManager = LLMManager()
llmManager.registerService(OllamaService(name: "Ollama"))

let request = LLMRequest(prompt: "Hello, World!")
llmManager.sendRequest(request) { response in
    print(response?.text ?? "No response")
}
// Example: route requests to different LLM services by domain, with a fallback service.
import AuroraLLM

let manager = LLMManager()
// Configure the Domain Routing Service (Ollama)
let router = LLMDomainRouter(
    name: "Domain Router",
    service: OllamaService(),
    supportedDomains: ["sports", "movies", "books"]
)
manager.registerDomainRouter(router)
// Configure the Sports Service (Anthropic)
let sportsService = AnthropicService(
    name: "SportsService",
    apiKey: "your-anthropic-api-key",
    maxOutputTokens: 256,
    systemPrompt: """
    You are a sports expert. Answer the following sports-related questions concisely and accurately.
    """
)
manager.registerService(sportsService, withRoutings: [.domain(["sports"])])
// Configure the Movies Service (OpenAI)
let moviesService = OpenAIService(
    name: "MoviesService",
    apiKey: "your-openai-api-key",
    maxOutputTokens: 256,
    systemPrompt: """
    You are a movie critic. Answer the following movie-related questions concisely and accurately.
    """
)
manager.registerService(moviesService, withRoutings: [.domain(["movies"])])
// Configure the Books Service (Ollama)
let booksService = OllamaService(
    name: "BooksService",
    baseURL: "http://localhost:11434",  // default local Ollama endpoint
    maxOutputTokens: 256,
    systemPrompt: """
    You are a literary expert. Answer the following books-related questions concisely and accurately.
    """
)
manager.registerService(booksService, withRoutings: [.domain(["books"])])
// Configure the Fallback Service (OpenAI)
let fallbackService = OpenAIService(
    name: "FallbackService",
    apiKey: "your-openai-api-key",
    maxOutputTokens: 512,
    systemPrompt: """
    You are a helpful assistant. Answer any general questions accurately and concisely.
    """
)
manager.registerFallbackService(fallbackService)
// Example questions
let questions = [
    "Who won the Super Bowl in 2022?",  // Sports domain
    "What won Best Picture in 2021?",   // Movies domain
    "Who wrote The Great Gatsby?",      // Books domain
    "What is the capital of France?"    // General (fallback)
]
// Process each question
for question in questions {
    print("\nProcessing question: \(question)")
    let request = LLMRequest(messages: [LLMMessage(role: .user, content: question)])
    if let response = await manager.routeRequest(request) {
        print("Response from \(response.vendor ?? "Unknown"): \(response.text)")
    } else {
        print("No response received.")
    }
}
AuroraCore includes tests for several language model services. The Ollama tests always run because they do not require an API key. To test the OpenAI or Anthropic services, you must supply your API keys manually.
Some tests and example files use the OpenAI or Anthropic services and need API keys to run correctly. To use these services, add the following keys to the AuroraToolkit-Package and AuroraExamples schemes. Make sure these schemes are not shared, and take extra care not to commit your API keys to the repository.
ANTHROPIC_API_KEY, set to a valid test API key.
OPENAI_API_KEY, set to a valid test API key.
Ollama needs no key; its tests assume a local instance running at http://localhost:11434.
With this setup you can run tests against multiple LLMs while making sure your sensitive keys are never shared by accident.
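One simple way to honor this setup in a test is to read the key from the scheme's environment variables and skip when it is missing. The sketch below is illustrative only and not part of AuroraCore's test suite; the AnthropicService initializer arguments mirror the domain-routing example above.
import Foundation
import XCTest
import AuroraLLM

final class APIKeySmokeTests: XCTestCase {  // hypothetical test class, for illustration
    func testAnthropicServiceCanBeConfigured() throws {
        // The key is injected through the unshared scheme's environment variables.
        guard let apiKey = ProcessInfo.processInfo.environment["ANTHROPIC_API_KEY"],
              !apiKey.isEmpty else {
            throw XCTSkip("ANTHROPIC_API_KEY not set; skipping Anthropic-dependent test.")
        }

        // Configure a service with the injected key; real request tests would go here.
        _ = AnthropicService(
            name: "TestSportsService",
            apiKey: apiKey,
            maxOutputTokens: 64,
            systemPrompt: "You are a test assistant."
        )
    }
}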
Contributions are welcome! Feel free to submit a pull request or open an issue. For more details on how to contribute, see the CONTRIBUTING.md file.
We expect all participants to follow our code of conduct so that everyone can enjoy a welcoming and inclusive environment.
AuroraCore is released under the Apache 2.0 license.
For questions or feedback, contact us at aurora.toolkit@gmail.com.