LLMChatOpenAI

Interact with OpenAI and OpenAI-compatible chat completion APIs in a simple and elegant way.

Overview

LLMChatOpenAI is a simple yet powerful Swift package that elegantly encapsulates the complexity of interacting with OpenAI and OpenAI-compatible chat completion APIs. It provides a complete set of Swift-idiomatic methods for sending chat completion requests and streaming responses.

Installation

You can add LLMChatOpenAI as a dependency to your project using Swift Package Manager by adding it to the dependencies value of your Package.swift file.

dependencies: [
    .package(url: "https://github.com/kevinhermawan/swift-llm-chat-openai.git", .upToNextMajor(from: "1.0.0"))
],
targets: [
    .target(
        /// ...
        dependencies: [.product(name: "LLMChatOpenAI", package: "swift-llm-chat-openai")])
]

Alternatively, in Xcode:

  1. Open your project in Xcode.
  2. Click File -> Swift Packages -> Add Package Dependency...
  3. Enter the repository URL: https://github.com/kevinhermawan/swift-llm-chat-openai.git
  4. Choose the version you want to add. You probably want to add the latest version.
  5. Click Add Package

Documentation

You can find the documentation here: https://kevinhermawan.github.io/swift-llm-chat-openai/documentation/llmchatopenai

Usage

Initialization

import LLMChatOpenAI

// Basic initialization
let chat = LLMChatOpenAI(apiKey: "<YOUR_OPENAI_API_KEY>")

// Initialize with custom endpoint and headers
let chat = LLMChatOpenAI(
    apiKey: "<YOUR_API_KEY>",
    endpoint: URL(string: "https://custom-api.example.com/v1/chat/completions")!,
    headers: ["Custom-Header": "Value"]
)

Sending Chat Completion Requests

let messages = [
    ChatMessage(role: .system, content: "You are a helpful assistant."),
    ChatMessage(role: .user, content: "What is the capital of Indonesia?")
]

let task = Task {
    do {
        let completion = try await chat.send(model: "gpt-4o", messages: messages)

        print(completion.choices.first?.message.content ?? "No response")
    } catch {
        print(String(describing: error))
    }
}

// To cancel completion
task.cancel()

Streaming Chat Completions

let messages = [
    ChatMessage(role: .system, content: "You are a helpful assistant."),
    ChatMessage(role: .user, content: "What is the capital of Indonesia?")
]

let task = Task {
    do {
        for try await chunk in chat.stream(model: "gpt-4o", messages: messages) {
            if let content = chunk.choices.first?.delta.content {
                print(content, terminator: "")
            }
        }
    } catch {
        print(String(describing: error))
    }
}

// To cancel completion
task.cancel()
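
You can also accumulate the streamed chunks into a single string and use the full response once streaming finishes. A minimal sketch, using only the stream API shown above:

Task {
    do {
        // Collect the streamed deltas into one string
        var fullResponse = ""

        for try await chunk in chat.stream(model: "gpt-4o", messages: messages) {
            if let content = chunk.choices.first?.delta.content {
                fullResponse += content
            }
        }

        print(fullResponse)
    } catch {
        print(String(describing: error))
    }
}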

Using Fallback Models (OpenRouter Only)

Task {
    do {
        let completion = try await chat.send(models: ["openai/gpt-4o", "mistralai/mixtral-8x7b-instruct"], messages: messages)

        print(completion.choices.first?.message.content ?? "No response")
    } catch {
        print(String(describing: error))
    }
}

Task {
    do {
        for try await chunk in chat.stream(models: ["openai/gpt-4o", "mistralai/mixtral-8x7b-instruct"], messages: messages) {
            if let content = chunk.choices.first?.delta.content {
                print(content, terminator: "")
            }
        }
    } catch {
        print(String(describing: error))
    }
}

Note: The fallback model feature is only supported when using OpenRouter. If you use the fallback model methods (send(models:) and stream(models:)) with other providers, only the first model in the array will be used and the rest will be ignored. To learn more about fallback models, check out the OpenRouter documentation.
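
To use fallback models, point the client at OpenRouter when initializing it. A minimal sketch, assuming OpenRouter's OpenAI-compatible chat completions endpoint at https://openrouter.ai/api/v1/chat/completions (verify the URL and any recommended headers against the OpenRouter documentation):

// The endpoint below is an assumption; check the OpenRouter documentation for the current URL
let chat = LLMChatOpenAI(
    apiKey: "<YOUR_OPENROUTER_API_KEY>",
    endpoint: URL(string: "https://openrouter.ai/api/v1/chat/completions")!,
    headers: ["HTTP-Referer": "<YOUR_APP_URL>"] // Optional attribution header used by OpenRouter
)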

Advanced Usage

Vision

let messages = [
    ChatMessage(
        role: .user,
        content: [
            .image("https://images.pexels.com/photos/45201/kitty-cat-kitten-pet-45201.jpeg", detail: .high),
            .text("What is in this image?")
        ]
    )
]

Task {
    do {
        let completion = try await chat.send(model: "gpt-4o", messages: messages)

        print(completion.choices.first?.message.content ?? "")
    } catch {
        print(String(describing: error))
    }
}

To learn more about vision, check out the OpenAI documentation.

Function Calling

let messages = [
    ChatMessage(role: .user, content: "Recommend a book similar to '1984'")
]

let recommendBookTool = ChatOptions.Tool(
    type: "function",
    function: .init(
        name: "recommend_book",
        description: "Recommend a book based on a given book and genre",
        parameters: .object(
            properties: [
                "reference_book": .string(description: "The name of a book the user likes"),
                "genre": .enum(
                    description: "The preferred genre for the book recommendation",
                    values: [.string("fiction"), .string("non-fiction")]
                )
            ],
            required: ["reference_book", "genre"],
            additionalProperties: .boolean(false)
        ),
        strict: true
    )
)

let options = ChatOptions(tools: [recommendBookTool])

Task {
    do {
        let completion = try await chat.send(model: "gpt-4o", messages: messages, options: options)

        if let toolCalls = completion.choices.first?.message.toolCalls {
            print(toolCalls.first?.function.arguments ?? "")
        }
    } catch {
        print(String(describing: error))
    }
}

To learn more about function calling, check out the OpenAI documentation.

Predicted Outputs

private let code = """
/// <summary>
/// Represents a user with a first name, last name, and username.
/// </summary>
public class User
{
  /// <summary>
  /// Gets or sets the user's first name.
  /// </summary>
  public string FirstName { get; set; }

  /// <summary>
  /// Gets or sets the user's last name.
  /// </summary>
  public string LastName { get; set; }

  /// <summary>
  /// Gets or sets the user's username.
  /// </summary>
  public string Username { get; set; }
}
"""

let messages = [
   ChatMessage(role: .user, content: "Replace the Username property with an Email property. Respond only with code, and with no markdown formatting."),
   ChatMessage(role: .user, content: code)
]

let options = ChatOptions(
   prediction: .init(type: .content, content: code)
)

Task {
   do {
       let completion = try await chat.send(model: "gpt-4o", messages: messages, options: options)

       print(completion.choices.first?.message.content ?? "")
   } catch {
       print(String(describing: error))
   }
}

To learn more about predicted outputs, check out the OpenAI documentation.

Structured Outputs

let messages = [
   ChatMessage(role: .system, content: "You are a helpful assistant. Respond with a JSON object containing the book title and author."),
   ChatMessage(role: .user, content: "Can you recommend a philosophy book?")
]

let responseFormat = ChatOptions.ResponseFormat(
    type: .jsonSchema,
    jsonSchema: .init(
        name: "get_book_info",
        schema: .object(
            properties: [
                "title": .string(description: "The title of the book"),
                "author": .string(description: "The author of the book")
            ],
            required: ["title", "author"]
        )
    )
)

let options = ChatOptions(responseFormat: responseFormat)

Task {
   do {
       let completion = try await chat.send(model: "gpt-4o", messages: messages, options: options)

       print(completion.choices.first?.message.content ?? "")
   } catch {
       print(String(describing: error))
   }
}

To learn more about structured outputs, check out the OpenAI documentation.
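
Because the response content is a JSON string that follows the schema, you can decode it into a Swift type. A minimal sketch; the BookInfo struct below is illustrative and not part of the package:

// Illustrative type mirroring the JSON schema defined above
struct BookInfo: Decodable {
    let title: String
    let author: String
}

Task {
    do {
        let completion = try await chat.send(model: "gpt-4o", messages: messages, options: options)

        if let content = completion.choices.first?.message.content,
           let data = content.data(using: .utf8) {
            let book = try JSONDecoder().decode(BookInfo.self, from: data)
            print("\(book.title) by \(book.author)")
        }
    } catch {
        print(String(describing: error))
    }
}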

Error Handling

LLMChatOpenAI provides structured error handling through the LLMChatOpenAIError enum. This enum contains cases representing the different types of errors you may encounter:

let messages = [
    ChatMessage(role: .system, content: "You are a helpful assistant."),
    ChatMessage(role: .user, content: "What is the capital of Indonesia?")
]

do {
    let completion = try await chat.send(model: "gpt-4o", messages: messages)

    print(completion.choices.first?.message.content ?? "No response")
} catch let error as LLMChatOpenAIError {
    switch error {
    case .serverError(let statusCode, let message):
        // Handle server-side errors (e.g., invalid API key, rate limits)
        print("Server Error [\(statusCode)]: \(message)")
    case .networkError(let error):
        // Handle network-related errors (e.g., no internet connection)
        print("Network Error: \(error.localizedDescription)")
    case .decodingError(let error):
        // Handle errors that occur when the response cannot be decoded
        print("Decoding Error: \(error.localizedDescription)")
    case .streamError:
        // Handle errors that occur when streaming responses
        print("Stream Error")
    case .cancelled:
        // Handle requests that are cancelled
        print("Request was cancelled")
    }
} catch {
    // Handle any other errors
    print("An unexpected error occurred: \(error)")
}
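
Because server errors expose their HTTP status code, you can layer simple retry logic on top of them. A minimal sketch that retries rate-limited requests (status 429) with a fixed delay; the retry policy, and the assumption that statusCode is an integer, are illustrative and not part of the package:

Task {
    let maxAttempts = 3

    for attempt in 1...maxAttempts {
        do {
            let completion = try await chat.send(model: "gpt-4o", messages: messages)

            print(completion.choices.first?.message.content ?? "No response")
            break
        } catch let error as LLMChatOpenAIError {
            // Retry only when the server reports a rate limit and attempts remain
            if case .serverError(let statusCode, _) = error, statusCode == 429, attempt < maxAttempts {
                try? await Task.sleep(nanoseconds: 1_000_000_000) // Wait one second before retrying
                continue
            }

            print(String(describing: error))
            break
        } catch {
            print(String(describing: error))
            break
        }
    }
}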

Related Packages

Support

If you find LLMChatOpenAI useful and want to support its development, please consider making a donation. Your contribution helps maintain the project and develop new features.

Your support is greatly appreciated! ❤️

Contributing

Contributions are welcome! If you have any suggestions or improvements, please open an issue or submit a pull request.

License

This repository is available under the Apache License 2.0.