OpenAI

This repository contains a Swift community-maintained implementation of the OpenAI public API.

What is OpenAI

OpenAI is a non-profit artificial intelligence research organization founded in San Francisco, California in 2015. It was created with the purpose of advancing digital intelligence in ways that benefit humanity as a whole and promote societal progress. The organization strives to develop AI (Artificial Intelligence) programs and systems that can think, act, and adapt quickly on their own. OpenAI's mission is to ensure the safe and responsible use of AI for civic good, economic growth, and other public benefits; this includes cutting-edge research into important topics such as general AI safety, natural language processing, applied reinforcement learning methods, machine vision algorithms, and more.

The OpenAI API can be applied to virtually any task that involves understanding or generating natural language or code. We offer a spectrum of models with different levels of power suitable for different tasks, as well as the ability to fine-tune your own custom models. These models can be used for everything from content generation to semantic search and classification.

Installation

OpenAI is available via Swift Package Manager. The Swift Package Manager is a tool for automating the distribution of Swift code and is integrated into the swift compiler. Once you have your Swift package set up, adding OpenAI as a dependency is as easy as adding it to the dependencies value of your Package.swift.

dependencies: [
    .package(url: "https://github.com/MacPaw/OpenAI.git", branch: "main")
]
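If the library is consumed from a target, the product also needs to appear in that target's dependencies. A minimal sketch, where the target name "MyApp" is purely illustrative and the product name is assumed to be OpenAI:

targets: [
    .target(
        name: "MyApp", // hypothetical target name
        dependencies: [.product(name: "OpenAI", package: "OpenAI")]
    )
]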

Usage

Initialization

To initialize the API instance you need to obtain an API token from your OpenAI organization.

Remember that your API key is a secret! Do not share it with others or expose it in any client-side code (browsers, apps). Production requests must be routed through your own backend server, where your API key can be securely loaded from an environment variable or a key management service.
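On such a backend, one way to avoid hardcoding the key is to read it from the process environment. A minimal sketch, assuming the variable is named OPENAI_API_KEY:

import Foundation

// Read the key from the OPENAI_API_KEY environment variable (the variable name is an assumption)
// instead of embedding it in source code.
guard let apiToken = ProcessInfo.processInfo.environment["OPENAI_API_KEY"] else {
    fatalError("OPENAI_API_KEY is not set")
}
let openAI = OpenAI(apiToken: apiToken)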

Once you have a token, you can initialize the OpenAI class, which is the entry point to the API.

⚠️ OpenAI strongly recommends that developers of client-side applications proxy requests through a separate backend service to keep their API key safe. API keys can access and manipulate customer billing, usage, and organizational data, so exposing them is a significant risk.

let openAI = OpenAI(apiToken: "YOUR_TOKEN_HERE")

Optionally, you can initialize OpenAI with a token, an organization identifier, and a timeoutInterval.

let configuration = OpenAI.Configuration(token: "YOUR_TOKEN_HERE", organizationIdentifier: "YOUR_ORGANIZATION_ID_HERE", timeoutInterval: 60.0)
let openAI = OpenAI(configuration: configuration)

Once you have the token and the instance is initialized, you are ready to make requests.

Chats

Using the OpenAI Chat API, you can build your own applications with gpt-3.5-turbo to do things like drafting emails, writing code, answering questions about a set of documents, creating conversational agents, and more.

Request

struct ChatQuery: Codable {
    /// ID of the model to use.
    public let model: Model
    /// An object specifying the format that the model must output.
    public let responseFormat: ResponseFormat?
    /// The messages to generate chat completions for
    public let messages: [Message]
    /// A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for.
    public let tools: [Tool]?
    /// Controls how the model responds to tool calls. "none" means the model does not call a function, and responds to the end-user. "auto" means the model can pick between responding to the end-user or calling a function. Specifying a particular function via `{"name": "my_function"}` forces the model to call that function. "none" is the default when no functions are present. "auto" is the default if functions are present.
    public let toolChoice: ToolChoice?
    /// What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.
    public let temperature: Double?
    /// An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
    public let topP: Double?
    /// How many chat completion choices to generate for each input message.
    public let n: Int?
    /// Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
    public let stop: [String]?
    /// The maximum number of tokens to generate in the completion.
    public let maxTokens: Int?
    /// Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
    public let presencePenalty: Double?
    /// Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
    public let frequencyPenalty: Double?
    /// Modify the likelihood of specified tokens appearing in the completion.
    public let logitBias: [String:Int]?
    /// A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
    public let user: String?
}

Response

struct ChatResult: Codable, Equatable {
    public struct Choice: Codable, Equatable {
        public let index: Int
        public let message: Chat
        public let finishReason: String
    }
    
    public struct Usage: Codable, Equatable {
        public let promptTokens: Int
        public let completionTokens: Int
        public let totalTokens: Int
    }
    
    public let id: String
    public let object: String
    public let created: TimeInterval
    public let model: Model
    public let choices: [Choice]
    public let usage: Usage
}

Example

let query = ChatQuery(model: .gpt3_5Turbo, messages: [.init(role: .user, content: "who are you")])
let result = try await openAI.chats(query: query)
(lldb) po result
▿ ChatResult
  - id : "chatcmpl-6pwjgxGV2iPP4QGdyOLXnTY0LE3F8"
  - object : "chat.completion"
  - created : 1677838528.0
  - model : "gpt-3.5-turbo-0301"
  ▿ choices : 1 element
    ▿ 0 : Choice
      - index : 0
      ▿ message : Chat
        - role : "assistant"
        - content : "\n\nI\'m an AI language model developed by OpenAI, created to provide assistance and support for various tasks such as answering questions, generating text, and providing recommendations. Nice to meet you!"
      - finish_reason : "stop"
  ▿ usage : Usage
    - prompt_tokens : 10
    - completion_tokens : 39
    - total_tokens : 49

Chats Streaming

Chats streaming is available by using the chatsStream function. Tokens will be sent one by one.

Closures

openAI.chatsStream(query: query) { partialResult in
    switch partialResult {
    case .success(let result):
        print(result.choices)
    case .failure(let error):
        // Handle chunk error here
        print(error)
    }
} completion: { error in
    //Handle streaming error here
}

Combine

openAI
    .chatsStream(query: query)
    .sink { completion in
        //Handle completion result here
    } receiveValue: { result in
        //Handle chunk here
    }.store(in: &cancellables)

Structured Concurrency

for try await result in openAI.chatsStream(query: query) {
   //Handle result here
}

Function Calls

let openAI = OpenAI(apiToken: "...")
// Declare functions which GPT-3 might decide to call.
let functions = [
  FunctionDeclaration(
      name: "get_current_weather",
      description: "Get the current weather in a given location",
      parameters:
        JSONSchema(
          type: .object,
          properties: [
            "location": .init(type: .string, description: "The city and state, e.g. San Francisco, CA"),
            "unit": .init(type: .string, enumValues: ["celsius", "fahrenheit"])
          ],
          required: ["location"]
        )
  )
]
let query = ChatQuery(
  model: "gpt-3.5-turbo-0613",  // 0613 is the earliest version with function calls support.
  messages: [
      Chat(role: .user, content: "What's the weather like in Boston?")
  ],
  tools: functions.map { Tool.function($0) }
)
let result = try await openAI.chats(query: query)

The result will be (serialized as JSON here for readability):

{
  "id": "chatcmpl-1234",
  "object": "chat.completion",
  "created": 1686000000,
  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "tool_calls": [
          {
            "id": "call-0",
            "type": "function",
            "function": {
              "name": "get_current_weather",
              "arguments": "{\n  \"location\": \"Boston, MA\"\n}"
            }
          }
        ]
      },
      "finish_reason": "function_call"
    }
  ],
  "usage": { "total_tokens": 100, "completion_tokens": 18, "prompt_tokens": 82 }
}
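The function's arguments arrive as a JSON-encoded string. Below is a minimal sketch of decoding that string into a Swift value; the WeatherArguments type is hypothetical, mirroring the schema declared above:

import Foundation

// Hypothetical container for the arguments of the get_current_weather call.
struct WeatherArguments: Decodable {
    let location: String
    let unit: String?
}

let argumentsJSON = "{\n  \"location\": \"Boston, MA\"\n}"
let arguments = try JSONDecoder().decode(WeatherArguments.self, from: Data(argumentsJSON.utf8))
print(arguments.location) // "Boston, MA"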

Review Chat Documentation for more info.

Structured Outputs

JSON is one of the most widely used formats in the world for applications to exchange data.

Structured Outputs is a feature that ensures the model will always generate responses that adhere to your supplied JSON Schema, so you don't need to worry about the model omitting a required key or hallucinating an invalid enum value.

Example

struct MovieInfo: StructuredOutput {
    
    let title: String
    let director: String
    let release: Date
    let genres: [MovieGenre]
    let cast: [String]
    
    static let example: Self = { 
        .init(
            title: "Earth",
            director: "Alexander Dovzhenko",
            release: Calendar.current.date(from: DateComponents(year: 1930, month: 4, day: 1))!,
            genres: [.drama],
            cast: ["Stepan Shkurat", "Semyon Svashenko", "Yuliya Solntseva"]
        )
    }()
}

enum MovieGenre: String, Codable, StructuredOutputEnum {
    case action, drama, comedy, scifi
    
    var caseNames: [String] { Self.allCases.map { $0.rawValue } }
}

let query = ChatQuery(
    messages: [.system(.init(content: "Best Picture winner at the 2011 Oscars"))],
    model: .gpt4_o,
    responseFormat: .jsonSchema(name: "movie-info", type: MovieInfo.self)
)
let result = try await openAI.chats(query: query)

Review Structured Output Documentation for more info.

Images

Given a prompt and/or an input image, the model will generate a new image.

As Artificial Intelligence continues to develop, so too does the intriguing concept of Dall-E. Developed by OpenAI, an artificial intelligence research lab, Dall-E has been classified as an AI system that can generate images based on descriptions provided by humans. With potential applications spanning from animation and illustration to design and engineering, not to mention the endless possibilities in between, it's easy to see why there is such excitement over this new technology.

Create Image

Request

struct ImagesQuery: Codable {
    /// A text description of the desired image(s). The maximum length is 1000 characters.
    public let prompt: String
    /// The number of images to generate. Must be between 1 and 10.
    public let n: Int?
    /// The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.
    public let size: String?
}

Response

struct ImagesResult: Codable, Equatable {
    public struct URLResult: Codable, Equatable {
        public let url: String
    }
    public let created: TimeInterval
    public let data: [URLResult]
}

Example

let query = ImagesQuery(prompt: "White cat with heterochromia sitting on the kitchen table", n: 1, size: "1024x1024")
openAI.images(query: query) { result in
  //Handle result here
}
//or
let result = try await openAI.images(query: query)
(lldb) po result
▿ ImagesResult
  - created : 1671453505.0
  ▿ data : 1 element
    ▿ 0 : URLResult
      - url : "https://oaidalleapiprodscus.blob.core.windows.net/private/org-CWjU5cDIzgCcVjq10pp5yX5Q/user-GoBXgChvLBqLHdBiMJBUbPqF/img-WZVUK2dOD4HKbKwW1NeMJHBd.png?st=2022-12-19T11%3A38%3A25Z&se=2022-12-19T13%3A38%3A25Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image/png&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2022-12-19T09%3A35%3A16Z&ske=2022-12-20T09%3A35%3A16Z&sks=b&skv=2021-08-06&sig=mh52rmtbQ8CXArv5bMaU6lhgZHFBZz/ePr4y%2BJwLKOc%3D"
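The result contains a URL string for each generated image. A minimal sketch of downloading the first image's data, assuming an async context:

// Hypothetical follow-up: fetch the generated image bytes from the returned URL.
if let urlString = result.data.first?.url, let url = URL(string: urlString) {
    let (imageData, _) = try await URLSession.shared.data(from: url)
    // imageData now holds the PNG returned by the API.
}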

The generated image:

Generated Image

Create Image Edit

Creates an edited or extended image given an original image and a prompt.

Request

public struct ImageEditsQuery: Codable {
    /// The image to edit. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask.
    public let image: Data
    public let fileName: String
    /// An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where image should be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as image.
    public let mask: Data?
    public let maskFileName: String?
    /// A text description of the desired image(s). The maximum length is 1000 characters.
    public let prompt: String
    /// The number of images to generate. Must be between 1 and 10.
    public let n: Int?
    /// The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.
    public let size: String?
}

Response

Uses the ImagesResult response similarly to ImagesQuery.

Example

guard let data = image.pngData() else { return }
let query = ImageEditsQuery(image: data, fileName: "whitecat.png", prompt: "White cat with heterochromia sitting on the kitchen table with a bowl of food", n: 1, size: "1024x1024")
openAI.imageEdits(query: query) { result in
  //Handle result here
}
//or
let result = try await openAI.imageEdits(query: query)

Create Image Variation

Creates a variation of a given image.

Request

public struct ImageVariationsQuery: Codable {
    /// The image to use as the basis for the variation(s). Must be a valid PNG file, less than 4MB, and square.
    public let image: Data
    public let fileName: String
    /// The number of images to generate. Must be between 1 and 10.
    public let n: Int?
    /// The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.
    public let size: String?
}

Response

Uses the ImagesResult response similarly to ImagesQuery.

Example

guard let data = image.pngData() else { return }
let query = ImageVariationsQuery(image: data, fileName: "whitecat.png", n: 1, size: "1024x1024")
openAI.imageVariations(query: query) { result in
  //Handle result here
}
//or
let result = try await openAI.imageVariations(query: query)

Review Image Documentation for more info.

Audio

The speech-to-text API provides two endpoints, transcriptions and translations, based on our state-of-the-art open source large-v2 Whisper model. They can be used to:

Transcribe audio into whatever language the audio is in.
Translate and transcribe the audio into English.

File uploads are currently limited to 25 MB, and the following input file types are supported: mp3, mp4, mpeg, mpga, m4a, wav, and webm.

Audio Create Speech

This function sends an AudioSpeechQuery to the OpenAI API to create audio speech from text using a specific voice and format.

Learn more about voices.
Learn more about models.

Request

public struct AudioSpeechQuery: Codable, Equatable {
    //...
    public let model: Model // tts-1 or tts-1-hd  
    public let input: String
    public let voice: AudioSpeechVoice
    public let responseFormat: AudioSpeechResponseFormat
    public let speed: String? // Initializes with Double?
    //...
}

Response

/// Audio data for one of the following formats :`mp3`, `opus`, `aac`, `flac`, `pcm`
public let audioData: Data?

Example

let query = AudioSpeechQuery(model: .tts_1, input: "Hello, world!", voice: .alloy, responseFormat: .mp3, speed: 1.0)

openAI.audioCreateSpeech(query: query) { result in
    // Handle response here
}
//or
let result = try await openAI.audioCreateSpeech(query: query)
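The returned audioData can then be written to disk or handed to an audio player. A minimal sketch, assuming the mp3 response format and a writable temporary directory:

// Hypothetical follow-up: persist the synthesized speech to a temporary file.
if let audioData = result.audioData {
    let fileURL = FileManager.default.temporaryDirectory.appendingPathComponent("speech.mp3")
    try audioData.write(to: fileURL)
}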

OpenAI Create Speech – Documentation

Audio Transcriptions

Transcribes audio into the input language.

Request

public struct AudioTranscriptionQuery: Codable, Equatable {
    
    public let file: Data
    public let fileName: String
    public let model: Model
    
    public let prompt: String?
    public let temperature: Double?
    public let language: String?
}

Response

public struct AudioTranscriptionResult: Codable, Equatable {
    
    public let text: String
}

Example

let data = try Data(contentsOf: ...)
let query = AudioTranscriptionQuery(file: data, fileName: "audio.m4a", model: .whisper_1)        

openAI.audioTranscriptions(query: query) { result in
    //Handle result here
}
//or
let result = try await openAI.audioTranscriptions(query: query)

Audio Translations

Translates audio into English.

Request

public struct AudioTranslationQuery: Codable, Equatable {
    
    public let file: Data
    public let fileName: String
    public let model: Model
    
    public let prompt: String?
    public let temperature: Double?
}    

Response

public struct AudioTranslationResult: Codable, Equatable {
    
    public let text: String
}

Example

let data = try Data(contentsOf: ...)
let query = AudioTranslationQuery(file: data, fileName: "audio.m4a", model: .whisper_1)  

openAI.audioTranslations(query: query) { result in
    //Handle result here
}
//or
let result = try await openAI.audioTranslations(query: query)

Review Audio Documentation for more info.

Embeddings

Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.

Request

struct EmbeddingsQuery: Codable {
    /// ID of the model to use.
    public let model: Model
    /// Input text to get embeddings for
    public let input: String
}

Response

struct EmbeddingsResult: Codable, Equatable {

    public struct Embedding: Codable, Equatable {

        public let object: String
        public let embedding: [Double]
        public let index: Int
    }
    public let data: [Embedding]
    public let usage: Usage
}

Example

let query = EmbeddingsQuery(model: .textSearchBabbageDoc, input: "The food was delicious and the waiter...")
openAI.embeddings(query: query) { result in
  //Handle response here
}
//or
let result = try await openAI.embeddings(query: query)
(lldb) po result
▿ EmbeddingsResult
  ▿ data : 1 element
    ▿ 0 : Embedding
      - object : "embedding"
      ▿ embedding : 2048 elements
        - 0 : 0.0010535449
        - 1 : 0.024234328
        - 2 : -0.0084999
        - 3 : 0.008647452
    .......
        - 2044 : 0.017536353
        - 2045 : -0.005897616
        - 2046 : -0.026559394
        - 2047 : -0.016633155
      - index : 0

(lldb)
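Embeddings are typically compared with cosine similarity (see the Vector utility further below). A minimal sketch of ranking two documents against a query embedding, where docAEmbedding and docBEmbedding are hypothetical vectors fetched the same way:

// Hypothetical comparison of a query embedding against two document embeddings.
let queryVector = result.data[0].embedding
let documentVectors: [[Double]] = [docAEmbedding, docBEmbedding]
let scores = documentVectors.map { Vector.cosineSimilarity(a: queryVector, b: $0) }
// A higher score means the document is semantically closer to the query.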

Review Embeddings Documentation for more info.

Models

Models are represented as a typealias: typealias Model = String.

public extension Model {
    static let gpt4_turbo_preview = "gpt-4-turbo-preview"
    static let gpt4_vision_preview = "gpt-4-vision-preview"
    static let gpt4_0125_preview = "gpt-4-0125-preview"
    static let gpt4_1106_preview = "gpt-4-1106-preview"
    static let gpt4 = "gpt-4"
    static let gpt4_0613 = "gpt-4-0613"
    static let gpt4_0314 = "gpt-4-0314"
    static let gpt4_32k = "gpt-4-32k"
    static let gpt4_32k_0613 = "gpt-4-32k-0613"
    static let gpt4_32k_0314 = "gpt-4-32k-0314"
    
    static let gpt3_5Turbo = "gpt-3.5-turbo"
    static let gpt3_5Turbo_0125 = "gpt-3.5-turbo-0125"
    static let gpt3_5Turbo_1106 = "gpt-3.5-turbo-1106"
    static let gpt3_5Turbo_0613 = "gpt-3.5-turbo-0613"
    static let gpt3_5Turbo_0301 = "gpt-3.5-turbo-0301"
    static let gpt3_5Turbo_16k = "gpt-3.5-turbo-16k"
    static let gpt3_5Turbo_16k_0613 = "gpt-3.5-turbo-16k-0613"
    
    static let textDavinci_003 = "text-davinci-003"
    static let textDavinci_002 = "text-davinci-002"
    static let textCurie = "text-curie-001"
    static let textBabbage = "text-babbage-001"
    static let textAda = "text-ada-001"
    
    static let textDavinci_001 = "text-davinci-001"
    static let codeDavinciEdit_001 = "code-davinci-edit-001"
    
    static let tts_1 = "tts-1"
    static let tts_1_hd = "tts-1-hd"
    
    static let whisper_1 = "whisper-1"

    static let dall_e_2 = "dall-e-2"
    static let dall_e_3 = "dall-e-3"
    
    static let davinci = "davinci"
    static let curie = "curie"
    static let babbage = "babbage"
    static let ada = "ada"
    
    static let textEmbeddingAda = "text-embedding-ada-002"
    static let textSearchAda = "text-search-ada-doc-001"
    static let textSearchBabbageDoc = "text-search-babbage-doc-001"
    static let textSearchBabbageQuery001 = "text-search-babbage-query-001"
    static let textEmbedding3 = "text-embedding-3-small"
    static let textEmbedding3Large = "text-embedding-3-large"
    
    static let textModerationStable = "text-moderation-stable"
    static let textModerationLatest = "text-moderation-latest"
    static let moderation = "text-moderation-007"
}

GPT-4 models are supported.

For example, to use the gpt-4-turbo-preview model, pass .gpt4_turbo_preview as the parameter to the ChatQuery initializer.

let query = ChatQuery(model: .gpt4_turbo_preview, messages: [
    .init(role: .system, content: "You are Librarian-GPT. You know everything about the books."),
    .init(role: .user, content: "Who wrote Harry Potter?")
])
let result = try await openAI.chats(query: query)
XCTAssertFalse(result.choices.isEmpty)

You can also pass a custom string if you need to use a model that is not represented above.
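A minimal sketch, where the model identifier string is purely illustrative:

// Hypothetical example: any model identifier string can be passed directly.
let query = ChatQuery(model: "my-custom-model", messages: [
    .init(role: .user, content: "Hello!")
])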

List Models

Lists the currently available models.

Response

public struct ModelsResult: Codable, Equatable {
    
    public let data: [ModelResult]
    public let object: String
}

Example

openAI.models() { result in
  //Handle result here
}
//or
let result = try await openAI.models()

Retrieve Model

Retrieves a model instance, providing ownership information.

Request

public struct ModelQuery: Codable, Equatable {
    
    public let model: Model
}    

Response

public struct ModelResult: Codable, Equatable {

    public let id: Model
    public let object: String
    public let ownedBy: String
}

Example

let query = ModelQuery(model: .gpt4)
openAI.model(query: query) { result in
  //Handle result here
}
//or
let result = try await openAI.model(query: query)

Review Models Documentation for more info.

Moderations

Given an input text, outputs whether the model classifies it as violating OpenAI's content policy.

Request

public struct ModerationsQuery: Codable {
    
    public let input: String
    public let model: Model?
}    

Response

public struct ModerationsResult: Codable, Equatable {

    public let id: String
    public let model: Model
    public let results: [CategoryResult]
}

Example

let query = ModerationsQuery(input: "I want to kill them.")
openAI.moderations(query: query) { result in
  //Handle result here
}
//or
let result = try await openAI.moderations(query: query)

Review Moderations Documentation for more info.

Utilities

The component comes with several handy utility functions for working with vectors.

public struct Vector {

    /// Returns the similarity between two vectors
    ///
    /// - Parameters:
    ///     - a: The first vector
    ///     - b: The second vector
    public static func cosineSimilarity(a: [Double], b: [Double]) -> Double {
        return dot(a, b) / (mag(a) * mag(b))
    }

    /// Returns the difference between two vectors. Cosine distance is defined as `1 - cosineSimilarity(a, b)`
    ///
    /// - Parameters:
    ///     - a: The first vector
    ///     - b: The second vector
    public func cosineDifference(a: [Double], b: [Double]) -> Double {
        return 1 - Self.cosineSimilarity(a: a, b: b)
    }
}

Example

let vector1 = [0.213123, 0.3214124, 0.421412, 0.3214521251, 0.412412, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.4214214, 0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3214521251]
let vector2 = [0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.511515, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3213213]
let similarity = Vector.cosineSimilarity(a: vector1, b: vector2)
print(similarity) //0.9510201910206734

In data analysis, cosine similarity is a measure of similarity between two sequences of numbers:

cosine_similarity(A, B) = (A · B) / (‖A‖ · ‖B‖)

Read more about Cosine Similarity here.

Combine Extensions

The library contains built-in Combine extensions.

func images(query: ImagesQuery) -> AnyPublisher<ImagesResult, Error>
func embeddings(query: EmbeddingsQuery) -> AnyPublisher<EmbeddingsResult, Error>
func chats(query: ChatQuery) -> AnyPublisher<ChatResult, Error>
func model(query: ModelQuery) -> AnyPublisher<ModelResult, Error>
func models() -> AnyPublisher<ModelsResult, Error>
func moderations(query: ModerationsQuery) -> AnyPublisher<ModerationsResult, Error>
func audioTranscriptions(query: AudioTranscriptionQuery) -> AnyPublisher<AudioTranscriptionResult, Error>
func audioTranslations(query: AudioTranslationQuery) -> AnyPublisher<AudioTranslationResult, Error>
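Each of these returns an AnyPublisher, so the usual Combine operators apply. A minimal usage sketch, assuming a cancellables set like the one used in the streaming example above:

openAI
    .models()
    .sink(receiveCompletion: { completion in
        // Handle completion or error here
    }, receiveValue: { modelsResult in
        print(modelsResult.data.map(\.id))
    })
    .store(in: &cancellables)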

Assistants

Review Assistants Documentation for more info.

Create Assistant

Example: Create Assistant

let query = AssistantsQuery(model: Model.gpt4_o_mini, name: name, description: description, instructions: instructions, tools: tools, toolResources: toolResources)
openAI.assistantCreate(query: query) { result in
   //Handle response here
}

Modify Assistant

Example: Modify Assistant

let query = AssistantsQuery(model: Model.gpt4_o_mini, name: name, description: description, instructions: instructions, tools: tools, toolResources: toolResources)
openAI.assistantModify(query: query, assistantId: "asst_1234") { result in
    //Handle response here
}

List Assistants

Example: List Assistants

openAI.assistants() { result in
   //Handle response here
}

Threads

Review Threads Documentation for more info.

Create Thread

Example: Create Thread

let threadsQuery = ThreadsQuery(messages: [Chat(role: message.role, content: message.content)])
openAI.threads(query: threadsQuery) { result in
  //Handle response here
}

Create and Run a Thread

Example: Create and Run a Thread

let threadsQuery = ThreadsQuery(messages: [Chat(role: message.role, content: message.content)])
let threadRunQuery = ThreadRunQuery(assistantId: "asst_1234", thread: threadsQuery)
openAI.threadRun(query: threadRunQuery) { result in
  //Handle response here
}

Get Threads Messages

Review Messages Documentation for more info.

Example: Get Threads Messages

openAI.threadsMessages(threadId: currentThreadId) { result in
  //Handle response here
}

Add Message to Thread

Example: Add Message to Thread

let query = MessageQuery(role: message.role.rawValue, content: message.content)
openAI.threadsAddMessage(threadId: currentThreadId, query: query) { result in
  //Handle response here
}

Runs

Review Runs Documentation for more info.

Create Run

Example: Create Run

let runsQuery = RunsQuery(assistantId:  currentAssistantId)
openAI.runs(threadId: threadsResult.id, query: runsQuery) { result in
  //Handle response here
}

Retrieve Run

Example: Retrieve Run

openAI.runRetrieve(threadId: currentThreadId, runId: currentRunId) { result in
  //Handle response here
}

Retrieve Run Steps

Example: Retrieve Run Steps

openAI.runRetrieveSteps(threadId: currentThreadId, runId: currentRunId) { result in
  //Handle response here
}

Submit Tool Outputs for Run

Example: Submit Tool Outputs for Run

let output = RunToolOutputsQuery.ToolOutput(toolCallId: "call123", output: "Success")
let query = RunToolOutputsQuery(toolOutputs: [output])
openAI.runSubmitToolOutputs(threadId: currentThreadId, runId: currentRunId, query: query) { result in
  //Handle response here
}

Files

Review Files Documentation for more info.

Upload File

Example: Upload File

let query = FilesQuery(purpose: "assistants", file: fileData, fileName: url.lastPathComponent, contentType: "application/pdf")
openAI.files(query: query) { result in
  //Handle response here
}

Cancelling requests

Closure based API

When you call any closure-based API method, it returns a discardable CancellableRequest. Hold a reference to it to be able to cancel the request later.

let cancellableRequest = object.chats(query: query, completion: { _ in })
cancellableRequest.cancelRequest()

Swift Concurrency

For Swift Concurrency calls, you can simply cancel the calling task, and the corresponding URLSessionDataTask will be cancelled automatically.

let task = Task {
    do {
        let chatResult = try await openAIClient.chats(query: .init(messages: [], model: "asd"))
    } catch {
        // Handle cancellation or error
    }
}
            
task.cancel()

Combine

In Combine, use the default cancellation mechanism: just discard the reference to the subscription, or call cancel() on it.

let subscription = openAIClient
    .images(query: query)
    .sink(receiveCompletion: { completion in }, receiveValue: { imagesResult in })
    
subscription.cancel()

Example Project

You can find an example iOS application in the Demo folder.


Contribution Guidelines

Make your Pull Requests clear and obvious to anyone viewing them.
Set main as your target branch.

Use Conventional Commits principles when naming PRs and branches:

PR naming examples:
Feat: Add Threads API handling
Bug: Fix message result duplication

Branch naming examples:
feat/add-threads-API-handling
bug/fix-message-result-duplication

Write a clear description for your pull request.

If needed and possible, we would appreciate it if you include tests for your code. ❤️

Links

许可证

MIT License

Copyright (c) 2023 MacPaw Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.