How do I integrate Amigo AI SDK into my iOS app?
The SDK provides three integration levels: drop-in SwiftUI/UIKit views, a per-frame API for custom pipelines, and static image swaps. All processing runs on-device via Core ML.
System Requirements
- iOS 16.0 or later
- Xcode 15.0 or later
- Swift 5.9 or later
- Device with A12 Bionic chip or later
Installation (Swift Package Manager)
In Xcode, go to File → Add Package Dependencies and enter the package URL:
https://github.com/AmigoAIAdmin/amigo_sdk_reference.git
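If you manage dependencies in a Package.swift manifest instead, the entry might look like the sketch below. The product name AmigoFaceSwapSDK matches the import used later in this guide; the version requirement and target name are illustrative, not confirmed.

```swift
// In Package.swift; version requirement and target name are illustrative
dependencies: [
    .package(url: "https://github.com/AmigoAIAdmin/amigo_sdk_reference.git", from: "1.0.0")
],
targets: [
    .target(
        name: "MyApp",
        dependencies: [.product(name: "AmigoFaceSwapSDK", package: "amigo_sdk_reference")]
    )
]
```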
Step 1: Download Models (Free)
ML models are downloaded from a CDN on first use, not bundled in your app binary; only emap.bin (1MB) ships with the app. Call this during onboarding or a loading screen:
```swift
try await AmigoFaceSwap.downloadModelsIfNeeded { progress in
    print("Downloading models: \(Int(progress * 100))%")
}
```
This is a free operation — no license charge. If models are already cached, it returns immediately.
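As a sketch, the download call could drive a simple onboarding progress screen like the one below. The view and its state names are illustrative, not part of the SDK; if the progress callback arrives off the main actor, hop to the main actor before updating state.

```swift
import SwiftUI
import AmigoFaceSwapSDK

// Illustrative onboarding screen; only downloadModelsIfNeeded is SDK API.
struct ModelDownloadView: View {
    @State private var progress = 0.0
    @State private var ready = false

    var body: some View {
        if ready {
            Text("Models ready")
        } else {
            ProgressView("Downloading models…", value: progress)
                .padding()
                .task {
                    do {
                        try await AmigoFaceSwap.downloadModelsIfNeeded { p in
                            progress = Double(p) // free; returns immediately when cached
                        }
                        ready = true
                    } catch {
                        // Surface the error and offer a retry in real UI
                        print("Model download failed: \(error)")
                    }
                }
        }
    }
}
```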
Step 2: Initialize ($0.01/session)
```swift
try await AmigoFaceSwap.initialize(apiKey: "amigo_sk_your_key")
```
This validates your API key, downloads models if needed, and pre-warms the Core ML engine so the first frame has no cold-start delay.
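A minimal startup wrapper with error handling might look like this; the failure modes named in the comment are assumptions, since the SDK's error types are not documented here.

```swift
// Illustrative helper around the documented initialize(apiKey:) call.
func startAmigo() async -> Bool {
    do {
        try await AmigoFaceSwap.initialize(apiKey: "amigo_sk_your_key")
        return true // engine pre-warmed; safe to present camera UI
    } catch {
        // e.g. invalid key or no network on first launch (assumed failure modes)
        print("Amigo initialization failed: \(error)")
        return false
    }
}
```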
Step 3: Enroll a Face
Extract a reusable face embedding from a photo:
```swift
let latent = try await AmigoFaceSwap.enrollFace(from: targetPhoto)
```
The returned FaceLatent is a 512-dimensional embedding. Cache it and reuse it — switch targets at runtime with zero latency. You can also construct from pre-computed server-side embeddings:
```swift
let latent = FaceLatent(embedding: myFloatArray) // [Float] with 512 elements
```
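Since the latent is just 512 floats, one way to cache it across launches is to serialize the raw array, assuming you keep the [Float] you built the FaceLatent from (or that FaceLatent exposes its embedding, which is not confirmed above). A minimal sketch using only Foundation:

```swift
import Foundation

// Sketch: persist a 512-float embedding so enrollment runs only once.
func embeddingData(_ embedding: [Float]) -> Data {
    embedding.withUnsafeBufferPointer { Data(buffer: $0) }
}

func embedding(from data: Data) -> [Float] {
    data.withUnsafeBytes { Array($0.bindMemory(to: Float.self)) }
}

// Usage (cacheURL is your own file URL):
// try embeddingData(myFloatArray).write(to: cacheURL)
// ...later...
// let restored = FaceLatent(embedding: embedding(from: try Data(contentsOf: cacheURL)))
```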
Live Camera — SwiftUI
```swift
import AmigoFaceSwapSDK
import SwiftUI

struct FaceSwapView: View {
    let latent: FaceLatent

    var body: some View {
        AmigoLiveCameraView(targetLatent: latent)
            .ignoresSafeArea()
    }
}
```
To receive each processed frame (for recording or streaming):
```swift
AmigoLiveCameraView(targetLatent: latent) { frame in
    myEncoder.encode(frame) // UIImage
}
```
Live Camera — UIKit
```swift
let vc = AmigoLiveViewController(targetLatent: latent)
present(vc, animated: true)
```
Advanced: AmigoLiveSession
For full control over the camera session:
```swift
let session = AmigoLiveSession(targetLatent: latent)
session.delegate = self
view.addSubview(session.previewView)
session.previewView.frame = view.bounds
session.start()

// Switch faces at runtime
session.targetLatent = newLatent

// Background replacement
session.backgroundImage = UIImage(named: "office")

// Toggle face swap on/off
session.isFaceSwapEnabled = false
```
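Putting the pieces above into a host view controller might look like the sketch below. Note that the stop() call is an assumption: the SDK shows start() but no teardown API here, so verify against the actual interface.

```swift
import UIKit
import AmigoFaceSwapSDK

// Illustrative host view controller for AmigoLiveSession.
final class SwapViewController: UIViewController {
    private var session: AmigoLiveSession!
    var latent: FaceLatent!

    override func viewDidLoad() {
        super.viewDidLoad()
        session = AmigoLiveSession(targetLatent: latent)
        view.addSubview(session.previewView)
        session.previewView.frame = view.bounds
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        session.start()
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        session.stop() // assumed counterpart to start(); check the SDK's API
    }
}
```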
Per-Frame API (WebRTC / Custom Pipelines)
For WebRTC, video playback, or any custom frame source:
```swift
// Call from a serial background queue (e.g., AVFoundation videoDataOutputQueue)
if let output = try AmigoFaceSwap.processFrame(pixelBuffer, using: latent) {
    // render output (CIImage) to your display layer
}
// Returns nil when no face detected — render the original frame
```
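For a concrete AVFoundation pipeline, the delegate wiring might look like this sketch. The AVFoundation calls are standard Apple API; only processFrame(_:using:) comes from the SDK, and render(_:) is a placeholder you would implement. Capture-session input setup is omitted for brevity.

```swift
import AVFoundation
import CoreImage
import AmigoFaceSwapSDK

// Sketch: feeding camera frames through processFrame(_:using:).
final class FrameProcessor: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let latent: FaceLatent
    let queue = DispatchQueue(label: "amigo.frames") // serial, as required

    init(latent: FaceLatent) { self.latent = latent }

    func attach(to session: AVCaptureSession) {
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(output) { session.addOutput(output) }
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        do {
            if let swapped = try AmigoFaceSwap.processFrame(pixelBuffer, using: latent) {
                render(swapped)                             // face found: swapped frame
            } else {
                render(CIImage(cvPixelBuffer: pixelBuffer)) // no face: original frame
            }
        } catch {
            print("processFrame failed: \(error)")
        }
    }

    private func render(_ image: CIImage) {
        // Hand the CIImage to your display layer or encoder here.
    }
}
```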
Static Image Swap
```swift
let result = try await AmigoFaceSwap.swapFace(
    in: inputImage,
    using: latent,
    lipMode: .innerLips
)
imageView.image = result
```
LipMode
Control lip blending behavior:
| Mode | Description |
|------|-------------|
| .none | Source face lips as-is |
| .outerLips | Blend outer lip region |
| .innerLips | Blend inner lip region (default, most natural) |
Cache Management
```swift
// Clear all cached models (forces re-download)
AmigoFaceSwap.clearModelCache()
```