“Virtual try-on” can mean two completely different products. One is an AR widget embedded on the storefront. The other is AI image generation in a web app, used by the marketing team. Confusing the two leads to wrong tool choices and wasted integrations. Here is the clean breakdown.
AR fitting room (the embedded widget)
How it works: the shopper opens the product page on a phone, taps a try-on button, the storefront opens the camera, and an AR layer renders the garment, eyewear, watch, or makeup onto the live camera feed. The shopper can move, see how it looks, and add to cart from inside the AR experience.
Strengths: real-time feedback for the shopper at the moment of purchase intent, particularly powerful for size and fit decisions. Best fit for high-AOV categories where size and shape matter (eyewear, watches with diameter variation, jewelry sizing).
Costs: serious integration work (SDK, vendor contract, on-storefront performance impact, mobile bug surface). Maintenance overhead. Ongoing licensing fee, often per impression or per session. Usually a 6–12 week implementation with engineering involvement.
AI try-on imagery (the marketing-team workflow)
How it works: the marketing team uploads a garment file and a model image into a web app. The tool renders the garment onto the model. The output is a static image (or batch of images) that the team uses for additional PDP shots, ads, lookbooks, email, and social.
Strengths: zero on-storefront engineering, immediate use across every marketing channel, cheap per-image cost, scales to any catalog volume. Not real-time for the shopper, but neither is any other PDP image.
Costs: per-image AI compute. On Apiway, a few credits per try-on shot. (Pricing recap: one credit equals one US cent.)
When each is the right tool
AR fitting room is the right move when the conversion-blocker is genuinely "will this fit?" in a way that a photo cannot resolve. Eyewear is the canonical case: how a frame sits on a face is highly individual, and a static photo does not answer the question.
AI try-on imagery is the right move for the much larger category of products where the conversion-blocker is "does this look good?" and the answer is best given by a well-shot image. Most clothing brands sit here, because clothing is sold in standard sizes (S/M/L/XL) rather than fitted to an individual shape, and the shopper already has a sizing intuition.
Where Apiway sits
Apiway delivers AI try-on imagery, not an AR widget. On Apiway, virtual try-on is a web-app workflow used by the marketing team to produce try-on shots in batch, in any aspect ratio (4:5 PDP, 9:16 Stories, 1:1 grid), and against any model image (a creator photo set from the marketplace, an upload, or a White Studio preset).
For a clothing brand evaluating “virtual try-on” as a project, Apiway covers the imagery side without an SDK. If the brand later decides to add an AR widget on top, the AI imagery still serves the rest of the funnel.
Cost shape, side by side
AR widget for a typical clothing brand: $5,000–$30,000 upfront integration plus $0.05–$0.50 per session in ongoing licensing.
AI try-on imagery on Apiway for the same brand: $0 setup, roughly $0.02–$0.05 per shot, no per-session licensing. Fifty shots a week is a $1.00–$2.50/week marketing line item.
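The two cost shapes above can be sketched in a few lines. This is an illustrative back-of-envelope calculation only; the setup fee, per-session rate, credits-per-shot, and volume figures are assumptions drawn from the ranges quoted in this article, not vendor quotes.

```python
def ar_widget_cost(sessions, setup=5_000, per_session=0.05):
    """AR widget: upfront integration plus per-session licensing.
    Defaults are the low end of the ranges quoted above."""
    return setup + sessions * per_session

def ai_imagery_cost(shots, credits_per_shot=3, credit_usd=0.01):
    """AI imagery: no setup; a few credits per shot, one credit = one US cent.
    credits_per_shot=3 is an assumed midpoint of 'a few'."""
    return shots * credits_per_shot * credit_usd

# A year of 10,000 AR sessions per month vs 50 marketing shots per week.
print(f"AR widget, year one:  ${ar_widget_cost(sessions=120_000):,.2f}")
print(f"AI imagery, year one: ${ai_imagery_cost(shots=50 * 52):,.2f}")
```

The shapes differ more than the totals: the AR line scales with shopper traffic, while the AI imagery line scales only with how many shots the marketing team produces.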
Where brands run both
Eyewear, watches, and jewelry brands at scale tend to run AR for the storefront try-on and AI for marketing imagery production. The two stacks serve different parts of the funnel and do not conflict.
For most clothing brands, the AI imagery side alone covers the question.
Try the imagery side first
Before committing to an AR widget, run a month of AI try-on imagery on the marketing side and watch the PDP and ad performance. Free accounts ship with 100 one-time credits — enough for the test.
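To size that test: under this article's pricing assumptions (one credit = $0.01, and "a few" credits per shot, here assumed to mean 2–5), the free allowance works out as follows.

```python
# How far 100 free one-time credits go, at the assumed 2-5 credits per shot.
free_credits = 100
for credits_per_shot in (2, 5):
    shots = free_credits // credits_per_shot
    print(f"At {credits_per_shot} credits/shot: {shots} free try-on shots")
```

Twenty to fifty shots is enough to cover a month of PDP and ad imagery for a small test.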
