Former Snap design lead debuts Shader, an AR creation tool that uses AI to generate custom effects

Shader App

Image Credits: Shader (Image has been modified)

Shader aims to challenge industry heavyweights like Snap’s AR development platform Lens Studio and TikTok’s Effect House with a no-code AR creation tool that generates custom effects, 2D masks and lenses in minutes, instead of the hours of engineering and design work a single AR filter typically requires. The startup built its platform on top of the open source Stable Diffusion model, letting users enter text-based prompts to generate their creations.
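Shader hasn’t published technical details, but as a rough sketch of the general approach, here is how a text prompt can drive the open source Stable Diffusion model via Hugging Face’s diffusers library. The model ID, prompt and parameters below are illustrative assumptions, not Shader’s actual pipeline.

```python
# Minimal sketch: turning a text prompt into an effect image with Stable Diffusion.
# Illustrative only; Shader's own pipeline is not public.
import torch
from diffusers import StableDiffusionPipeline

# Load open-source Stable Diffusion weights (model ID is an example choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# A user-style prompt describing the desired mask or filter.
prompt = "stylized fox mask, face paint, high detail"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]

# Save the generated texture; an AR engine would then map it onto face geometry.
image.save("fox_mask.png")
```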

The company was founded by Darya Sesitskaya, a former Snap design lead responsible for designing Snapchat’s AR camera, Lens Studio, Lens Cloud and more. She also worked at Wanna (formerly Wannaby), an AR technology company known for its virtual try-ons for sneakers, clothing and watches. Shader’s team comprises former Snap AR and Blizzard engineers.

The company’s flagship product, which launched in beta on iOS in December 2023, is a real-time AI camera app: users take a photo of themselves, the app scans their face, and they enter a prompt for the AI to generate a personalized AR effect. Users can then record a video of themselves wearing the mask or filter.
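Shader’s implementation isn’t public, but conceptually a selfie plus a prompt can be combined through an image-to-image pass so the output stays aligned to the user’s features. The sketch below uses the diffusers library with a hypothetical file path and parameters to show the idea.

```python
# Sketch of an image-to-image pass: condition generation on the user's photo so the
# result follows their facial features. Illustrative only, not Shader's code.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The selfie captured in the app (path is hypothetical).
selfie = Image.open("selfie.jpg").convert("RGB").resize((512, 512))

prompt = "portrait wearing an ornate golden masquerade mask"
# Lower strength keeps more of the original face; higher strength follows the prompt more.
result = pipe(prompt=prompt, image=selfie, strength=0.55, guidance_scale=7.0).images[0]
result.save("masked_selfie.png")
```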

Image Credits: Shader

During our testing, we noticed the results were on the simpler side, though there were no bugs or glitches. Despite the simplicity, there’s potential for the app to become more than just a fun tool to play around with and show your friends. Shader plans to launch a premium subscription option that provides access to higher-quality features, Sesitskaya tells TechCrunch.

The app’s cross-platform sharing lets users post creations to Instagram, TikTok and Snapchat, giving creators a way to show off exclusive filters they’ve dreamed up. Shader will eventually launch an in-app social feed where users can post their templates, allowing other creators to like, comment and try out the effects.

Following in the footsteps of other AI editing apps, Shader also lets users upload photos from their camera roll and customize them using prompts, or select from premade templates, including a fox mask, a Yoda-inspired mask and an array of filters.

Since launching its beta version, Shader has garnered approximately 3,000 downloads. An Android version is coming soon.

Image Credits: Shader

Most recently, the company launched a web version that scans faces using the computer’s webcam. There’s also a text box for creators to enter prompts. However, it doesn’t look like any premade templates are available on the web.

Shader is also optimizing its iOS app for the newly released Vision Pro, taking advantage of Apple’s digital Persona technology (shown in the image above). In addition, Shader offers an API and plug-in that companies can use to implement the technology in their own products.

In terms of funding, Shader raised $580,000 from Betaworks, Greycroft, Differential Ventures, Mozilla Ventures and On Deck. While it’s a modestly sized round, the investment suggests there’s still demand for AR creation tools. The capital is going toward developing new features, such as speech-to-text prompt creation and integrations with Twitch, Discord and Zoom that will let users wear AR filters live. The funding will also help grow the company’s marketing team.

“Our mission is to make AR/AI effects accessible to everyone, empowering users to easily create personalized content. Shader is expanding into various social face filters, including background, clothes and hair, prioritizing user-friendly design principles to unlock new possibilities for the 400 million creators on social media. In the near future, we also plan to implement the ability to create voice-to-AR effects and 3D background replacements,” says Sesitskaya.

The company shared a short demo of the speech-to-text technology on its YouTube channel.
