
MLLM by Motiff: Shaping the future of UI design


Over the past year, AI has progressed at an unbelievable pace. One thing that remains vivid in my memory is when we tested the AI-generated UI results last year:
I was struck by how quickly AI had advanced in generating UI interfaces with such complexity and expressiveness. For a moment, I even wondered whether AI might someday replace the need for human UI designers. If you've tried Motiff's latest AI Generates UI feature, I believe you'll have the same aha moment during your first few attempts.
But as a professional design tool, Motiff faces a much tougher challenge in impressing expert UX designers with its AI-generated interfaces. It’s like how a first-time runner might feel proud finishing a 5km race in 30 minutes, while a marathoner would see it as just a warm-up.
To a non-professional, the generated UI might look great, but professional designers will spot issues; the more experienced they are, the more they will find.
Over the past few months, we've witnessed a surge of "idea to app" product launches. Most users aren't designers; many even generate code first before considering design, and some are already "relatively satisfied" with AI-generated interfaces. I believe this has opened up a new market opportunity tailored to non-professional designers. But what role can AI play for professionals in the field?
The answer has already emerged clearly, much like Cursor's approach to coding. While AI can't yet replace developers by generating all the code, it is already capable of sharing full code context and creating and modifying "code snippets".
AI presents dual opportunities: enabling non-professionals to generate ready-to-use deliverables, while also serving as an assistant for experts within professional tools. Motiff is inherently better positioned for the latter.
When Motiff AI first launched last year, we introduced seven AI features. Looking back after a year, we recognized two areas where we fell short:
Yet one exception stood out: the AI Magic Box (apologies, we still kept the AI prefix here). Unlike the others, it didn't need any prior knowledge of usage scenarios. You simply draw a box where you need help, and the AI gives you the desired result.
Restricted by the technical limitations of the time, this feature was only available as an experimental module in Motiff Lab, and it relied on existing design patterns that had been pre-defined within a design file.
Fortunately, rapid technological advancement allowed it to graduate from the lab. In Motiff AI 2.0, the Magic Box serves as the foundational interaction connecting you with your AI design partner. (This time, we've dropped the "AI" and simply named it Magic Box.)
We've added the Magic Box as the first group of tools in the toolbar, hoping it becomes as intuitive to use as the Move and Scale tools. Imagine your AI design partner as an assistant sitting right beside you. When you need help, just give a simple signal, the way you would naturally call over a colleague: "Hey, let's take a look at this design together!" And what would you do next? You'd likely point to a specific area on the screen, guiding your design partner to see and discuss that part with you.
This is also the next “intuitive” interaction after activating the Magic Box. You can click or drag to select layers on the screen, or simply draw a box around the area you want to “direct” its attention to. The AI will then share context for that specific region, collaborating with you in real time to refine the design.
And what can you do next? In theory, an AI design partner can assist you with anything. Going back to the idea of an assistant "sitting" next to you: it can help with iterating, editing, or creating new modules and elements.
When we first launched Motiff AI last year, we introduced the idea of PTF (Product Technology Fit) as a way to build AI product features. This framework highlights the real challenge in developing AI features: the difficulty lies not in coming up with innovative ideas, but in effectively integrating mature technologies to create highly usable features.
For example, "AI Layout" (a feature that helps designers add auto layouts with AI) was perfectly aligned with this philosophy. However, we gradually realized this mindset had become a constraint for us. Its limitation is that scenarios fully matching PTF turn out to be relatively sparse and low-frequency. For high-frequency scenarios, we always felt the "usability" wasn't quite strong enough.
Meanwhile, our definition of "usability" sometimes became excessively strict. "Usability" doesn't necessarily require absolute binary judgment (1 or 0), but rather exists on a spectrum. In the AI era, "usability" exhibits two distinctive characteristics:
Based on this new understanding, we propose a new framework for developing our AI products: Pre-Technology Fit.
This means we must anticipate the trajectory of rapid technological advancement (or operate on a kind of faith), creating features that are usable today and capable of evolving alongside technological progress. This philosophy doesn't require building a perfectly polished product from day one; instead, it embraces the principle of "good enough to be used". The initial solution might address only a fraction of users' existing requirements. What matters most is ensuring that the solution will eventually become "good enough" and that it will continuously improve the user experience as AI models advance, potentially even reaching the point of delivering value where "people don't know what they want until you show it to them".
In Motiff AI 2.0, we’ve introduced more Pre-Technology Fit capabilities. These features are already usable today and will iterate with technological advancements. For example:
Meanwhile, behind these capabilities, I believe context will play an increasingly crucial role. Your AI design partner will understand your business and design, removing the need to provide “lengthy backstories” every time you assign it a task, just like collaborating with a human. This represents another dimension of Pre-Technology Fit. AI assistants themselves are evolving at an unprecedented pace.
Finally, let’s introduce a new research direction at Motiff Lab: Shortcuts.
iPhone users are probably familiar with this term: Apple's native Shortcuts app lets you automate a series of tasks and app workflows to meet diverse long-tail needs or personal preferences. But if you've ever tried manually setting up a relatively complex shortcut, like "Create a GIF from selected photos", you'll quickly realize it's not easy. It requires deep familiarity with your phone's capabilities and even demands product-design thinking to connect everything together seamlessly.
This type of demand is similar to "plugins" in design tools. Different designers have diverse long-tail needs; for example, some may need to rotate a text graphic in a specific pattern. Developers create plugins to address such needs and share them with others through the community. But what is the essence of a plugin? At its core, it is simply the developer's logic for calling the design tool's APIs, sometimes integrating other tools or services. Plugins for design tools are great, but we have also reflected on some issues:
So, is there a simpler way to elegantly and intelligently address designers' long-tail needs? The advancement of LLM and agent technology has given a definitive answer: if the essence of a long-tail design need is performing a series of tedious tasks on the canvas, you can simply describe the task in natural language to your design partner and let it handle the work for you.
For example, you can give your design partner instructions like these using the Shortcuts feature of Motiff AI:
“Generate avatars for all selected rectangles.”
“Split this paragraph into multiple text layers by line breaks.”
“Select all similar buttons to create components and replace with instances.”
“Create a plugin that quickly copies/pastes all stroke and shadow properties from any object.”
Your AI design partner will try to understand your intent and automatically write programs that complete these tasks using Motiff's APIs. I believe shortcuts won't completely replace plugins, but they will make it much easier for users to address one-off, temporary needs.
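To make this concrete, here is a hypothetical sketch of what such a generated script might look like for the "split this paragraph into multiple text layers by line breaks" instruction above. The `TextNode` shape and `createText()` factory below are mocked stand-ins for a design tool's text-layer API, not Motiff's actual one; the point is only that the generated "shortcut" is ordinary code calling the tool's APIs.

```typescript
// Hypothetical sketch of an AI-generated shortcut. TextNode and createText()
// are mocked stand-ins for a design tool's text-layer API, not Motiff's real API.

interface TextNode {
  characters: string;
  x: number;
  y: number;
  height: number;
}

// Stand-in for the design tool's "create a text layer" call.
function createText(characters: string, x: number, y: number): TextNode {
  return { characters, x, y, height: 20 };
}

// Split one multi-line text layer into one layer per line,
// stacking the new layers vertically from the original position.
function splitByLineBreaks(source: TextNode): TextNode[] {
  return source.characters
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line, i) => createText(line, source.x, source.y + i * source.height));
}

// A three-line paragraph becomes three separate text layers.
const paragraph = createText("Sign in\nCreate account\nForgot password?", 100, 40);
const layers = splitByLineBreaks(paragraph);
```

In a real shortcut, `createText` would be the design tool's own API call, and the script would also copy font and style properties from the source layer; the splitting logic itself is exactly the kind of tedious canvas work a designer would rather describe than hand-code.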
The current Shortcuts feature still has room for improvement in stability and overall success rate, so we are keeping it as a lab feature for now.
On the other hand, I have higher expectations for the future of task-oriented commands. Shortcuts are just the most basic form of the broader direction of AI agents. In the future, designers will be able to input more complex prompts and let AI handle long-running tasks: breaking down tasks, planning workflows, and leveraging diverse toolkits (especially the tools designers already use daily).
Imagine that a year from today, we can assign the following tasks to our AI design partner:
“Conduct UX research on all mainstream checkout pages and recommend the best fit for our business.”
“Synthesize all design comments in Motiff from this week and propose revision plans.”
“Here is the PRD. Give me two design drafts first.”
Are we being too optimistic about the development of AI? Not at all. The future is already here; it's just not evenly distributed. We can't wait to see you next year!