New ideas for AI interface conventions


I've written before about why I think chat is a poor experience for AI products – in a nutshell: we should be able to make more assumptions on the user's behalf based on context, require less long-form typing into a blank page, and remove the inevitable frustrations caused by personification.

So this is the post where I'm going to assemble ideas, fragments, and examples of other forms AI interactions might take.

Expect a lot of wrong answers, but maybe one will spark something useful for you if you are working on this.


Comments

To me, writing is one of the more boring use cases for LLMs. It's obvious; it's a natural fit. But the interface today requires quite a lot of wholesale iteration, in a way that makes it less useful for professional work.

When I write something, I don't necessarily want a whole new redraft every time I change something. And I don't necessarily want to change paragraphs one at a time, without the context of the wider whole.

So: what if I could set an LLM to review my piece of writing, and it would leave comments just like a collaborator in Google Docs? "Maybe you could mention the example in your conclusion here, to foreshadow it more?" "This bit seems to repeat a point you made earlier."

And what's nice about comments as a metaphor is that they naturally invite a response.

AI comment: "This bit seems to repeat a point you made earlier."
Max reply: "I'm labouring the repetition with this because my audience is non-technical, so I want to make sure they understand it."
AI comment: "maybe you could use a metaphor to make the direct repetition less heavy-handed – something like X Y Z".
Max accepts suggestion
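
As a sketch of what that might look like under the hood – everything below is hypothetical, not any real product's API – a comment could be a small object anchored to a span of text, carrying its own thread and an optional concrete rewrite:

```typescript
// Hypothetical shapes for AI review comments. Illustrative only.

interface TextAnchor {
  start: number;  // character offsets of the annotated span
  end: number;
  quote: string;  // the quoted text, useful for re-anchoring after edits
}

interface ReviewComment {
  id: string;
  anchor: TextAnchor;
  author: "ai" | "human";
  body: string;                                         // e.g. "This repeats an earlier point."
  suggestion?: string;                                  // optional concrete rewrite
  replies: { author: "ai" | "human"; body: string }[];  // the back-and-forth thread
}

// Accepting a suggestion is just a splice over the anchored span.
function acceptSuggestion(doc: string, c: ReviewComment): string {
  if (c.suggestion === undefined) return doc;
  return doc.slice(0, c.anchor.start) + c.suggestion + doc.slice(c.anchor.end);
}
```

Anchoring to a quoted span, not just offsets, gives a comment some chance of surviving edits elsewhere in the document.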


Modifiers "chips"

[Image: Apple's Image Playground UI – a generated picture of a brain surrounded by classical instruments and music notation, with suggestion chips for more elements to add to the image.]

The Apple Intelligence demo shows a version of this with a very broad use case – image generation. But if you think about a use case with a clearer and more specific moment of intent, you should be able to offer clearer, more useful, more logical options.

For example, at work, the LLM may have suggested a certain action for a task. You might have a handful of natural choices for how you usually modify or respond to that action. Or there might be colleagues you often delegate it to. Or even workflows you often assign as next steps.
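
A rough sketch of how those chips might be modelled, with the colleague list standing in for whatever context the app actually holds (all names hypothetical):

```typescript
// Sketch of context-derived "chips". Instead of a blank chat box, the UI
// offers a few likely modifications to the action the AI just proposed.

interface Chip {
  label: string;                       // what the user sees on the chip
  apply: (action: string) => string;   // how it modifies the proposed action
}

function chipsFor(context: { colleagues: string[] }): Chip[] {
  return [
    { label: "Make it shorter", apply: a => `${a} (condensed)` },
    { label: "Do it tomorrow", apply: a => `${a}, deferred to tomorrow` },
    // Chips can come from the user's own habits, e.g. frequent delegates:
    ...context.colleagues.map(name => ({
      label: `Delegate to ${name}`,
      apply: (a: string) => `${a}, assigned to ${name}`,
    })),
  ];
}
```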

Regardless, intelligent suggestions like these are superior to an open-ended chat box.

This relates closely to another idea: spatial intelligence.


Spatial intelligence

If I’m managing AI automations across my business, maybe they sit on a giant Figma board, with each team’s running AI tasks stacked together, or grouped into zones and regions. Maybe the AI can infer from that proximity who is in charge of what.

If I drag a file or folder onto a stack, it can become a new card. When it’s done, it can be replaced with another card to represent how it was transformed.
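
A minimal sketch of that proximity inference, assuming cards and zones with simple canvas coordinates – hit-test first, nearest zone as a fallback:

```typescript
// Inferring ownership from spatial layout on an infinite canvas.
// All shapes here are illustrative.

interface Zone { name: string; owner: string; x: number; y: number; w: number; h: number }
interface Card { id: string; x: number; y: number }

function inferOwner(card: Card, zones: Zone[]): string | undefined {
  // First, an exact hit-test: is the card inside a zone?
  const hit = zones.find(z =>
    card.x >= z.x && card.x <= z.x + z.w &&
    card.y >= z.y && card.y <= z.y + z.h
  );
  if (hit) return hit.owner;

  // Otherwise, fall back to the nearest zone centre.
  let best: Zone | undefined;
  let bestDist = Infinity;
  for (const z of zones) {
    const dx = card.x - (z.x + z.w / 2);
    const dy = card.y - (z.y + z.h / 2);
    const d = Math.hypot(dx, dy);
    if (d < bestDist) { bestDist = d; best = z; }
  }
  return best?.owner;
}
```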

Maybe the individual experience is more like blackjack, with card outlines in front of the dealer and the user holding a stack of options.


Clipboard

File another one under "happy accidents". I'd love to be using an app, press a keyboard shortcut, and get a list of possible actions I might want to take, acting on whatever I'm focused on or other context-dependent help.
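
A minimal browser sketch of that – the Cmd/Ctrl+K trigger and the context shape are assumptions, and the console output stands in for a real palette UI:

```typescript
// A shortcut-summoned action list whose contents depend on what's focused.
// Hypothetical app-level shapes, not a real library.

type Context = { focusedText?: string; focusedFile?: string };

interface Action { label: string; run: (ctx: Context) => void }

function actionsFor(ctx: Context): Action[] {
  const actions: Action[] = [];
  if (ctx.focusedText) {
    actions.push({ label: "Summarise selection", run: c => console.log("summarise", c.focusedText) });
    actions.push({ label: "Rewrite in plain English", run: c => console.log("rewrite", c.focusedText) });
  }
  if (ctx.focusedFile) {
    actions.push({ label: "Extract action items", run: c => console.log("extract from", c.focusedFile) });
  }
  return actions;
}

window.addEventListener("keydown", e => {
  if ((e.metaKey || e.ctrlKey) && e.key === "k") {
    e.preventDefault();
    const ctx: Context = { focusedText: window.getSelection()?.toString() || undefined };
    console.table(actionsFor(ctx).map(a => a.label)); // render a palette here instead
  }
});
```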


3D interaction

Spline is a 3D design tool that now allows LLM integration on the back end. Generate responses by manoeuvring one object toward another, or by pressing buttons on objects to get results.
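
I haven't built against Spline's integration, so treat this as a sketch of the mechanic only: a distance check on every drag update, with plain vector maths and the rendering layer assumed.

```typescript
// Drag-to-interact in a 3D scene: when the user drags one object within
// range of another, fire a prompt combining the two. Illustrative shapes.

interface SceneObject { name: string; position: { x: number; y: number; z: number } }

const TRIGGER_DISTANCE = 1.5; // scene units; arbitrary threshold

function distance(a: SceneObject, b: SceneObject): number {
  const dx = a.position.x - b.position.x;
  const dy = a.position.y - b.position.y;
  const dz = a.position.z - b.position.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Call this on every drag update.
function onDrag(dragged: SceneObject, others: SceneObject[]) {
  for (const target of others) {
    if (distance(dragged, target) < TRIGGER_DISTANCE) {
      // Hypothetical LLM call: "what happens when X meets Y?"
      console.log(`prompt: combine ${dragged.name} with ${target.name}`);
    }
  }
}
```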