Human computing

What's the relationship between Vision Pro and LLM text interfaces?


It can feel like there's a contradiction between two of the biggest directions for computing today – the Apple Vision Pro and LLM/generative AI text prompts.

But I think the common characteristic is they each use one of our most basic human faculties to create a new dynamic.

Vision Pro is about placing computing in our real world. Not limited within screens but located seamlessly around us. A monitor is like a sheet of paper. Augmented reality is like being surrounded by the interface itself.

Meanwhile, language-led products built on generative AI use the most powerful tool we have ever created for communication and understanding: language itself. Instead of having to learn the computer's interface, we have made the computer learn ours.

This got me thinking about another aspect of AR/VR – the sense of touch. The interface still has to take a recognisable "app" form because we don't have haptic/tactile tech in any reasonable form to simulate how it feels to pick things up and put them down.

Then I realised: Vision Pro has solved this with selection by finger tap. You provide your own tactile feedback by tapping one finger against another.

Through this lens, it becomes obvious that one of the reasons touchscreens work is that you have something to push against. The finger tap is an ingenious way to solve that challenge in AR – but more importantly, it recognises the value of our human senses to this next phase of computing.

This can get lost in UI design. It's easy to get sucked into the abstract when you're constantly behind the membrane of keyboard and mouse. Things get flashier; whizzy animations push a serotonin button somewhere in the back of your mind. But they aren't truly more ergonomic.

It's that idea – ergonomics over design – that feels like a crucial part of the next phase of computing. Both physical and mental.

The true abstract interface is language. The true physical interface should be AR (made tactile through hand touches).

If these new directions fulfil their potential, I think we'll see computing become more personal than ever before.