
This is probably the biggest shift since point and click.
Over the last several months, OpenAI, and ChatGPT in particular, showed what is possible with a user interface built on a large language model, one that can answer questions and generate code or images. Beyond that, we can interact with and modify the output by having a conversation of sorts with the AI. That alone is remarkable, but think about how much more transformative it becomes when applied to the enterprise apps you use on a daily basis.
What if you could build an interface on top of your existing applications so that, instead of pointing and clicking, you could simply ask the computer to do a task for you and it would do it, drawing on the application's underlying model or your company's internal language model?
That would be a huge leap forward in computing. Until now, the biggest such leap came in 1984, when Apple introduced the GUI, beginning a slow shift away from the command line that went mainstream in the early 1990s with the release of Windows 3.1 and, later, Windows 95.
We’ve had other attempts at reinventing the user experience, such as voice interfaces like Siri and Alexa, and while they’ve made some inroads on the consumer side, they’re still something quite different from a computer producing work for us. They’re mostly about finding answers and, in some cases, executing simple commands.
It certainly hasn’t changed how we work, and that’s the true measure of whether a new approach to computing is truly transformative. If you could simply type a request like “help me hire a new employee” or “create a monthly profit and loss statement,” rather than explicitly instructing systems on what to do, that would be a fundamental leap forward in UX design.
This is what generative AI can do, but like anything else, it’s going to take some creativity to design these new interfaces in an elegant way, so they don’t feel like they’re bolted onto the old point-and-click interface. Large language models are also likely to require more concentration.