With AI, creative teams can produce content in mere days, or even hours, that would otherwise take months. How? By eliminating the all-too-common grunt work bogging down design teams.
In April, for example, Adobe announced new Creative Cloud features that included groundbreaking Content-Aware Fill for video, powered by Adobe Sensei, the company’s AI and machine-learning technology. It also added new capabilities for titles and graphics, crafting animations, refining audio mixing, and organizing and preparing project media. In addition, it rolled out hundreds of performance improvements, including faster Mask Tracking for effects and color workflows, dual GPU optimization, and improved hardware acceleration for HEVC and H.264 formats in Premiere Pro.
Most of these features probably fly under the radar for the average user. But collectively they automate previously tedious processes so that almost anyone — experienced or novice — can get involved in motion design, animation, and special effects, fields once reserved for people who knew how to code and animate and had some grounding in visual and design techniques. In the near future, AI will enable even more professionals to jump into the game and rapidly create and circulate powerful, compelling, and memorable brand stories that matter.
Before long, AI will enable computers to look, listen, and learn from what a designer is trying to do and make recommendations along the way.
As designers assemble their creative stories, they’ll have options presented in their moments of need. In a sense, the designer will become a curator and the computer will serve as a creative assistant, proposing ideas. Designers working with a drawing and painting application like Adobe Fresco will be able to experiment more quickly and effectively: instead of hunting for the right word, color, shape, or sound, they’ll have machine-learning algorithms — drawing on their own past activity and that of other designers — help them zero in on the most likely options, saving time so their brains can stay in creative or strategic (rather than tactical) mode.
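As a toy illustration of how such a recommendation step might work (the data and function names here are invented, not any product's actual API), a tool could rank candidate colors by how similar they are to the colors a designer has favored in the past:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two RGB color vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def recommend_colors(past_choices, candidates, top_k=2):
    """Rank candidate colors by average similarity to the designer's history."""
    def score(color):
        return sum(cosine_similarity(color, past) for past in past_choices) / len(past_choices)
    return sorted(candidates, key=score, reverse=True)[:top_k]

# Hypothetical history: the designer has been favoring warm orange tones
history = [(220, 120, 60), (240, 150, 80)]
palette = [(30, 60, 200), (230, 130, 70), (10, 200, 40)]
print(recommend_colors(history, palette, top_k=1))  # the warm orange ranks first
```

Real assistants would learn far richer representations than raw RGB values, but the principle is the same: score the options against observed behavior and surface the best matches.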
While that technology will invariably show up in tools like Creative Cloud, it may not be as apparent as it is today. In the future, some industry observers think AI will be invisible. It will be ambient. It will basically serve as a mostly silent assistant enabling its human counterparts to ideate, experiment, and create.
Today, we can extract and use data from machine senses, such as our augmented reality (AR) cameras, voice-controlled speakers, biodata, and, eventually, haptics. Combining this “sensory” data with AI has produced a new computing platform that extends beyond our five senses. We call this spatial computing.
Spatial computing will become the next computing platform, like mobile computing and desktop computing before it. It will essentially operate in the spaces around us and use machine senses as input. Where we once used touchscreens and computer mice to input information, with spatial computing the machine will watch us, mimic our vocal patterns or gestures, and then feed that information into our onscreen creations, with AI acting as the nervous system to spark all of this.
In short, where many of us grew up thinking we had to adapt to the computer, in the future PCs will adapt to us, making our lives a whole lot easier and freeing us to imagine, create, and be more efficient.
During the creative process today, users typically have to meet design tools on the tools’ terms. They must know the terminology — masking, liquifying, or dodging — and how to apply each technique to their particular need.
But in the future, with AI and machine learning, those tools will become more content aware and serve up capabilities suited to the task at hand. For example, if you noticed a mouth was slightly misshapen and wanted to fix it, an older program might simply extend the pattern of the surrounding face into the adjusted area. An AI-powered tool, with access to data about hundreds or thousands of other mouths, could instead predict what should go where and recommend — or make — accurate adjustments.
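A minimal sketch of the simplest version of this idea — letting surrounding content flow into an edited region. This is a toy diffusion fill, not Adobe’s Content-Aware Fill algorithm, which is far more sophisticated:

```python
import numpy as np

def diffusion_fill(image, mask, iterations=50):
    """Toy inpainting: repeatedly replace masked pixels with the mean of
    their four neighbors, letting surrounding content 'grow' into the hole.
    Illustrative only; real content-aware fill uses learned priors."""
    img = image.astype(float).copy()
    img[mask] = img[~mask].mean()  # neutral starting guess for the hole
    for _ in range(iterations):
        up = np.roll(img, -1, axis=0)
        down = np.roll(img, 1, axis=0)
        left = np.roll(img, -1, axis=1)
        right = np.roll(img, 1, axis=1)
        img[mask] = (up + down + left + right)[mask] / 4.0
    return img

# A flat gray canvas with a bright square "blemish" marked for removal
canvas = np.full((8, 8), 100.0)
hole = np.zeros((8, 8), dtype=bool)
hole[3:5, 3:5] = True
canvas[hole] = 255.0  # the defect
restored = diffusion_fill(canvas, hole)  # hole blends back to the gray surround
```

Averaging neighbors recovers smooth regions; reconstructing textured content like a face is exactly where the learned, data-driven approach described above takes over.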
Similarly, as the technology progresses, design tools might enable users to restyle an object — say, from modern classical to the look of a Vincent van Gogh painting. Or an artist working with maple leaves may decide they’d look better as oak leaves and ask the program to make that adjustment. With AI’s ability to draw on vast troves of data, almost anything will be possible.
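One widely used technique behind this kind of restyling — neural style transfer, which may or may not be what any given product ships — represents “style” as correlations between the channels of a feature map, captured in a Gram matrix. A minimal sketch of that representation:

```python
import numpy as np

def gram_matrix(features):
    """Style representation from neural style transfer: channel-by-channel
    correlations of a feature map shaped (channels, height, width)."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

def style_loss(gram_a, gram_b):
    """How far apart two images are in 'style' space."""
    return float(np.mean((gram_a - gram_b) ** 2))

# Hypothetical feature maps for two images; in practice these come from a
# pretrained convolutional network, not random numbers
rng = np.random.default_rng(0)
maple = rng.normal(size=(4, 8, 8))
oak = rng.normal(size=(4, 8, 8))
print(style_loss(gram_matrix(maple), gram_matrix(oak)))    # > 0: styles differ
print(style_loss(gram_matrix(maple), gram_matrix(maple)))  # 0.0: identical style
```

A style-transfer system optimizes an output image to minimize this loss against the target style while preserving the original content — which is why a maple photo can take on the brushwork of a van Gogh.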
In the future, as AI becomes more integrated with design tools and is enhanced by capabilities like natural language processing (NLP), it may also be possible to simply utter a few commands and have elements of a creation change in seconds. Many of us are already accustomed to using voice commands with personal digital assistants, such as Alexa and Siri. Why not creative tools?
For instance, when a designer says, “brighten the sky,” the tool could draw on that designer’s past work to infer what they have in mind and make the change. Asked to “remove the dog” or “take out the people,” it could identify the pixels associated with those objects and cleanly wipe them from the screen.
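Before any pixels change, a voice interface has to map an utterance to an editing operation and a target. A toy sketch of that intent-parsing step (the verbs and operation names here are invented for illustration, not any assistant’s real vocabulary):

```python
import re

# Hypothetical mapping from spoken verbs to editing operations
ACTIONS = {
    "brighten": "increase_exposure",
    "remove": "inpaint_object",
    "take out": "inpaint_object",
}

def parse_command(utterance):
    """Split a spoken edit like 'brighten the sky' into (operation, target)."""
    text = utterance.lower().strip()
    for verb, operation in ACTIONS.items():
        match = re.match(rf"{verb}\s+(?:the\s+)?(.+)", text)
        if match:
            return operation, match.group(1)
    return None, None

print(parse_command("brighten the sky"))     # ('increase_exposure', 'sky')
print(parse_command("take out the people"))  # ('inpaint_object', 'people')
```

The target word (“sky,” “dog,” “people”) would then be handed to a segmentation model that finds the matching pixels — real systems use learned language models rather than regular expressions, but the pipeline shape is the same.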
These assistants will be subtly integrated into applications, only serving up suggestions that make sense in the moment and facilitating — rather than obstructing — creative processes.
The goal behind all of these trends is to remove barriers and enhance creativity. The ability to create should not be limited by your access to software classes or by the time you can invest in mastering a particular application. In the future, AI will level the playing field and make creative storytelling easier and more approachable for a wider array of designers and marketers.