Adobe has spent the last few years totally reinventing itself. Not only are its tools now sold on a cloud subscription basis, but it has also thoroughly embraced mobile devices.
Before reading on, read our explanation of its transformation here.
This transformation has borne fruit in a plethora of new apps. Some have a singular focus, like Photoshop Fix, which is available for iOS and Android, while others are entirely new tools available via the web and as desktop and mobile apps.
We look at three upcoming tools below:
XD is short for Experience Design. It's an all-in-one tool for designing, prototyping and sharing user experiences. These user experience designs are a crucial part of the early planning for website and mobile app design.
While there were plenty of ways to do this in the past, none were as efficient. For example, each page or screen of the design might be created in Photoshop, then printed out and assembled on a wall or whiteboard to show the user flow.
All of this is now done inside XD. You create the pages or screens with tools similar to those in Photoshop and Illustrator, link buttons to actions, and then display the whole user flow.
If you have a subscription to Adobe's Creative Cloud, then you can download a preview of XD right now. It's been in preview for about a year now, with updates each month.
Feedback to Adobe has been unprecedented, with 23,000 votes and over 100 user-requested features incorporated into the tool.
Last week they released a macOS version of the tool, which joins iOS and Android versions already available.
Later this year a universal Windows app will also be released. Universal means that it will work both on the full Windows 10 operating system, as well as the cut-down Windows Mobile version of the operating system. Adobe has been working closely with Microsoft on the release which will be one of the most complex universal apps ever released.
In recent updates to XD, support for Layers and Symbols was added. First made popular in Adobe's flagship design tools Photoshop CC and Illustrator CC, XD reimagines Layers and Symbols to address the specific needs of user experience (UX) designers.
Sneak peeks have already been shown of new capabilities that enable sharing and real-time collaboration, including co-editing of documents, visual versioning and Creative Cloud Libraries enhancements.
Read more about XD here.
Felix is the internal name for a new 2D and 3D compositing app in development right now.
Project Felix is a new breed of design tool that enables graphic designers to easily create high-quality, photo-realistic images by combining 2D and 3D assets.
It's ideally suited for projects such as product shot comps, scene visualisation and abstract design. Users have in-app access to 3D models, materials and lights from Adobe's online Stock service as well as the ability to customise specific properties like materials, lighting and camera angles.
Real-time rendering allows users to preview work while editing and before exporting to Photoshop to complete their design.
Behind the scenes, some very clever machine learning capabilities are being incorporated to speed up and simplify the 3D design experience. More specifically, they will help designers with 2D experience enter the 3D design realm.
Felix is just an internal code name, so its shipping name will likely change. The app will be available as a public beta later this year.
During the beta phase, Adobe will employ the same community-based feedback model it used with XD, with much of that feedback ending up in the final shipping product.
Read more about Project Felix here.
Imagine being able to edit audio speech files as easily as a text document. Well, that is exactly what Adobe's Project Vocal does.
You load up your audio files with a text transcript of them into the tool. You can then edit the transcript like a text file, and the audio will be edited automatically in the same way.
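The underlying idea can be sketched in a few lines. This is a toy illustration only, not Adobe's implementation: it assumes each transcript word has already been force-aligned to a start and end time in the audio, so removing a word from the transcript maps directly to cutting that word's span of samples. The `edit_by_transcript` function and the alignment data are invented for the example.

```python
# Toy sketch of transcript-driven audio editing (NOT Adobe's implementation).
# Assumes each word has been force-aligned to a (start, end) time range,
# so deleting a word from the transcript means cutting its audio span.

def edit_by_transcript(samples, alignment, words_to_remove, rate=16000):
    """Return audio with the aligned spans of the given words cut out.

    samples   : list of audio samples
    alignment : list of (word, start_sec, end_sec) tuples, in order
    """
    keep = []
    cursor = 0
    for word, start, end in alignment:
        s, e = int(start * rate), int(end * rate)
        if word in words_to_remove:
            keep.extend(samples[cursor:s])  # keep audio up to the word
            cursor = e                      # skip the word's span
    keep.extend(samples[cursor:])           # keep the remainder
    return keep

# Example: 3 "seconds" of fake audio at an absurdly low sample rate,
# so the toy data stays small.
rate = 10
audio = list(range(30))
alignment = [("hello", 0.0, 1.0), ("cruel", 1.0, 2.0), ("world", 2.0, 3.0)]

edited = edit_by_transcript(audio, alignment, {"cruel"}, rate=rate)
print(len(edited))  # 20 samples: the middle second is gone
```

Deleting "cruel" from the transcript silently removes its second of audio, which is the essence of editing speech "like a text document".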
It's seamless, and the result is incredible. You can't tell that the audio has been changed.
It needs 20 minutes of sample speech from the person. From that, the tool learns the speaker's phonetics and is then able to synthesise their voice.
It works at a phoneme level rather than a word level. As a result, it can even create words that the speaker never said in the ingested sample speech. At this stage it only supports English, but the underlying technologies could be applied to other languages.
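The following toy sketch shows why phoneme-level modelling makes unseen words possible. It is purely illustrative and not Adobe's actual method: the phoneme "clips" are stand-in strings (a real system would store audio features), and the pronunciation dictionary is invented for the example.

```python
# Toy illustration of phoneme-level synthesis (not Adobe's actual method).
# Because speech is stitched from phoneme-sized units rather than whole
# words, a word never spoken in the sample can still be assembled, as long
# as all of its phonemes occurred somewhere in the recording.

# Hypothetical phoneme "clips" learned from the sample speech.
phoneme_clips = {"K": "[k]", "AE": "[ae]", "T": "[t]", "B": "[b]"}

# Hypothetical pronunciation dictionary.
pronunciations = {
    "cat": ["K", "AE", "T"],
    "bat": ["B", "AE", "T"],  # never spoken, but all its phonemes are known
    "tab": ["T", "AE", "B"],
}

def synthesise(word):
    """Concatenate the speaker's phoneme clips to 'speak' a word."""
    return "".join(phoneme_clips[p] for p in pronunciations[word])

print(synthesise("bat"))  # [b][ae][t] — a word absent from the sample
```

A word-level system could only replay words it had heard; operating on phonemes is what lets the tool put new words in the speaker's voice.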
One can see amazing practical applications for this in the editing of audiobooks, podcasts, YouTube videos and voiceovers.
If something is wrong in a recording, the tool could save huge amounts of work re-recording.
The company is currently thinking through the ethical implications of this tool and believes it can prevent the tool from being misused.
Adobe Vocal is still in Adobe's labs, and there is no word on when this revolutionary technology will make it into a shipping product.
See the demonstration below: