Adobe Firefly is a new set of generative AI tools that can be accessed at https://firefly.adobe.com/ by anyone with an Adobe ID. To learn more about Firefly, have a look at the Firefly FAQ.
Image 1: Adobe Firefly
One of the aspects of Firefly that sets it apart from other generative AI tools is Adobe’s exploration of procedures that go beyond prompt-based image generation. A good example of this is the Text Effects module in Firefly.
Text effects are also prompt-based, but they use a scaffold determined by font choice and character set to constrain the generated styles to those letterforms. The styles themselves are based on user prompts, although there are other options to consider as well.
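Adobe has not published how this scaffolding works internally, but the core idea is easy to picture: the chosen characters are rendered into a letterform mask that tells the model where it is allowed to paint. Here is a minimal, purely conceptual sketch using Python and Pillow; the font file path is an assumption, and none of this reflects Firefly’s actual implementation.

```python
from PIL import Image, ImageDraw, ImageFont

def letterform_scaffold(text, font_path="SourceSans3-Bold.ttf", size=256):
    # Render the characters into a black-and-white mask: white pixels mark
    # the letterform "scaffold" that generated styles would be confined to.
    # The font file path is an assumption; substitute any TTF you have.
    font = ImageFont.truetype(font_path, size)
    left, top, right, bottom = font.getbbox(text)
    width, height = (right - left) + 40, (bottom - top) + 40
    mask = Image.new("L", (width, height), 0)  # black = outside the scaffold
    ImageDraw.Draw(mask).text((20 - left, 20 - top), text, fill=255, font=font)
    return mask

scaffold = letterform_scaffold("Packt")
scaffold.save("scaffold.png")
```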
In the remainder of this article, we will focus on the Text Effects workflow available in Firefly.
As mentioned in the introduction, we will continue our explorations of Adobe Firefly with the ability to generate stylized text effects from a text prompt. This is a bit different from the procedures that users may already be familiar with from other generative AI tools, yet it retains many similarities with those processes.
When you first enter the Firefly web experience, you will be presented with the various workflows available.
Image 2: Firefly modules can be either active and ready to work with or in exploration
These appear as UI cards, each presenting a sample image, the name of the procedure, a description, and either a button to begin the process or a label stating that it is “in exploration”. Modules in exploration are not yet available to general users.
We want to locate the Text Effects module and click Generate to enter the experience.
Image 3: The Text effects module in Firefly
From there, you’ll be taken to a view that showcases text styles generated through this process. At the bottom of this view is a unified set of inputs: one asks for the text string you want to stylize, and the other invites you to enter a prompt to “describe the text effects you want to generate”.
Image 4: The text-to-image prompt requests your input to begin
In the first input, which reads Enter Text, enter the characters “Packt”. In the second input, which requests a prompt, enter the following: “futuristic circuitry and neon lighting violet”.
Click the Generate button when complete. You’ll then be taken into the Firefly text effects experience.
Image 5: The initial set of four text effect variants is generated from your prompt, with the entered characters used as a scaffold
Once inside the text effects module, the main area presents a preview of your input text overlaid with styles generated from your descriptive prompt. Below this is a set of four variants, and below that are the text inputs containing your text characters and the prompt itself.
To the right of this are your controls. These are presented in a user-friendly way and allow you to make certain alterations to your text effects. We’ll explore these properties next to see how they can impact our text effect style.
Along the right-hand side of the interface are properties that can be adjusted. The first section here includes a set of Sample prompts to try out.
Image 6: A set of sample prompts with thumbnail displays
Clicking on any of these sample thumbnails will execute the prompt attributed to it, overriding your original prompt. This can be useful for those new to prompt-building within Firefly, both to spark ideas for their own prompts and to see the capabilities of the generative AI firsthand. Choosing the View All option will display even more prompts.
Below the sample prompts, we have a very important adjustment that can be made in the form of Text effects fit.
Image 7: Text effects fit determines how tight or loose the visuals are bound to the scaffold
This section provides three options to choose from: Tight, Medium, or Loose. The default setting is Medium. Choosing Tight reins in the little visual tendrils that extend beyond the characters, while choosing Loose sets them free, generating even more detail beyond the bounds of the scaffold.
Let’s look at some examples with our current scaffold and prompt:
Image 8: Tight - will keep everything bound within the scaffold of the chosen characters
Image 9: Medium - is the default and includes some additional visuals extending from the scaffold
Image 10: Loose - creates many visuals beyond the bounds of the scaffold
One of the nice things about these options is that you can easily switch between them to compare the resulting images and make an informed decision.
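Firefly does not expose how the fit setting is implemented, but one way to reason about it is as a dilation of the letterform scaffold: the looser the fit, the further the mask extends past the glyphs, and the more room the effect has to spill outward. Here is a speculative sketch that builds on the earlier Pillow mask; the radii are invented values for illustration, not Firefly’s actual parameters.

```python
from PIL import Image, ImageFilter

# Invented radii for illustration: think of Tight/Medium/Loose as
# progressively dilating the scaffold mask. MaxFilter sizes must be odd.
FIT_RADIUS = {"tight": 1, "medium": 9, "loose": 25}

def fit_mask(scaffold, fit="medium"):
    size = FIT_RADIUS[fit]
    if size <= 1:  # Tight: keep the mask as-is
        return scaffold.copy()
    # MaxFilter grows the white (in-scaffold) region outward, loosening
    # the boundary the generated visuals are held to.
    return scaffold.filter(ImageFilter.MaxFilter(size))

scaffold = Image.open("scaffold.png")  # the mask from the earlier sketch
for fit in FIT_RADIUS:
    fit_mask(scaffold, fit).save(f"scaffold-{fit}.png")
```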
Next, we have the ability to choose a Font for the scaffold. There is currently a very limited set of fonts to use in Firefly. As with the sample prompts, choosing the View All option will display even more fonts.
Image 11: The font selection properties
When you choose a new font, it will regenerate the imagery in the main area of the Firefly interface as the scaffold must be rebuilt.
I’ve chosen Source Sans 3 as the new typeface. The visual is automatically regenerated based on the new scaffold created from the character structure.
Image 12: A new font is applied to our text and the effect is regenerated
The final section along the right-hand side of the interface is for Color choices. We have options for Background Color and for Text Color.
Image 13: Color choices are the final properties section
There is a very limited set of color swatches to choose from. The most important choice is whether you want the background of the generated image to be transparent or not.
Okay, we’ll now make final adjustments to the generated image and download the text effect to our local computer. The first thing to choose is a variant, found beneath the main image preview. A set of four thumbnail previews is available to choose from.
Image 14: Selecting from the presented variants
Clicking on each will change the preview above to reveal that variant as applied to your text effect.
For instance, if I choose option #3 from the image above, the following changes would result:
Image 15: A variant is selected and the image preview changes to match
Of course, if you do not like any of the alternatives, you can always choose the initial thumbnail to revert.
Once you have made the choice of variant, you can download the text effect as an image file to your local file system for use elsewhere. Hover over the large preview image and an options overlay appears.
Image 16: A number of options appear in the hover overlay, including the download option
We will explore these additional options in greater detail in a future article. Click the download icon to begin the download process for that image.
As Firefly begins preparing the image for download, a small overlay dialog appears.
Image 17: Content credentials are applied to the image as it is downloaded
Firefly applies metadata in the form of content credentials to any generated image, and the image download process begins.
What are content credentials? They are part of the Content Authenticity Initiative, an effort to promote transparency in generative AI. This is how Adobe describes content credentials in the Firefly FAQ:
Content Credentials are sets of editing, history, and attribution details associated with content that can be included with that content at export or download. By providing extra context around how a piece of content was produced, they can help content producers get credit and help people viewing the content make more informed trust decisions about it. Content Credentials can be viewed by anyone when their respective content is published to a supporting website or inspected with dedicated tools. -- Adobe
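If you would like to inspect these credentials on a downloaded file yourself, the Content Authenticity Initiative publishes an open-source command-line tool, c2patool, that reads the embedded C2PA manifest. Here is a minimal sketch that invokes it from Python; it assumes c2patool is installed on your PATH, and the filename is a placeholder for your own download.

```python
import subprocess

# Assumes the open-source c2patool CLI from the Content Authenticity
# Initiative (https://github.com/contentauth/c2patool) is on your PATH.
# The filename is a placeholder for your downloaded Firefly image.
result = subprocess.run(
    ["c2patool", "packt-text-effect.png"],
    capture_output=True, text=True, check=True,
)
# By default, c2patool prints the embedded C2PA manifest (issuer, edit
# actions, and AI-tool attribution) as JSON on standard output.
print(result.stdout)
```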
Once the image is downloaded, it can be viewed and shared just like any other image file.
Image 18: The text effect image is downloaded and ready for use
Along with content credentials, a small badge is placed in the lower right of the image, visually identifying it as having been produced with Adobe Firefly (beta).
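Because the background can be exported as transparent (see the Color choices section earlier), the downloaded PNG composites cleanly over your own artwork. Here is a small Pillow sketch; the filenames are placeholders for your own files.

```python
from PIL import Image

# Placeholder filenames: a Firefly text effect downloaded with a
# transparent background, composited over a backdrop of your own.
effect = Image.open("packt-text-effect.png").convert("RGBA")
backdrop = Image.open("backdrop.jpg").convert("RGBA").resize(effect.size)

# alpha_composite stacks the transparent text effect over the backdrop.
combined = Image.alpha_composite(backdrop, effect)
combined.convert("RGB").save("packt-composited.jpg")
```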
There is a lot more Firefly can do, and we will continue this series in the coming weeks. Keep an eye out for an Adobe Firefly deep dive exploring additional options for your generative AI creations!
Joseph is a Teaching Assistant Professor, Instructor of Technology, University of Colorado Boulder / Adobe Education Leader / Partner by Design
Joseph Labrecque is a creative developer, designer, and educator with nearly two decades of experience creating expressive web, desktop, and mobile solutions. He joined the University of Colorado Boulder College of Media, Communication, and Information as faculty with the Department of Advertising, Public Relations, and Media Design in Autumn 2019. His teaching focuses on creative software, digital workflows, user interaction, and design principles and concepts. Before joining the faculty at CU Boulder, he was associated with the University of Denver as adjunct faculty and as a senior interactive software engineer, user interface developer, and digital media designer.
Labrecque has authored a number of books and video course publications on design and development technologies, tools, and concepts through publishers which include LinkedIn Learning (Lynda.com), Peachpit Press, and Adobe. He has spoken at large design and technology conferences such as Adobe MAX and for a variety of smaller creative communities. He is also the founder of Fractured Vision Media, LLC; a digital media production studio and distribution vehicle for a variety of creative works.
Joseph is an Adobe Education Leader and member of Adobe Partners by Design. He holds a bachelor’s degree in communication from Worcester State University and a master’s degree in digital media studies from the University of Denver.
Author of the book: Mastering Adobe Animate 2023