Exploring Text to Image with Adobe Firefly

  • 9 min read
  • 28 Jun 2023

About Adobe Firefly

We first introduced Adobe Firefly in the article Animating Adobe Firefly Content with Adobe Animate, published on June 14, 2023. Let's have a quick recap before moving ahead with a deeper look at Firefly itself.

Adobe Firefly is a new set of generative AI tools that can be accessed via https://firefly.adobe.com/ by anyone with an Adobe ID that isn't restricted by age or other factors. While Firefly is in beta, all generated images are watermarked with a Firefly badge in the lower left corner, and Adobe recommends that it be used only for personal image generation and not for commercial use. These restrictions will change, of course, once Firefly procedures become integrated within Adobe software such as Photoshop and Illustrator. We plan to explore how to use Firefly-driven workflows within creative desktop software in future articles.

exploring-text-to-image-with-adobe-firefly-img-0

Image 1: The Adobe Firefly website

A few things make Firefly unique as a prompt-based image-generation service:

  • The generative models were all trained on Adobe Stock content that Adobe already has the rights to. This differs from many other such AIs, whose models are trained on sets of content scraped from the web or otherwise acquired in ways that are less than ideal in terms of artists' rights.
  • Firefly is accessed through a web-based interface rather than a Discord bot or installed software. The user experience is pleasant to work with and provides a low barrier to entry.
  • Firefly goes well beyond prompt-based image generation. A number of additional generative workflows are already available, with more procedures in exploration.
  • As mentioned, Firefly as a web-based service may be a temporary channel as more of these procedures are tested and integrated within existing desktop software.

In the remainder of this article, we will focus on the text-to-image basics available in Firefly.

Using Text to Image within Firefly

We will begin our explorations of Adobe Firefly with the ability to generate an image from a detailed text prompt. This is the form of generative AI that most people have some experience with, and it will likely come easily to those who have used similar services such as MidJourney, Stable Diffusion, or others.

When you first enter the Firefly web experience, you will be presented with the various workflows available.

We want to locate the Text to Image module and click Generate to enter the experience.

exploring-text-to-image-with-adobe-firefly-img-1

Image 2: The Text to image module in Firefly

From there, you’ll be taken to a view that showcases images generated through this process, along with a text input that invites you to enter a prompt to “describe the image you want to generate”.

exploring-text-to-image-with-adobe-firefly-img-2

Image 3: The text-to-image prompt requests your input to begin

Enter the following simple prompt: “Castle in a field of flowers”.

Click the Generate button when complete. You’ll then be taken into the heart of the Firefly experience.  

exploring-text-to-image-with-adobe-firefly-img-3

Image 4: The initial set of images is generated from your prompt

When you enter the text-to-image module, you are presented with a set of four images generated from the given prompt. The prompt itself appears beneath the set of images, and along the right-hand side is a column of options that can be adjusted.

Exploring the Text-to-Image UI

While the initial set of images that Firefly has generated matches our simple prompt pretty closely, there are many additional controls we can manipulate that have a great influence on the generated images.

The first set of parameters you will see along the right-hand side of the screen is the Aspect ratio.

exploring-text-to-image-with-adobe-firefly-img-4

Image 5: The image set aspect ratio can be adjusted

There is a rather large set of options in a dropdown selection that determines the aspect ratio of the generated images. As we see above, the default is Square (1:1). Let’s change that to Landscape (4:3) by choosing that option from the dropdown.

Below that set of options, you will find Content type.

exploring-text-to-image-with-adobe-firefly-img-5

Image 6: Content type defines a stylistic bias for your images

The default is set to Art, but you also have Photo, Graphic, and None as alternative choices. Each of these applies a bias to how the image is generated: more photographic, more like a graphic, or more like traditional artwork. Choosing None removes all such bias and allows your prompt to carry the full weight of intention. Choose None before moving on, as we will change our prompt to be much more descriptive to better direct Firefly.

Beneath this, you will find the Styles section of options.

exploring-text-to-image-with-adobe-firefly-img-6
exploring-text-to-image-with-adobe-firefly-img-7

Image 7: Styles are predefined terms that can be activated as needed

Styles are essentially keywords that are appended to your prompt in order to influence the results in very specific ways. These style prompts function just as if you had written the term as part of your prompt itself. They exist as a predefined list of stylistic options, categorized by concepts such as Movements, Techniques, Materials, and more, that can be easily added to and removed from your prompt. As styles are added, they appear beneath the prompt and can be removed just as easily, allowing quick exploration of ideas.

At the very bottom of this area of the interface is a set of dropdown selections that includes options for Color and tone, Lighting, and Composition.

exploring-text-to-image-with-adobe-firefly-img-8

Image 8: You can also influence color and tone, lighting, and composition

Just as with the sections above, as you apply choices in these categories, they appear as keywords below your prompt. Choose Muted color from the Color and tone list. Additionally, apply the Golden hour option from the Lighting dropdown.

Remember, you can always add any of these descriptors to the text prompt itself, so don't feel limited to the choices presented through the UI.
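
Conceptually, each style, Color and tone, or Lighting selection behaves like an extra keyword joined onto your written prompt. The short Python sketch below is purely illustrative (the helper function is our own, not anything provided by Firefly) and simply models how those selections extend the effective prompt:

```python
# Illustrative only: Firefly combines these selections internally. This sketch
# just models how UI keywords conceptually extend the written prompt.
def effective_prompt(base_prompt: str, keywords: list[str]) -> str:
    """Join the written prompt with any UI-selected style keywords."""
    return ", ".join([base_prompt, *keywords]) if keywords else base_prompt

print(effective_prompt(
    "castle in a field of flowers",
    ["muted color", "golden hour"],  # our Color and tone and Lighting choices
))
# castle in a field of flowers, muted color, golden hour
```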

Using a More Detailed Text Prompt

Now that we've adjusted the aspect ratio and have either cleared or set a number of additional options, let's make our text prompt more descriptive in order to generate a more interesting image.

Change the current text prompt, which reads “castle in a field of flowers”, to the much more detailed “vampiric castle in a field of flowers with a forest in the distance and mountains against the red sky”.

Click the Generate button to have Firefly re-interpret our intent using the new prompt, presenting a much more detailed set of images derived from the prompt along with any keyword options we’ve included.

exploring-text-to-image-with-adobe-firefly-img-9

Image 9: The more detail you put into your prompt, the more control you have over the generated visuals
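
Everything we have set through the interface, the prompt, the aspect ratio, the content type, and any keywords, effectively travels together as a single generation request. The sketch below only models that idea in Python; it is not an Adobe API, and the class, field names, and default values are hypothetical:

```python
# Hypothetical model of a Firefly generation request. None of these names come
# from Adobe; they simply gather the options we chose in the web UI.
from dataclasses import dataclass, field

@dataclass
class GenerationRequest:
    prompt: str
    aspect_ratio: str = "1:1"        # default is Square; we switched to Landscape (4:3)
    content_type: str = "none"       # Art, Photo, Graphic, or None
    keywords: list[str] = field(default_factory=list)  # styles, color and tone, lighting

request = GenerationRequest(
    prompt=("vampiric castle in a field of flowers with a forest in the "
            "distance and mountains against the red sky"),
    aspect_ratio="4:3",
    keywords=["muted color", "golden hour"],
)
print(request)
```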

If you find one of the four new images to be acceptable, it can be easily downloaded to your computer.

exploring-text-to-image-with-adobe-firefly-img-10

Image 10: There are many options when hovering over an image – including download

Simply hover your mouse over the chosen image and a number of additional options appear. We will explore these additional options in much greater detail in a future article. Click the download icon to begin the download process for that image.

As Firefly begins preparing the image for download, a small overlay dialog appears.

exploring-text-to-image-with-adobe-firefly-img-11

Image 11: Content credentials are applied to the image as it is downloaded

Firefly applies metadata in the form of content credentials to any generated image, and the image download process begins.

What are content credentials? They are part of the Content Authenticity Initiative and help promote transparency in AI. This is how Adobe describes content credentials in the Firefly FAQ:

Content Credentials are sets of editing, history, and attribution details associated with content that can be included with that content at export or download. By providing extra context around how a piece of content was produced, they can help content producers get credit and help people viewing the content make more informed trust decisions about it. Content Credentials can be viewed by anyone when their respective content is published to a supporting website or inspected with dedicated tools. -- Adobe

Once the image is downloaded, it can be viewed and shared just like any other image file.

exploring-text-to-image-with-adobe-firefly-img-12

Image 12: The chosen image is downloaded and ready for use

Along with content credentials, a small badge is placed in the lower right of the image, visually identifying it as having been produced with Adobe Firefly (beta).
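
If you would like to inspect those content credentials yourself, one option is the open-source c2patool command-line utility from the Content Authenticity Initiative. The snippet below is a minimal sketch that assumes c2patool is installed and uses a hypothetical file name for the downloaded image:

```python
# Minimal sketch: assumes the open-source c2patool CLI is installed and that the
# downloaded Firefly image was saved under this hypothetical name.
import subprocess

report = subprocess.run(
    ["c2patool", "firefly-image.jpg"],  # prints the embedded C2PA manifest report
    capture_output=True,
    text=True,
    check=True,
)
print(report.stdout)  # editing, history, and attribution details attached at download
```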

There is a lot more that Firefly can do, and we will explore these additional options and procedures in coming articles.

Author Bio

Joseph is a Teaching Assistant Professor, Instructor of Technology, University of Colorado Boulder / Adobe Education Leader / Partner by Design

Joseph Labrecque is a creative developer, designer, and educator with nearly two decades of experience creating expressive web, desktop, and mobile solutions. He joined the University of Colorado Boulder College of Media, Communication, and Information as faculty with the Department of Advertising, Public Relations, and Media Design in Autumn 2019. His teaching focuses on creative software, digital workflows, user interaction, and design principles and concepts. Before joining the faculty at CU Boulder, he was associated with the University of Denver as adjunct faculty and as a senior interactive software engineer, user interface developer, and digital media designer.

Labrecque has authored a number of books and video course publications on design and development technologies, tools, and concepts through publishers which include LinkedIn Learning (Lynda.com), Peachpit Press, and Adobe. He has spoken at large design and technology conferences such as Adobe MAX and for a variety of smaller creative communities. He is also the founder of Fractured Vision Media, LLC, a digital media production studio and distribution vehicle for a variety of creative works.

Joseph is an Adobe Education Leader and member of Adobe Partners by Design. He holds a bachelor’s degree in communication from Worcester State University and a master’s degree in digital media studies from the University of Denver.

Author of the book: Mastering Adobe Animate 2023