
Adobe Firefly Integrations in Illustrator and Photoshop

  • 12 min read
  • 23 Aug 2023


Adobe Firefly Overview

Adobe Firefly is a new set of generative AI tools that can be accessed via https://firefly.adobe.com/ by anyone with an Adobe ID. To learn more about Firefly, have a look at their FAQ:


Image 1: Adobe Firefly

For more information on using Firefly to generate images, text effects, and more, have a look at the previous articles in this series.

This article will focus on the Firefly integrations within the release version of Adobe Illustrator and the public beta version of Adobe Photoshop.

Firefly in Adobe Illustrator

Version 27.7 is the most current release of Illustrator at the time of writing, and this version contains Firefly integrations in the form of Generative Recolor (Beta).

To access this, design any vector artwork within Illustrator or open existing artwork to get started. I’m using the cat.ai file that was used to generate the cat.svg file from the Generative Recolor with Adobe Firefly article:


Image 2: The cat vector artwork with original colors

1.     Select the artwork you would like to recolor. Generative Recolor only works on selected artwork.

2.     Look to the Properties panel and locate the Quick Actions at the bottom of the panel. Click the Recolor quick action:


Image 3: Choosing the Recolor Quick action

3.     By default, the Recolor overlay will open with the Recolor tab active. Switch to the Generative Recolor (Beta) tab to activate it instead:


Image 4: The Generative Recolor (Beta) view

4.     You are invited to enter a prompt. I’ve written “northern lights green and vivid neon” to describe the colors I’d like to see. There are also sample prompts you can click on below the prompt input box.

5.     Click the Generate button once a prompt has been entered:


Image 5: Selecting a Recolor variant

A set of recolor variants is presented within the overlay. Clicking on any of these will recolor your existing artwork according to the variant look:


Image 6: Adding a specific color swatch

If you would like to provide even more guidance, you can modify the prompt and even add specific color swatches you’d like to see included in the recolored artwork.
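
If you like to automate setup steps, the selection requirement from step 1 can also be handled with a short script. Below is a minimal ExtendScript (Adobe’s JavaScript scripting dialect) sketch, run via File > Scripts > Other Script…, that selects every unlocked, visible page item in the open document. Generative Recolor (Beta) itself does not appear to expose a documented scripting hook, so the Recolor quick action is still triggered from the Properties panel as described above.

  // Select every unlocked, visible page item (roughly equivalent to Select > All).
  var doc = app.activeDocument;
  for (var i = 0; i < doc.pageItems.length; i++) {
      var item = doc.pageItems[i];
      if (!item.locked && !item.hidden) {
          item.selected = true;
      }
  }
  app.redraw(); // refresh the view so the selection highlights appear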

That’s it for Illustrator – very straightforward and easy to use!

Firefly in Adobe Photoshop (beta)

Generative Fill through Firefly is also making its way into Photoshop. While Illustrator includes Firefly as part of the current release version of the software, albeit with a beta label on the feature, things are a bit different with Photoshop:


Image 7: Generative Fill is only available in the Photoshop public beta

To make use of Firefly within Photoshop, the current release version will not cut it. You will need to install the public beta from the Creative Cloud Desktop application in order to access these features.

With that in mind, let’s use Generative Fill in the Photoshop public beta to expand a photograph beyond its bounds and add in additional objects.

1.     First, open a photograph in the Photoshop public beta. I’m using the Poe.jpg photograph that we previously used in the articles Generative Fill with Adobe Firefly (Parts I & II):


Image 8: The original photograph in Photoshop

2.     With the photograph open, we’ll add some extra space to the canvas to generate additional content and expand the image beyond its bounds. Summon the Canvas Size dialog by choosing Image > Canvas Size… from the application menu.

3.     Change both the width and height values to 200 Percent:


Image 9: Expanding the size of the canvas

4.     Click the OK button to close the dialog and apply the change.

The original canvas is expanded to 200 percent of its original size while the image itself remains exactly the same:


Image 10: The photograph with an expanded canvas
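
If you prefer scripting repetitive steps, the canvas expansion from steps 2 through 4 can also be performed with a few lines of ExtendScript (run from File > Scripts > Browse… in the Photoshop public beta). This is only a minimal sketch and assumes the photograph is already the active document:

  // Double the canvas in both dimensions while keeping the photo centered,
  // mirroring Image > Canvas Size… set to 200 Percent with a center anchor.
  var doc = app.activeDocument;
  var newW = UnitValue(doc.width.as("px") * 2, "px");         // 200 percent of the current width
  var newH = UnitValue(doc.height.as("px") * 2, "px");        // 200 percent of the current height
  doc.resizeCanvas(newW, newH, AnchorPosition.MIDDLECENTER);  // original pixels stay in the middle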

Generative Fill, when used in this manner to expand an image, works best by selecting portions to expand bit by bit rather than all the expanded areas at once. It is also beneficial to select parts of the original image you want to expand from. This feeds and directs the Firefly AI.

5.     Using the Rectangular Marquee tool, make such a selection across either the top, bottom, left, or right portions of the document:


Image 11: Making a selection for Generative Fill
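
The same sort of band selection can be scripted if you want to experiment with different amounts of overlap into the original pixels. A minimal ExtendScript sketch, assuming the centered 200 percent canvas expansion from earlier and an arbitrary 100-pixel overlap:

  // Select the empty strip across the top of the expanded canvas plus a sliver
  // of the original photograph so Firefly has existing pixels to work from.
  app.preferences.rulerUnits = Units.PIXELS;  // treat the bare numbers below as pixels
  var doc = app.activeDocument;
  var w = doc.width.as("px");
  var h = doc.height.as("px");
  var overlap = 100;                 // how far the selection reaches into the original image, in pixels
  var bandBottom = h / 4 + overlap;  // after a centered 200 percent expansion, the photo starts 1/4 of the way down
  doc.selection.select(
      [[0, 0], [w, 0], [w, bandBottom], [0, bandBottom]],
      SelectionType.REPLACE, 0, false
  );
  // Generative Fill itself is then invoked from the contextual taskbar, as in step 6 below.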

6.     With a selection established, click Generative Fill within the contextual toolbar:


Image 12: Leaving the prompt blank allows Photoshop to make all the decisions

7.     The contextual toolbar will now display a text input where you can enter a prompt to guide the process. However, in this case, we want to simply expand the image based upon the original pixels selected – so we will leave this blank with no prompt whatsoever. Click Generate to continue.

8.     The AI processes the image and displays a set of variants to choose from within the Properties panel. Click the one that comes closest to the imagery you are looking to produce, and that variant will be applied to the canvas:


Image 13: Choosing a Generative Fill variant


Note that if you look to the Layers panel, you will find a new layer type has been created and added to the document layer stack:


Image 14: Generative Layers are a new layer type in Photoshop

The Generative Layer retains both the given prompt and the variants so that you can continue to make changes and adjustments as needed, even after this point in your workflow.

The resulting expansion of the original image as performed by Generative Fill can be very convincing! As mentioned before, this often works best by performing fills in a piece-by-piece patchwork manner:


Image 15: The photograph with a variant applied across the selection

Continue selecting portions of the image using the Rectangular Marquee tool (or any selection tool, really) and generate new content the same way we have done so already, without supplying any text prompt to the AI:


Image 16: The photograph with all expanded areas filled via generative AI

Eventually, you will complete the expansion of the original image and produce a very convincing deception.

Of course, you can also guide the AI with actual text prompts. Let’s add in an object to the image as a demonstration.

1.     Using the Lasso tool (or, again, any selection tool), make a selection across the image where a standing lamp of some sort might stand:


Image 17: Making an additional selection

2.     With a selection established, click Generative Fill within the contextual toolbar.

3.     Type in a prompt that describes the object you want to generate. I will use the prompt “tall rustic wooden and metal lamp”.

4.     Click the Generate button to process the Generative Fill request:


Image 18: A lamp is generated from our selection and text prompt

A set of generated lamp variants is presented within the Properties panel. Choose the one you like the most and it will be applied within the image.

You will want to be careful with how many Generative Layers are produced as you work on any single document. Keep an eye on the Layers panel as you work:


Image 19: Each Generative Fill process produces a new layer

Each time you use Generative Fill within Photoshop, a new Generative Layer is produced.

Depending upon the resources and capabilities of your computer, this might become burdensome as the document grows more and more complex. You can always flatten your layers to a single pixel layer to free up additional resources if this occurs.
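
A quick ExtendScript sketch along these lines can help you keep an eye on how large the layer stack has grown; the threshold of 20 layers below is arbitrary, and remember that flattening also discards the prompts and variants stored on the Generative Layers:

  // Report how many top-level layers and groups the document has accumulated,
  // and flatten once the stack grows beyond an arbitrary threshold.
  var doc = app.activeDocument;
  $.writeln("Layers in the stack: " + doc.layers.length);  // each Generative Fill adds a new layer
  if (doc.layers.length > 20) {
      doc.flatten();  // collapse everything down to a single pixel layer
  }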

That concludes our overview of Generative Fill in the Photoshop public beta!

Ethical Concerns with Generative AI

I want to make one additional note before concluding this series and that has to do with the ethics of generative AI. This concern goes beyond Adobe Firefly specifically – as it could be argued that Firefly is the least problematic and most ethical implementation of generative AI that is available today.

See https://firefly.adobe.com/faq for additional details on steps Adobe has taken to ensure responsible AI through their use of Adobe Stock content to train their models, through the use of Content Credentials, and more...

Like all our AI capabilities, Firefly is developed and deployed around our AI ethics principles of accountability, responsibility, and transparency.

Data collection: We train our model by collecting diverse image datasets, which have been curated and preprocessed to mitigate against harmful or biased content. We also recognize and respect artists’ ownership and intellectual property rights. This helps us build datasets that are diverse, ethical, and respectful toward our customers and our community.

Addressing bias and testing for safety and harm: It’s important to us to create a model that respects our customers and aligns with our company values. In addition to training on inclusive datasets, we continually test our model to mitigate against perpetuating harmful stereotypes. We use a range of techniques, including ongoing automated testing and human evaluation.

Regular updates and improvements: This is an ongoing process. We will regularly update Firefly to improve its performance and mitigate harmful bias in its output. We also provide feedback mechanisms for our users to report potentially biased outputs or provide suggestions into our testing and development processes. We are committed to working together with our customers to continue to make our model better.

-- Adobe

I have had discussions with a number of fellow educators about the ethical use of generative AI and Firefly in general. Here are some paraphrased takeaways to consider as we conclude this article series:

  •       “We must train the new generations in the respect and proper use of images or all kinds of creative work.”
  •       “I don't think AI can capture that sensitive world that we carry as human beings.”
  •       “As dire as some aspects of all of this are, I see opportunities.”
  •       “Thousands of working artists had their life's work unknowingly used to create these images.”
  •       “Professionals will be challenged, truly, by all of this, but somewhere in that process I believe we will find our space.”
  •       “AI data expropriations are a form of digital colonialism.”
  •       “For many students, the notion of developing genuine skill seems pointless now.”
  •       “Even for masters of the craft, it’s dispiriting to see someone type 10 words and get something akin to what took them 10 years.”

I’ve been using generative AI for a few years now and can appreciate and understand the concerns expressed above, but I also recognize that this technology is not going away. We must do what we can to address the ethical concerns raised here and use our awareness of these problematic issues to further guide the direction of these technologies as we rapidly advance forward. These are very challenging times.

 

Author Bio

Joseph Labrecque is a Teaching Assistant Professor and Instructor of Technology at the University of Colorado Boulder, an Adobe Education Leader, and a member of Adobe Partners by Design.

Joseph Labrecque is a creative developer, designer, and educator with nearly two decades of experience creating expressive web, desktop, and mobile solutions. He joined the University of Colorado Boulder College of Media, Communication, and Information as faculty with the Department of Advertising, Public Relations, and Media Design in Autumn 2019. His teaching focuses on creative software, digital workflows, user interaction, and design principles and concepts. Before joining the faculty at CU Boulder, he was associated with the University of Denver as adjunct faculty and as a senior interactive software engineer, user interface developer, and digital media designer.

Labrecque has authored a number of books and video course publications on design and development technologies, tools, and concepts through publishers including LinkedIn Learning (Lynda.com), Peachpit Press, and Adobe. He has spoken at large design and technology conferences such as Adobe MAX and for a variety of smaller creative communities. He is also the founder of Fractured Vision Media, LLC, a digital media production studio and distribution vehicle for a variety of creative works.

Joseph is an Adobe Education Leader and member of Adobe Partners by Design. He holds a bachelor’s degree in communication from Worcester State University and a master’s degree in digital media studies from the University of Denver.

Author of the book: Mastering Adobe Animate 2023