AI for Designers: Navigate Legal & Creative Frontiers with Brett Lambe
In 2024, Design Insider is dedicating itself to exploring Artificial Intelligence (AI). Our AI-focused content leads the conversation, spanning an array of initiatives from our highly acclaimed AI Forum to in-depth editorials and interviews. As part of this ambitious campaign, we are delighted to present an exclusive interview with Brett Lambe, a Senior Associate at Ashfords, a national law firm renowned for its straightforward and proactive approach.
Brett, with over a decade of experience, specialises in the Technology sector, offering expert advice on commercial and IP matters across various industries including Technology, Creative Industries, Healthcare, and Retail. Known for his ability to blend deep legal expertise with a practical understanding of commercial objectives, Brett has been recognised as a Recommended Lawyer in the areas of Commercial, Technology (TMT) and IP by The Legal 500, and lauded as a Rising Star in IP.
Join us as Brett shares valuable insights on the implications of AI integration within business practices, shedding light on both the opportunities and challenges it presents. His comprehensive background ensures a profound perspective on navigating the complex interplay between technology and law, making this interview a must-read for professionals and enthusiasts alike. Stay tuned as we delve into the intricacies of AI with one of the leading legal minds in the sector.
Brett Lambe, Senior Associate at Ashfords
Could you start by introducing yourself, your role, and the focus of your work at your company?
Brett Lambe: I am a Senior Associate at Ashfords in the Commercial and Technology team. As an experienced lawyer, I advise a diverse range of businesses on various aspects of commercial contracts and technology issues, with a particular focus on intellectual property (IP).
My recent experience has extensively related to the use of artificial intelligence (AI) by businesses, and this area of work has exploded over the past 18 months, becoming an increasingly significant concern for businesses in every sector. I support clients whether they are customers or suppliers — so those purchasing AI systems to integrate into their existing operations, as well as developers or suppliers of AI technologies.
Additionally, part of the reason we’re speaking today is my collaboration with colleagues across different teams at Ashfords, a full-service firm. This includes working on podcasts and articles with our construction team, who frequently advise architectural practices and designers—professions that increasingly utilise AI in their daily operations.
Can you describe how AI is currently reshaping the fields of building design and architecture, and how these technologies are integrated into practical applications within the industry?
BL: AI is transforming the design and architecture sectors in a multitude of ways. Traditionally, early forms of AI were utilised for basic tasks and fairly simple automation processes, but its application has evolved into more complex roles within the industry itself. Alongside AI, another technology, Virtual Reality (VR), has been extensively used by architects and designers for a number of years now. VR allows professionals to visualise, design and explore creative 3D environments, providing a clearer and more tangible representation of how designs will appear in practice, as opposed to traditional 2D renders on paper or screens.
For many of our clients in the design space, VR is more commonly used than AI. However, we are observing a significant increase in design businesses adopting AI tools. These range widely in their application, including well-known generative AI (or GenAI) tools like ChatGPT. While VR aids the design process by enhancing visualisation, large language models like ChatGPT can streamline more routine tasks. For instance, they help in drafting initial project descriptions, and in non-design tasks such as writing blog articles and press releases.
Another area where AI is gaining traction is text-to-image generation. GenAI tools like DALL-E, MidJourney, and Stable Diffusion are becoming increasingly influential in the industry. These AI applications allow designers to input text prompts to generate images, which can serve as a digital mood board, kick-starting the creative process without the need to start from scratch.
The critical aspect from a legal perspective is ensuring that the use of these AI tools is lawful. As we integrate these technologies, we must also consider the long-term implications and potential risks, anticipating issues that may not be immediately apparent but could arise in the future. As a lawyer, my role involves advising on how to use these AI outputs responsibly and legally, setting up processes to mitigate potential risks down the line.
Considering the rise of text-to-image generating programs, could you delineate the key challenges and opportunities these technologies present, especially from a legal perspective?
BL: From a legal viewpoint, the challenges mainly revolve around intellectual property (IP) creation and protection. Firstly, there’s the creation of new IP. We need to ensure that the content generated by these AI tools is created lawfully, helping to protect users from potential IP infringement claims. Such claims might arise if a third party alleges that their design was copied, even if it happened inadvertently through the AI’s internal processes.
With AI tools, inadvertent infringement is a significant risk. Of course, this type of infringement can occur naturally, as the human brain might independently create something similar to existing designs, involving a degree of “subconscious” copying. However, now that AI plays a larger role in the design process, this risk grows, simply because of the way these AI models are trained (by scanning huge quantities of digital data scraped from online image libraries, much of which is likely to be copyright material).
Secondly, from an IP protection standpoint, we need to ensure that original designs or copyright works are not being used without permission to train these AI models. GenAI tools learn by analysing massive data sets of images, but the specific details of their learning algorithms are often closely guarded by their developers. It is therefore incredibly challenging to verify whether your own designs have been used to train an AI without your consent.
A current legal battle exemplifies these risks. Getty Images has initiated a lawsuit against the developers of Stable Diffusion in both the UK and the US, claiming massive infringement of their image library. Some of the AI-generated images from Stable Diffusion purportedly bear the Getty Images watermark, which, if proven, could significantly bolster their case, as it would demonstrate a strong likelihood that the Getty Images library was used to train the AI model without Getty Images’ consent and without payment of appropriate licensing fees. Many lawyers in the AI space await the outcome of this case with interest.
As for the opportunities, they are potentially transformative, across a range of industries. GenAI allows for the rapid creation of design elements, significantly speeding up the design process. What used to take days or weeks can now be done in minutes, providing a wealth of inspiration and potential design paths at the click of a button. This capability not only accelerates productivity but also enhances creative possibilities! Of course, the real issue relates to quality of output. There are plenty of examples of AI producing poor output through hallucinations (sometimes with unintentionally hilarious results). So I don’t think we will see humans removed from the creative process. Rather, these tools can enhance the output and increase creativity, while hopefully improving efficiency and reducing the repetitive or time-consuming tasks which have previously been a burden for professionals.
Could you elaborate on how AI technology has streamlined the design process, transforming tasks that used to take considerable time into much quicker endeavours?
BL: Tasks that previously took days, weeks, or even months can now be accomplished in just a few clicks by inputting suitable prompts into AI generators. The real advantage here is the efficient use of time and creative energy. AI tools can handle time-consuming, mundane tasks swiftly, which frees up more time for traditional creative processes where human input is crucial. This not only makes creativity faster but also allows for deeper exploration and development of creative ideas.
By reducing the time required for initial creative processes, professionals can focus on refining their work and delivering on their brief. This efficiency is a game-changer, particularly in industries where billing often correlates directly with time spent on tasks.
Moreover, the efficacy of these AI tools depends significantly on the input provided. A generic prompt might yield broad, unfocused results, but as the inputs become more specific, the outputs become more aligned with the intended design goals. This ability to refine prompts and guide the AI precisely enhances the utility of these tools, turning them into powerful assistants in the creative process. We can already foresee demand for skilled “prompt engineers” who can craft effective inputs for AI tools.
At our recent seminar, our keynote speaker Kwame Nyanning emphasised the principle of “crap in, crap out”, suggesting that the quality of input directly affects the quality of output. Can you discuss how AI sources its learning material and the legalities involved, especially regarding permission for use of online content?
BL: Under English law (and likely in other jurisdictions, though the detail may differ), the general principle is that if a designer, or whoever owns the IP in the work, uploads that work to the internet, including on their own or their employer’s website, any third party wishing to use the image should do so only with the copyright owner’s permission. This applies equally to how developers of AI tools source the materials used to “train” their AI: no AI provider may lawfully scrape images from websites for training purposes without express permission. This matters for designers because ownership of the IP will depend on the contract with the customer who engaged you to provide the design. Usually (though not always) the designer retains ownership of the IP and licenses it to the customer. Where the designer is employed by a design agency, IP created during the course of employment belongs to the employer, and any permissions regarding use of such images should be clearly defined in the terms on the website where the images appear.
For instance, websites usually state that all copyright images are owned by the site owner and cannot be copied or used beyond viewing the website. This is designed to prevent unauthorised use of designs, and this would now include feeding or training AI systems. However, in practice, it’s challenging to enforce this right. Given the opaque “black box” nature of many AI models, it’s almost impossible to prove that an image was used to train an AI unless the AI inadvertently exposes its sources, as in the case of Getty Images suing the developers of Stable Diffusion over the alleged use of watermarked images.
This legal battle will be interesting as it unfolds and will likely set precedents for how AI tools source their training data. The reality is, once your design is online, it’s vulnerable. Tech companies invest heavily in protecting their algorithms, making it extremely difficult to ascertain whether your work has been used without your consent. This area of law is still developing, and cases like the Getty Images lawsuit are crucial for defining future norms and practices around AI and IP rights.
It is possible that there will be a future market for owner-approved licensing of training data. But it will take time, money and resources to collate the data for this. At the moment, it appears that there is a “Wild West” feel to the approaches of some providers!
It’s clear that watermarking images isn’t a foolproof solution given the capabilities of AI. Could you discuss how architects might use their own designs to train AI systems, and what legal challenges they might face in doing so?
BL: For architects and smaller businesses, creating their own private GenAI models might seem out of reach due to the high computational and data requirements, especially when dealing with images or videos compared to text. However, training a proprietary AI with your own data ensures you can closely monitor what goes into the training process and maintain control over your intellectual property. You know what has gone in, and can therefore be more confident about the quality and legal status of what comes out.
There are, of course, significant advantages to using large-scale AI models that draw on vast datasets; in particular, the sheer volume of training material means a seemingly endless and diverse range of possible outputs. But this approach dilutes the control one has over the quality and ownership of the output. For example, as alluded to above, using stock images within the bounds of their licences to train AI could be a potential future route. Licensing models might evolve to specifically accommodate AI training, with stock image providers charging for AI-ready datasets carrying clear permissions set out by the rights holders.
However, the primary risk in utilising broader AI models lies in inadvertent IP infringement. You may inadvertently use data that hasn’t been properly licensed or infringe on the copyrights of others without clear visibility into the AI’s training data sources, leaving yourself open to unwanted legal action.
Legal frameworks are still catching up with these advancing technologies. The EU has recently adopted the AI Act, which could have an impact on the industry similar to the way the GDPR set a benchmark for data protection. The Act might set a high standard for AI compliance internationally (for example, AI providers will have to comply with the AI Act if they wish to do business in the EU, which is a huge market). But as of now, the UK has not aligned with this regulation. This divergence might lead to a fragmented regulatory landscape, making it challenging for technology providers to comply with varying international standards.
Businesses must navigate a complex interplay of innovation and regulation. The legal landscape around AI is still in its infancy, and the coming years will likely see significant developments in how AI-related activities are governed both in the courts and through legislation. This period represents both a ‘Wild West’ of opportunities and significant risks as the industry seeks to define ethical and legal standards for AI use.
As AI continues to grow rapidly in the commercial interior design and architectural sectors, different practices are at various stages of incorporating this technology. What key actions should architectural practices take now concerning AI, especially regarding their legal positioning?
BL: First and foremost, it’s essential to select the right AI tools that align with the specific needs of the practice. The market for AI tools in architecture is not as saturated as in other tech sectors, which simplifies the decision-making process but also limits the options. Practices should thoroughly research available AI technologies to ensure they choose solutions that can deliver efficiencies and enhance service delivery to clients.
Secondly, it is critical for practices to update their standard terms of business to clearly communicate their use of AI. Transparency with clients about the use of generative AI tools in the design process is vital. This not only helps in building trust but also ensures clients are fully aware of the methods being employed to create their projects. Practices should explicitly state how AI tools are used and clarify that while AI can enhance the design process, the final outputs may not always be entirely free from potential IP infringements.
Given the varied comfort levels with AI across clients, architectural firms should also prepare to tailor their AI usage based on client preferences. Some clients may prefer not to use AI tools due to the perceived risks of IP infringement, while others might be more open to leveraging advanced AI capabilities. By having flexible policies in place, practices can accommodate different client needs while still pushing forward with technological advancements.
Lastly, it’s imperative to have robust legal protections integrated into client contracts. This includes clauses that address the use of AI and its potential risks. Firms should ensure they have legal safeguards that protect both the practice and its clients should any issues with AI-generated content arise. Setting up these protections well in advance can mitigate risks and provide a clearer framework for resolving any disputes or challenges that may occur as a result of using AI in architectural design.
Businesses should also consider their existing insurance policies, and speak to their broker to ensure that any use of AI does not breach the terms of their professional indemnity policies.
By addressing these areas proactively, architectural practices can harness the benefits of AI while mitigating legal risks and maintaining strong client relationships.
Considering the pervasive integration of AI in tools like Adobe Photoshop, how feasible is it for design practices to comply with clients who request no AI usage in their projects?
BL: That’s a very pertinent question. AI components are now embedded in many of the software tools we use daily, often without explicit awareness from the users. For instance, features in Photoshop that enhance functionality or automate tasks are forms of AI. This makes it challenging for a design practice to fully comply if a client requests that no AI tools be used whatsoever on a project.
The key is to be specific about the types of AI being referred to. There’s a distinction between generative AI, which might create entirely new concepts from scratch, and more subtle AI implementations that simply enhance or tweak existing designs. It’s important for design practices to clarify these distinctions with their clients.
Having transparent conversations about what AI tools are being used and how they are applied in projects can help manage client expectations and consent. For global brands with stringent internal policies on AI, these discussions are crucial. Such organisations often have very specific requirements that may include restricting certain types of AI or needing explicit approval for any AI use.
In practice, it’s about establishing a baseline understanding of what AI tools are acceptable and then adjusting as needed based on client feedback or contractual stipulations. If a client objects to specific AI uses, those concerns need to be clearly addressed and documented to ensure compliance and maintain trust.
Ultimately, while it may be complex, fostering open communication about AI usage allows design practices to navigate client preferences effectively and ensure that both parties are aligned on how technology is being used in creative processes. This approach not only helps in managing legal risks but also in building and maintaining strong client relationships in an AI-driven landscape.
With new legislation expected to impact the use of AI in architecture, what should practices be particularly vigilant about in the near future?
BL: It’s essential to stay informed about the outcomes of key court cases, like the ongoing Getty Images lawsuit, which could set significant precedents affecting how AI is used in the industry. Regularly reviewing updates from the tech and industry press can provide valuable insights into how these legal battles are shaping the regulatory framework for AI globally.
Given the international nature of many AI technology providers, it’s crucial for practices to develop a global perspective on these developments.
At Ashfords, we constantly publish content and updates on AI, analysing new policies and court decisions to provide actionable advice to our clients. This ongoing analysis is vital as it helps businesses navigate the uncertainties of AI regulation effectively. I am always open to conversations about how to tackle issues in AI, so anyone who is concerned about the issues discussed in this interview can contact me to discuss in more detail.
The nature of technology, and AI in particular, is to evolve rapidly, often outpacing the legal frameworks in place. Current UK laws, such as the Copyright, Designs and Patents Act 1988, were not designed to address the complexities introduced by modern AI technologies. This discrepancy highlights the need for practices to be proactive in adapting to legal changes as they occur.
Ultimately, architectural practices should prepare to coexist with AI by understanding how to leverage this technology responsibly and innovatively. This means not only adapting to new legal requirements as they arise but also embracing the potential of AI to drive growth and innovation within the field!