If you have scrolled through social media recently, chances are you have come across a flood of Ghibli-style images. The Ghibli trend gained momentum largely due to the rise of generative AI models such as ChatGPT, DALL-E, and Stable Diffusion, which allow anyone to create detailed works of art with a few keystrokes (pun intended).

Yet, beneath this creative explosion lies an intriguing question: who owns the intellectual property (IP) rights in these images? If an AI model generates a scene in Ghibli style, do the rights lie with the entity that owns the AI model, the individual who gave the input prompt, or Studio Ghibli itself?

At the heart of it all is a debate that is as philosophical as it is practical: can AI be an IP owner? And if not, who gets to claim the IP rights?

The Ghibli Craze: A Perfect Case Study

Studio Ghibli’s distinctive aesthetic, with its soft colour palettes and intricate details, has long served as an inspiration for artists across the globe. The widespread adoption of AI has amplified that inspiration further, enabling both beginners and professionals to generate Ghibli-style artwork in the blink of an eye.

This trend is more than just fan art; it is a cultural fusion of modern technology and nostalgia. However, it also highlights a crucial conflict between human creativity and machine-generated output. When an AI replicates Miyazaki’s style with striking fidelity, where does the line between homage and proprietorship begin to blur?

The answer lies with copyright law, which was designed to safeguard original human works of authorship. Because it predates artificial intelligence, copyright law anchors ownership in human creators. AI, by contrast, operates outside that framework, adding a layer of complexity that strains traditional ideas of authorship and ownership under copyright law.

Can AI Own Copyright?

The prevailing stance across most legal jurisdictions is clear: artificial intelligence cannot hold copyright.

In the United States, the Copyright Office has maintained this position with unambiguous clarity. A turning point came in 2022, when Stephen Thaler sought to register an image created by his “Creativity Machine” system, listing the AI itself as the author; the Copyright Office refused, holding that copyright protection is predicated on human authorship.

By contrast, the United Kingdom adopts a more nuanced approach under the Copyright, Designs and Patents Act 1988. Section 9(3) of the Act provides for copyright in computer-generated works, but it deems the author to be the person who undertook the “arrangements necessary” for the work’s creation; in practice, this is typically the individual who configures the AI or gives the input prompt. The AI itself, however, remains excluded from any claim to authorship. Thus, while a user who adjusts parameters and initiates the process may secure rights, the machine is treated as a mere instrument of production.

Copyright law exists not merely to safeguard material outputs but to encourage and reward human creativity. A computer program can generate a beautiful Ghibli-style landscape, but it does so without the imaginative struggle or narrative vision that are the hallmarks of human creativity. Rather, it is a sophisticated tool running code, processing information, and producing patterns based on its training data; more an advanced computational mechanism than a creator.

Who Owns AI-Generated Content?

If AI is not eligible to own IP, the next question is: who owns the IP rights in an AI-created work? The answer depends on several factors, making the matter anything but simple and requiring a close look at both legal doctrine and real-world practice.

A.    The Role of the Human User

The most immediate claimant is typically the human user. In many jurisdictions, including the United States, an individual who inputs a prompt into an AI tool and initiates the generation process may be able to claim copyright, but only to the extent of their own creative contribution. The U.S. Copyright Office has indicated that human involvement, such as selecting, editing, or enhancing the AI’s output, can render a work eligible for copyright protection. Thus, a user who modifies an AI-generated image, perhaps by adding brushstrokes or adjusting hues, strengthens their claim to ownership.

B.    The Influence of AI Developers

Complicating this landscape are the developers of AI platforms, whose terms of service add further layers of consideration. Companies like OpenAI, Stability AI, and Midjourney include contractual provisions that shape ownership rights. For instance, Midjourney’s terms grant users ownership of their generated outputs while reserving a non-exclusive license for the company to use those works for training, promotional activities, and the like. Consequently, a user may hold copyright over an AI-generated image, but the developer retains rights to exploit it within defined bounds.

Nonetheless, if the AI was trained on copyrighted material, such as Studio Ghibli’s films, does this reliance on protected works undermine the legality of the output? The issue remains unresolved, casting a shadow over the IP status of such content.

Who Owns the IP in AI-Generated Content—The User, the Platform, or No One?

At the heart of the AI copyright debate is a practical concern: if copyright law does not clearly assign ownership, who gets to exploit the commercial value of AI-generated content?

Most jurisdictions, including India, the U.S., and the EU, do not grant copyright to AI. However, how ownership is allocated between the user and the platform that built the AI varies significantly, and often comes down to contractual terms rather than copyright statutes.

For example, OpenAI’s current terms of use explicitly assign ownership of AI-generated output to the user, provided the terms are followed. GitHub Copilot, on the other hand, leaves ownership of suggestions with the developer using the tool but strongly advises users to filter suggestions to avoid IP infringement. This tension between express contractual terms and broader legal uncertainty is the space where most IP risk currently lives.

In India, by contrast, where the law does recognize “computer-generated works,” copyright can be held by the person who “causes the work to be created.” That could be the person typing the prompt, the employer of the developer, or the developer themselves, depending on the facts and how the courts interpret them. The law is open to interpretation, and there is still no binding precedent. In one instance, the Indian Copyright Office even withdrew a registration granted to an artwork co-authored by an AI system (“RAGHAV”) after realizing that the AI lacked legal personhood.

Globally, the absence of a uniform standard means that users must rely on licensing agreements, not copyright law, to secure ownership or usage rights over AI-generated outputs. Businesses using generative AI, especially for client-facing or commercial projects, must review platform terms thoroughly, negotiate explicit rights wherever possible, and consider seeking representations or indemnities from the AI service provider around data sources and output originality.

What Happens Next? Legal Trends and Ongoing Litigation

If there is one thing lawyers and artists can agree on, it is that the law has not caught up with the AI wave, which has already resulted in multiple lawsuits, including:

  • Andersen v. Stability AI et al. – In this case, three artists brought a claim against Stability AI for using their artworks as training data without permission, arguing that the outputs are unauthorized derivative works.

  • Getty Images v. Stability AI – In this case, Getty Images alleges that Stability AI unlawfully used its copyrighted, watermarked images to train its model, violating both copyright and trademark law.

  • Doe v. GitHub, Inc. – This lawsuit challenges how GitHub Copilot, built on OpenAI’s Codex, generates code that can mirror open-source repositories without attribution.

The common issue in all these cases is whether training an AI model on copyrighted content without consent constitutes infringement, and whether the outputs themselves are derivative works. U.S. courts will likely hinge their analysis on the fair use doctrine, which weighs the purpose and character of the use (including its commercial nature), the nature of the copyrighted work, the amount copied, and the effect on the market for the original. Past judgments such as Authors Guild v. Google, Inc. and Field v. Google have supported transformative uses.

In contrast, Australia, the EU, and India offer little comfort. Australia’s High Court in IceTV v. Nine Network reaffirmed that copyright protects only human-created works, and in Acohs Pty Ltd v. Ucorp Pty Ltd the court found that computer-generated code lacked human authorship and was therefore ineligible for protection. The EU, while exploring new AI legislation, still requires a “personal intellectual creation” for copyright to apply. India has no tested precedent on AI authorship, and most guidance so far has come from the Copyright Office walking back mistaken registrations.

Conclusion

The explosion of Ghibli-style AI art represents more than a passing trend; it signifies a deeper cultural change in how we understand creativity, authorship, and ownership in the digital world.

While generative AI technologies enable anyone to produce aesthetically beautiful content, they also create legal ambiguities that conventional copyright law was never meant to address. With most jurisdictions excluding AI as an author, and ownership frequently depending on contractual agreements instead of statutory certainty, creators and businesses have to walk a tightrope. Until courts or legislatures give clearer guidance, the best course of action is robust contracts, careful use policies, and active risk management. The future of IP in an AI world will be decided not only by algorithms, but by how we define, document, and defend the rights to their outputs. In the meantime, the question is not merely what AI is capable of making, but what the law permits us to claim.