1. Introduction
For the CEO of a technology company or the owner of a brand protecting intellectual property in the Ukrainian market, 2026 has been a turning point. A year ago, artificial intelligence in commercial products was discussed as a prospect; today it is a reality that requires contractual, employment and licensing arrangements before a product reaches the market. Daily engineering decisions to use Copilot, Midjourney or fine-tuned models are producing commercially significant assets whose legal regime remains undefined in many companies.
The earlier paradox of the Ukrainian IT industry was: "I paid for the development, so the code is mine." That logic has now been defeated in litigation and addressed clearly in the Law of Ukraine "On Copyright and Related Rights" No. 2811-IX of 1 December 2022 (the "Copyright Law"). The paradox of the AI era is framed even more aggressively: "I wrote the prompt, so the output is mine." The logic is the same, and so is the outcome: natural from a business standpoint, but entirely wrong as a matter of legal title.
In 2022, Ukraine became one of the first jurisdictions worldwide to introduce a dedicated legal regime for objects created without the direct involvement of a natural person. Article 33 of the Copyright Law established a sui generis regime — distinct both from traditional copyright and from the public domain. That Article is considerably less familiar to the business community than Article 20, which protects conventional source code, yet it is Article 33 that will sit at the centre of future disputes over AI-generated products.
For the CEO of a large technology company who views Ukraine as a development hub, as a licensing counterparty or as a market in which its trademark and copyright portfolio needs protection, three practical questions arise — and the standard IP clauses used in agreements before 2023 do not answer any of them: (a) whether an AI-generated product is protected at all; (b) who precisely holds the rights — among the model developer, the licensee, the person who initiated generation, and the employer; and (c) what contractual mechanisms ensure that rights transfer cleanly into the company's portfolio. This article addresses these questions from the perspective of Ukrainian law, institutional practice in 2024–2026, and the comparative regimes of the United Kingdom, the United States and the European Union. It builds on the analysis of human-authored code I set out earlier this year (Disputes Over IT Products and Software Code: Who Owns Them?, Lexology, April 2026) and extends that framework into the domain of AI-generated outputs.
2. Three Categories, Not Two
A common simplification treats the question as binary: protected by copyright, or not protected. The Ukrainian framework is more sophisticated and distinguishes three categories of objects.
Category one — traditional copyright. This applies where a natural person makes a substantial creative contribution to the object: design, structure, selection, arrangement, prompting or editing of the output. In such cases Article 20 of the Copyright Law applies, the code or content is protected as a literary or artistic work, the term of protection is the life of the author plus 70 years, and the rightholder obtains both economic rights and moral rights.
Category two — the sui generis regime under Article 33 of the Copyright Law. This applies to a "non-original object generated by a computer program", defined as "an object that differs from existing similar objects and is generated through the operation of a computer program without direct involvement of a natural person". Works created by natural persons using computer technologies are expressly excluded from this definition. The term of protection is 25 years from 1 January of the year following creation. Critically, the sui generis regime grants economic rights only (reproduction, distribution, translation, adaptation); it does not confer moral rights, because there is no human author whose personality could be the subject of such rights.
Category three — no protection. An object that meets neither the creative-contribution standard for traditional copyright nor the three criteria of Article 33 falls into the public domain. In that case, protection is only available through trade-secret regimes or contractual restrictions.
Correct classification is not an academic question. It determines the term of protection (70 years post mortem auctoris versus 25 years versus none), the bundle of rights (economic plus moral versus economic only versus none), the range of potential rightholders, and the evidentiary framework in litigation. In 2024 the Ukrainian National Office of Intellectual Property and Innovations (UANIPIO) registered the first objects containing AI-generated elements — illustrations for a children's book, a poetry collection and a series of postcards. Registration practice has begun to form, and the first precedents for allocation among the three categories are being created.
3. Three Criteria for Sui Generis Protection
Novelty. The object must differ from existing similar objects. This criterion is less stringent than the "originality" threshold in traditional copyright, but it is not a zero threshold either. In practical terms, two consecutive runs of the same model with the same parameters do not create two protectable objects — protection attaches only to the first; the second is a copy of it.
Automated generation. The object must result from the technical functioning of software. The key distinction is between generation and use of a tool. A designer who uses Adobe Photoshop to draw an illustration is not running an automated generation — the software executes their creative commands. A designer who sends a prompt to Midjourney and receives an illustration without further editing is running an automated generation. The line is drawn where the user's creative contribution to the final output disappears.
Absence of human creativity. Human involvement is limited to activating the software. This is the most difficult criterion in practice. Prompt engineering in its current form may involve dozens of iterations, fine-tuning of parameters, negative prompts and controlled seeds — all of which look like creative work from the user's perspective. Article 33, however, approaches the question from a different angle: it assesses not the user's effort but whether the final output is a product of that user's creative will, or a product of the probabilistic operation of the model. If the same prompt from the same user on the same model produces different results owing to stochasticity, that is an indicator of the absence of human creativity within the meaning of Article 33.
The logic of the sui generis regime is related to the logic of the protection of source code itself under Article 20 of the Copyright Law, which protects the form of expression rather than the underlying idea or algorithm — as confirmed by the Supreme Court of Ukraine in its ruling of 22 May 2023 in Case No. 760/16961/19. Article 33 protects the output of automated generation as a material object, not the creative will of the user.
4. Rightholder: Who Receives Rights When a Human Is Not the Author
The least transparent question in the current regulation is who precisely holds the rights to a sui generis object. Article 33 provides that rights "may belong" to the person who initiated the creation of the object, the developer of the computer program, a licensee of the program, a successor in title, or the person holding the economic rights to the program.
The phrase "may belong" does not resolve the collision that arises where several of those persons claim rights simultaneously. Consider a typical M&A scenario: company A develops a foundation model; company B obtains a commercial licence for that model; an employee of company C (the employer, which works with the model under company B's licence) writes a prompt that generates a commercially valuable output — an element of the product subject to valuation by the acquirer. Who is the rightholder: company A (the developer), company B (the licensee), the employee (the initiator), or company C (the employer)? The Law offers no direct answer.
Academic commentary proposes that priority should be given to the person who lawfully initiated the creation, unless otherwise agreed by contract. Scholarly writing has also discussed the creation of a state register of AI-generated objects with metadata recording the contribution of each participant, the generation architecture and the investment made in the model. Whether Ukraine will move toward such a register is an open question; the practical lesson for businesses is, however, already apparent.
A comparative perspective reveals three distinct national approaches. In the United Kingdom, section 9(3) of the Copyright, Designs and Patents Act 1988 expressly provides that the author of a computer-generated work is the person who made the arrangements necessary for its creation — that is, the United Kingdom extended the concept of authorship. In the United States, the Copyright Office in 2023, in its decision concerning the Zarya of the Dawn comic book, refused protection to images generated by Midjourney on the basis that the human contribution to the selection of prompts was insufficient for authorship — that is, the United States took the route of denying protection. The European Union at the level of the DSM Directive 2019/790 and the AI Act 2024 has not directly addressed this question, leaving room for national solutions.
The Ukrainian approach differs from all three: Ukraine has created a separate legal regime rather than extending authorship (like the UK) or refusing protection (like the US). In this context the practical lesson for a CEO is clear: reliance on the statutory default is unsafe. Contractual allocation of rightholder status in sui generis objects, in every relationship in which AI tools are used, is not a formality but a necessary precondition for the commercial exploitation of the output.
5. Work Made for Hire 2.0: When an Employee Uses AI
Article 14 of the Copyright Law provides that the economic rights in a work made for hire vest in the employer from the moment of creation unless the contract provides otherwise. For the international reader it is useful to note that the Ukrainian concept is broader in scope than its US common-law counterpart under 17 U.S.C. §101: Article 14 applies generally to works created by an employee in the course of employment duties, without the enumerated-categories limitation that applies to US commissioned works. This rule resolved a long-standing conflict between Article 429 of the Civil Code (joint rights between employee and employer) and the previous version of the Copyright Law, and has significantly strengthened the position of employers in traditional IT disputes — see, for example, the ruling of the Kyiv Court of Appeal of 6 February 2024 in Case No. 760/18303/14-ts, where the court recognised a program as a work made for hire notwithstanding the payment of authors' remuneration to the employees.
AI tools disrupt this construction. A work made for hire under Article 14 presupposes the existence of an author-employee who created the work in connection with performing their employment duties. If a developer uses Copilot and 40% of the code is in substance generated by the model without the developer's creative contribution to specific lines — then for that 40% there is no "work" in the traditional sense but a sui generis object under Article 33. And Article 14 does not extend to sui generis objects automatically.
This creates a gap in the employer's legal title. Rights to the "human" portion of the code transfer to the employer through the work-made-for-hire mechanism. Rights to the AI-generated portion transfer through the mechanism of Article 33, but require independent justification: whether the employer was "the person who initiated the creation" (if the employee formulated the prompt, probably not), or "a licensee of the program" (arguably yes, if the company holds the Copilot licence; no, if the employee used their own OpenAI account).
The result is that a typical developer using AI tools creates a hybrid product, the legal status of half of which is unclear. For a CEO conducting due diligence or planning product licensing into Ukraine, this means that standard IP verification is no longer sufficient. The only way to close the gap is to provide expressly in the job description and the employment contract that the employee's duties include the use of AI tools licensed by the employer, and that all outputs of such use — including sui generis objects under Article 33 — belong to the employer.
A related risk arises in the event of an open dispute with an employee. If, on separation, a developer claims that certain modules of the product are not their own code but rather a sui generis output generated through their personal ChatGPT Plus account on a home computer, the employer must disprove that claim. The lesson from previous practice — see Case No. 756/960/15-ts, in which the Kyiv Court of Appeal refused to recognise a contractor's authorship in the absence of a copyright registration certificate — is that the burden of proving authorship and legal title rests on the party claiming economic rights. In the AI-generation context, that burden is materially heavier.
6. Client and Contractor: The Dual IP Clause
Article 15 of the Copyright Law provides that economic rights in a work created on commission transfer to the client from the moment of creation in full, unless the commission agreement provides otherwise. This is one of the most client-favourable rules in the new Law, and it forms the evidentiary basis in disputes with outsourcing contractors — see the ruling of the Supreme Court of Ukraine in Case No. 910/2683/19 concerning the mixed nature of software development agreements.
Article 15 in its current form, however, refers to "works", not to non-original objects under Article 33. That is where the trap lies: a standard IP clause in a software development agreement, which works well for human-authored code, does not legally cover the layer of outputs generated by AI tools during development.
The recommended practice is a dual IP clause. The agreement should separately provide for:
— the transfer of all economic rights in objects protected by copyright under Article 20 of the Copyright Law (traditional code and documentation);
— the transfer of all economic rights in non-original objects under Article 33 of the Copyright Law (AI-generated elements of the product), together with the contractor's confirmation that it acts as the person who lawfully initiated the creation of such objects; and
— the contractor's warranty that the AI tools used in development are properly licensed (corporate Copilot Business licences rather than personal accounts) and that the training data of the models used is free of third-party infringements.
The last point is connected to the next critical aspect — the legal status of training data.
7. Training Data: The Hidden Trap That Can Destroy Legal Title
Commercial AI models are trained on vast volumes of data that include copyright-protected works. The current Ukrainian Copyright Law — unlike Directive (EU) 2019/790 on copyright and related rights in the Digital Single Market (the "DSM Directive") — does not contain a specific exception for text-and-data mining ("TDM"). Article 3 of the DSM Directive provides a TDM exception for scientific research; Article 4 provides a general TDM exception with an opt-out mechanism for rightholders. No analogous provisions exist in the Ukrainian Copyright Law.
The practical consequence is that the use of works protected by Ukrainian copyright to train generative AI models requires the rightholder's authorisation. The academic debate as to whether such uses can be brought within existing fair-use-style exceptions continues, but the position is not settled, and the risk of infringement claims exists where training datasets include Ukrainian protected content without appropriate licences.
At the level of CEO decision-making, this translates into three successive risk zones:
The first zone — the model developer. A company developing a Ukrainian foundation model or a fine-tuned model for internal use should audit its training data and secure appropriate licences, or clearly establish that the data belongs to the public domain or to the company's own resources.
The second zone — deployment. A company that uses a foreign model (OpenAI, Anthropic, Google) in commercial products sold into the Ukrainian market is exposed to the risk of indirect infringement if the model's training data included unlawfully used Ukrainian works. That risk is currently theoretical — there is no case law — but it is not nil.
The third zone — the client's legal title. If your model was trained on third-party content, then in a future dispute over your sui generis rights the defendant may argue that your product is built on unlicensed data and that, by analogy with the unclean hands doctrine, you should be denied the court's protection in the same subject matter.
Important caveat on the unclean hands doctrine. Unclean hands is a general principle of Anglo-American equity that denies relief to a party that has itself committed wrongdoing in the same subject matter as the relief it seeks. Its classic application is in Lasercomb America, Inc. v. Reynolds (4th Cir. 1990), where the court denied copyright protection to a plaintiff who had included anti-competitive terms in its licensing agreement. In its common-law form the unclean hands doctrine is not part of Ukrainian law and is not applied by Ukrainian courts as an independent legal basis. Its invocation in this section is by analogy only, as a comparative reference point. Ukrainian law operates through its own, conceptually related but autonomous, instruments: Article 13(6) of the Civil Code of Ukraine, which prohibits the exercise of rights in a manner constituting an abuse of rights, and Article 16(3) of the Civil Code, which permits a court to deny protection to a person who abuses civil rights. The standards of proof under these provisions (the need to demonstrate intent or abuse of rights under Ukrainian procedural rules) differ from the Anglo-American unclean hands test, which rests on the equitable discretion of the court. As of 2026 there is no clear precedent applying Articles 13 and 16 of the Civil Code to situations involving unlicensed training data for AI models; the argument by analogy may form part of a defensive position, but its success will depend on the quality of the factual record and on proving abuse under Ukrainian standards.
As of 2026, Draft Law No. 8153 on personal data protection has been adopted in the first reading; if finally enacted, it would align Ukrainian law with the GDPR and with the Council of Europe's Modernised Convention 108+ and would establish an independent supervisory authority. Its enactment will not resolve the TDM question directly but will materially strengthen regulatory oversight of every operation involving personal data within AI systems, which will indirectly reach training data.
8. How to Prove Rights in an AI-Generated Object
The first UANIPIO registrations of objects containing AI-generated elements in 2024, noted in Section 2, confirmed that the mechanism of Article 33 of the Copyright Law is functional, not declarative. The institutional infrastructure for IP disputes is active: in 2024, local courts of Ukraine handled 303 commercial IP cases, 194 civil IP cases and 101 criminal IP cases, with trademarks and copyright together accounting for approximately 80% of the commercial caseload.
No separate judicial statistics for the sui generis regime under Article 33 are yet available as of 2026. This means that the first disputes over AI-generated content will proceed without ready-made precedents, and therefore with unusually high demands on the evidentiary record.
A further systemic factor is that the High Court on Intellectual Property of Ukraine, established in law in 2017 as a specialised forum for IP disputes, had still not commenced the administration of justice as of mid-October 2025. The first disputes under Article 33 will therefore be heard by courts of general jurisdiction, whose caseload rose materially in 2025: civil courts received 771,749 cases (a 39% increase over 2024) and commercial courts 67,098 cases (+10%). Time-to-judgment in an initial sui generis dispute will correspondingly be significant, which is an additional incentive to resolve these questions contractually before any conflict arises.
The evidentiary record for a sui generis object differs from that of traditional copyright. In Case No. 760/16961/19, a fragment of source code appended to a registration certificate proved insufficient to identify the program, and the court concluded that the fact of derivation could not be established. The risk is analogous for sui generis objects, but the set of documents is different:
— Prompts and generation parameters. The full text of the query, negative prompts, random seed, model temperature, model version, timestamp of generation. Without this data the generation process cannot be reproduced, and the court cannot satisfy itself that a given object is the result of a given generation.
— The model and its licensing status. The version of the model as at the moment of generation (OpenAI's GPT-4 Turbo of January 2024 and of June 2024 are different models); the licence agreement (a corporate Copilot Business licence differs in legal consequence from a personal Copilot Individual licence).
— Initiation architecture. Documents that establish who precisely acted as "the person who lawfully initiated the creation" within the meaning of Article 33: the employment contract, the written assignment, and the handover certificate. In a corporate context it is critical that the initiator acted as a representative of the legal entity, not in their personal capacity.
— Generation logs. Centralised logs of AI-tool usage that preserve the full history of employee queries (corporate wrappers around the OpenAI API with logging, rather than personal ChatGPT Plus accounts).
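The "corporate wrapper with logging" mentioned above can be illustrated with a minimal sketch. Everything here is hypothetical — the `log_generation` function, the log location and the injected `generate` callable standing in for the actual model call are illustrative names, not any vendor's API — the point is only which fields an evidentiary record under Article 33 should capture.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("generation_log.jsonl")  # hypothetical central evidentiary log


def log_generation(employee_id, assignment_id, model, params, prompt, generate):
    """Run a generation through the injected `generate` callable and append
    an evidentiary record: who initiated the generation, under which written
    assignment, with what prompt and parameters, on which model version,
    when, and a hash tying the record to the exact output."""
    output = generate(prompt, **params)  # the real model call would go here
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "employee_id": employee_id,      # ties initiation to the legal entity
        "assignment_id": assignment_id,  # ties output to a written assignment
        "model": model,                  # exact model version at generation time
        "params": params,                # temperature, seed, negative prompts, etc.
        "prompt": prompt,
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return output, record
```

The design choice that matters legally is the injected call and the append-only JSON-lines file: every query passes through one corporate chokepoint, so the history of prompts exists independently of any employee's personal account.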
Academic commentary has discussed the introduction of a register of AI-generated objects with mandatory inclusion of metadata on the generation process. No such register currently exists, and the task of building an evidentiary record falls entirely on the business. The lesson from the "classic" software code cases — preserve everything — remains valid in the AI era, with a broader scope of what must be preserved.
The question of proof of copying also persists. The "Abstraction-Filtration-Comparison Test" formulated in Computer Associates International, Inc. v. Altai, Inc. (2d Cir. 1992) remains a relevant tool in the AI-generated code context. However, the filtration step becomes more complex: the output must be cleared not only of ideas, scènes à faire and standard patterns but also of elements that were the typical output of a model trained on public data — that is, effectively a "statistical average" of domain knowledge rather than protectable expression.
9. Recommendations for Business
In-house development. The job description of a developer or designer should expressly cover the use of AI tools in the course of employment duties. An internal AI-tool use policy should be adopted, defining permitted models, the obligation to use corporate licences, the prohibition on using personal accounts for official tasks, and requirements for the retention of logs and prompts. Assignments should be documented in writing, including a description of the components that will be created with the assistance of AI tools. Handover certificates should separately reference both copyright-protected works and sui generis objects under Article 33.
Outsourcing and independent contractors. Agreements should include the dual IP clause described in Section 6. They should additionally provide for: an obligation on the contractor to declare the AI tools used and their licensing status; a prohibition on the use of training-data pipelines whose legal status is not established; warranties as to the absence of third-party infringement in the training data of the models used; and an obligation to deliver all generation artefacts (prompts, models, seeds, logs) together with source code.
M&A due diligence. Verification of the target's IT assets should include an AI IP audit: an inventory of all AI tools used in creating the product; verification of corporate rather than personal licences; verification of the adequacy of contracts with employees and contractors regarding sui generis allocation; and a risk assessment for training data. The absence of such an audit is already, in 2026, a serious defect in due diligence and will rapidly become a market standard as case law develops.
Documentation that works in litigation. Systematic centralised logging of all AI-tool usage, tied to the specific employee and the specific assignment. Versioning of prompts and generation parameters in the same repository as source code. Registration of commercially valuable sui generis objects through UANIPIO by analogy with the registration of traditional copyright (the first 2024 registrations demonstrated that the mechanism is functional). A register of training-data sources for models used or developed internally.
NDAs and restrictive covenants. As in the case of traditional code, the effectiveness of non-compete covenants in the Ukrainian jurisdiction is limited by the absence of direct legislative regulation. Protection is therefore secured principally through NDAs and confidentiality undertakings, which in an AI context must additionally cover prompts, generation parameters and model-use architecture as separate categories of confidential information.
10. Conclusion
The owner of a product created by artificial intelligence under Ukrainian law is not the person who wrote the prompt, nor the person who ran the generation, nor even the person who paid for the model. The owner is the person who holds legal title, whether statutory under Article 33 of the Copyright Law or contractual under an agreement. As in the case of traditional code, factual control is not equivalent to legal title, and that difference is the source of future disputes.
In 2022, Ukrainian law adopted a position distinct from both the British (extension of authorship) and the American (refusal of protection), creating a separate sui generis regime for non-original objects generated by a computer program. That position permits commercial recognition of AI-generated content without diluting the notion of authorship as an act of creative human will. For a CEO this means that protection is available, but requires deliberate contractual and procedural design — which is absent from standard IP clauses drafted before 2023.
Three practical conclusions follow. First, classification of the object (traditional work under Article 20 / non-original object under Article 33 / outside protection) is decisive and must be carried out at the creation stage, not during litigation. Second, standard IP clauses in software development agreements drafted before 2023 are legally insufficient in an AI context and must be revised for dual-clause coverage. Third, the evidentiary record for sui generis objects is broader than for traditional works, and its formation is the responsibility of the business — there is neither a state register of such objects with complete metadata nor established case law at this stage.
Ukraine is in the formation phase of this field. A mandatory AI statute is unlikely to enter into force before 2027–2028 under the bottom-up scenario of the White Paper on AI Regulation issued by the Ministry of Digital Transformation, but implementation is already under way: in April 2025 the Ministry opened the WINWIN AI Centre of Excellence, and the Cabinet of Ministers adopted the Action Plan for the Implementation of the AI Concept for 2025–2026. Until the statute arrives, companies already working with AI tools are in effect setting the initial standards of the industry. Those CEOs who get this right will hold clean legal title to their product. Those who defer the decision will hold litigation instead.