
Understanding the EU Artificial Intelligence Act


AI Masters Voice

Subscribe to the AI Masters Voice podcast. Listen soon on Apple Podcasts, Spotify, Google Podcasts, and SoundCloud.

Welcome to AI Masters Voice, the podcast where we connect minds to dive into the future of AI. 

Join us in this in-depth exploration of the European Union Artificial Intelligence Act (EU AI Act), a pivotal regulation that is reshaping the AI landscape in Europe and beyond. In this episode of AI Masters Voice, host Martin Jokub, founder of AI Masters Agency, delves into the complexities and ramifications of the EU AI Act with esteemed guests - Egle Markeviciute, who manages digital innovation policy at the Consumer Choice Center, and Rokas Janauskas, a respected lawyer and founder of Janauskas Law Office.

Key Highlights

  • Comprehensive Overview
    Understand the EU AI Act's objectives, focusing on privacy, data protection, and its impact on AI developers, businesses, and consumers.
  • Global AI Race
    Gain insights into how the EU positions itself in the global AI race, comparing its regulatory approach to those of the US, UK, and Asia.
  • Innovation vs Regulation
    Engage in the debate on whether the AI Act fosters or hinders innovation within the EU, especially for startups and SMEs.
  • Intellectual Property Challenges
    Explore the legal complexities surrounding AI-generated content and the use of training data in AI development.
  • Future of AI Regulation
    Consider the act's potential to become outdated due to rapid AI advancements and the need for future amendments.
  • EU's AI Investment
    Learn about the EU's investment in supercomputers and funds for AI innovation.
  • Legal and Business Implications
    Understand the varied challenges AI companies face in different regulatory environments.

This episode offers a platform for rich discussion, providing diverse perspectives from the fields of policy advocacy and law. Whether you're an AI enthusiast, developer, legal professional, or just keen on understanding the future of AI regulation, this video promises valuable insights and thought-provoking discussions.

00:00 Intro
00:36 Egle Markeviciute introduction
02:13 Rokas Janauskas introduction
06:07 EU AI Act impact on Egle's work
09:05 Aya project
09:45 EU AI Act impact on Rokas' work
12:12 EU AI Act passing and the institutions involved
15:45 What is the EU's biggest fear regarding AI?
17:17 Whom does the AI Act try to regulate: technology or people?
18:59 The EU AI Act takes effect in 2026. Will it be outdated? (Rokas)
20:30 The EU AI Act takes effect in 2026. Will it be outdated? (Egle)
22:39 AI is developing at the speed of light
23:35 Accessing the EU AI Act
24:11 Link to an EU AI Act explainer
24:25 How will the EU AI Act impact AI innovation in Europe?
26:19 Martin's opinion on the EU AI Act's impact
27:37 The EU AI Act should follow the success of GDPR
28:45 Rokas' opinion on the EU AI Act and innovation
31:00 Are AI prompts legally protected?
34:15 Who owns the input data?
36:18 Content licenses for AI training
36:55 Can prompts be protected?
40:38 European supercomputers and the ~20B EUR/year EU AI innovation support fund
42:40 How will these funds be allocated?
45:31 The AI regulatory environment in the US, UK, and Singapore compared to Europe
49:05 Potential legal implications for AI companies in different jurisdictions
51:52 Egle's conclusions
52:40 Social media links
52:48 Rokas' conclusions
53:29 Closing notes: request to like, subscribe, and comment
54:30 Credits

External resources mentioned in this podcast episode

Uses the EU AI Act draft version of 21-01-2024 at 17h11.

Aya is an open-science project that aims to build a state-of-the-art multilingual generative language model harnessing the collective wisdom and contributions of people from all over the world. Support this amazing initiative and make sure no language is left behind!

EUR-Lex is your online gateway to EU Law. It provides the official and most comprehensive access to EU legal documents. It is available in all of the EU’s 24 official languages and is updated daily.

To support the further development and scalability of AI models, access to world-class supercomputers that accelerate AI training and testing is crucial, reducing training time from months or years to a matter of weeks.

Introducing the “Large AI Grand Challenge”: an open invitation to Europe’s generative AI community, designed as a big competition to spark the creativity of visionaries and founders from Europe.

Full video transcript


Martin Jokub:

Welcome to AI Masters Voice, the podcast where we connect minds to dive into the future of AI. I'm your host, Martin Jokub, founder of AI Masters Agency. And today we are exploring a hot topic that's changing the legal AI landscape in Europe and around the world. In this episode, we will be discussing the European Union Artificial Intelligence Act and its influence on AI developers, creative businesses, and end users across Europe.

And today I am with Egle Markeviciute, who currently manages digital innovation policy at the Consumer Choice Center. Egle used to be a deputy minister at Lithuania's Ministry of Economy and Innovation. And last week she was accepted into the Stern Leadership Academy by Stanford University.

Congratulations, Egle!

Egle Markeviciute:

Thank you so much.

Martin Jokub:

It's great to have you with us. Can you explain to our audience, in simple words, what you do and how you're related to the artificial intelligence field?

Egle Markeviciute:

Thank you so much for a very kind introduction. The most difficult answers come after the simplest questions.

I'm a public affairs professional. I work as head of digital and innovation policies at the U.S.-based policy advocacy group, the Consumer Choice Center. As you've rightly said, I also spent three years working on digital and innovation policies at the Ministry of Economy and Innovation of Lithuania, where I have a couple of significant reforms in my pocket, including an innovation reform, a cloud reform, and the Lithuanian data embassy. I'm a public affairs person. I've been near politics throughout my life, and I'm fascinated by technology and innovation. And as a person who has seen policies up close in both the public and private sectors, I do know how impactful they are for the development of technology.

Martin Jokub:

Thank you, Egle. It would be really great to hear more about everything.

And we also have Rokas Janauskas with us, a respected lawyer and founder of Janauskas Law Office. He is skilled in legal matters like court cases, public procurement, contracts, and creative rights, especially in IT, entertainment, and film.

Rokas also teaches future lawyers. Thanks for joining us on the AI Masters Voice podcast, Rokas. Could you briefly introduce yourself and tell us about your relationship with AI in your daily routine?

Rokas Janauskas:

Yes sure! Thank you, Martin, for such a lovely introduction. I work now as a lawyer. I'm the founder of my small boutique law firm.

We mainly focus on a diverse client base: from large IT companies working globally to smaller businesses and startups. I work quite a lot in the entertainment and film sector, with Lithuanian producers, animation producers, and game developers. Throughout my career, I have worked in multinational companies here in Lithuania, which were later acquired by US companies, and also in bigger law firms; now I focus more on my private practice. And about my relationship with AI, I would say it's close to zero. I would say no relationship. I'm a bit old school in my work.

Martin Jokub:

You mean your personal relationship, yeah, but in your work, maybe it's ...

Rokas Janauskas:

I wanted to come to that.

Yeah, in my personal work, I work mostly with a translation tool, if that can count as AI. But from the clients' perspective, of course, I work with it. I now have quite a lot of questions in this regard from the creative industry, mainly about who owns the rights to AI products, and also from developers of AI solutions, building for themselves and for clients.

To expand a bit on why I'm not using AI much in my own practice: maybe it's partly a language barrier at the moment. I don't see solutions that could catch Lithuanian nuances in laws and the legal environment. So maybe someone from the community could contact me and change my opinion. Maybe it's just my wrong assumption, but at the moment, at least with ChatGPT, I found that it tends to make things up.

For some questions, it provides you with answers that are not necessarily factual or true. For example, there are also some funny stories where ChatGPT provides case law that does not exist, but for some time it persuades you that, no, it's true. Without a certain liability in dedicated lawyers' tools, I would not trust it. At least at the moment, I trust myself, my own opinion, or a human's opinion more. And maybe it's also a bit a question of my way of not following trends: I was not that much into blockchain, Bitcoin, NFTs, and now AI trends. But from a legal and working perspective, I was very much into it, because clients are catching those trends and trying to develop things. So I had to work with these technology issues, both from my personal perspective and from the clients' perspective on how I work.

Martin Jokub:

Okay. Thank you, Rokas. It will be really interesting to talk about that, especially those examples, like the court case where one of the lawyers prepared documents and they were prepared only with ChatGPT.

He didn't check it. I know it happened in the US, I think, half a year ago. It was quite an interesting case where the lawyers trusted the technology fully and went to court with it.

Okay. Now I would like to discuss how the EU AI Act will impact you and your personal work. Egle, could you share your thoughts on how this might impact your work at the Consumer Choice Center, and maybe its broader impact on consumers and innovative companies in Europe and outside?

Egle Markeviciute:

Disclaimer: it's my personal opinion, and certainly it might be a little subjective. I think the AI Act, like most things in the world or in life, is a two-sided coin. On one hand, in my opinion, the biggest pro of the EU AI Act is ensuring that when using AI systems, companies and workplaces have to comply with privacy and data protection regulation.

This is important. This is a European approach. It might be questionable in other parts of the world, but personally, I like that and I think that is the biggest plus of the EU's AI Act. When it comes to minuses or cons, there are some as well. As a consumer, I'm interested in a thriving economy and market in the EU that can compete with other parts of the world, meaning that both startups, SMEs and bigger companies have regulatory certainty, a regulatory space, and certain tolerance for risks when developing their products.

And it's a huge debate whether the EU AI Act will allow that or not.

Martin Jokub:

All right. And how about your work personally? Will it impact you somehow in the Consumer Choice Center? Or…

Egle Markeviciute:

This is a very difficult question. Of course, I'm a consumer as well. It is too early to say whether the EU's market will be impacted and what happens after two years, when the EU AI Act comes into force: whether we will have products that are less functional and have fewer features than in other parts of the world. I do use AI products, and I think even Microsoft Office and Word themselves have had integrated AI systems and products for a long time now. But I do agree with the notion that you still have to use your critical mind to evaluate whether the information provided to you is right.

I also really like the comment about the Lithuanian language. I used to work in the Ministry of Economy and Innovation, and one of the big and most difficult projects was ensuring that language models, or as we call them in Lithuanian, „Lietuvių kalbos ištekliai“, are created and digitized so that AI systems can speak Lithuanian, if you will. At the moment, we still don't have Lithuanian functionality in products like notes, phone dictation, microphones, and so on and so forth.

This is certainly a limitation for a Lithuanian speaker to use AI systems in the future.

Martin Jokub:

By the way, talking about the Lithuanian language, there is one open-source AI project called the Aya project. Lithuanian is included as one of its 126 languages, and other models will be able to use that model too.

I believe this year we will start seeing much more improvement using different languages with AI.

Egle Markeviciute:

Wonderful. Maybe the state will not need to invest so much money.

Martin Jokub:

It's an open-source project. I believe that the people behind it are keen to implement new technologies and new languages into it.

And Rokas, how about you? How do you see the EU AI Act affecting your practice, particularly in intellectual property, maybe the film industry, and contract law in general?

Rokas Janauskas:

I would probably also need to look at it from two sides: one from my personal and practice view, and the other from the clients' perspective. In some cases, they interact.

At first, of course, the obvious thing: there will be some work for clients, and for myself assisting them, in complying with the EU AI Act. And the incentive for doing this is pretty clear and straightforward, because the law foresees quite big fines for non-compliance, ranging from 35 million EUR or 7% of global turnover down to 7.5 million EUR or 1% of turnover. The incentive to comply is not something like “we will not comply, then we will get a warning or a small administrative fine, and we can continue”.

It's a major incentive to really comply, and companies are probably already reviewing the proposals and will be waiting for the final text. At the moment, it's probably a bit early to say exactly what we can start doing now. The main principles are agreed, and we are still waiting for the final wording of the act, which will be approved by the Parliament and the Council. So some changes are still possible, but after the law is adopted, there will be a transition period of two years to implement it, and I think those years will be busy. I would even imagine something similar to GDPR, where companies also had the incentive of big fines to start implementing: we started preparing, and everyone did a great job to comply.

From my personal view, as regards copyright and IP rights, the act also foresees, as one of its principles, that companies will need to comply with EU copyright regulation and to…

Martin Jokub:

You have in mind the training of large language models; usually these companies are quite big.

Rokas Janauskas:

And models that generate images or video, for example, where you need to put authors' content into the training model. I'm talking about those.

Martin Jokub:

Right, before going deep into this, I would like to ask you: maybe you can explain to me and the listeners how this EU AI Act was passed and which institutions were involved in crafting it?

And when did it start? Because from the headlines in the news portals, it looks like it was passed in 48 hours, but I believe that's not true. So can you tell me more about this?

Egle Markeviciute:

Absolutely! So the three main institutions that were working on the EU AI Act are, first, the executive branch, the European Commission.

That is the main owner; they drafted it. The AI Act is part of a bigger digital strategy. We have to admit that this European Commission was very effective and proactive in terms of technological regulation. There are several other acts and regulations that have either come into force or are coming into force in the future.

Then there's the Council of the European Union, which is comprised of member state representatives. And of course, the European Parliament.

If we look at the debate between the European Commission, the Council of the European Union, and the European Parliament, I would say the European Parliament was even stricter in terms of data protection when it comes to the AI Act, compared to the Council of the European Union and the European Commission.

For example, the Council of the European Union and many member states were really against banning military and national security organizations from using AI systems that work with biometrics. This has been stated by our criminal police, for example the Criminal Police Bureau. That position did not win the support of the European Parliament.

The European Parliament wanted to ban biometric AI system use completely, meaning that they were the most conservative, or the strictest, ones. So the main organization, the main institution, the main owner is the European Commission, and the others participate. It started in 2021, but I would say it reaches even further back, to 2019, when this European Commission took office. The new Commission was appointed with many new ideas on how to govern and regulate digital markets and data, and how to promote European digital sovereignty, if you will, and the AI Act was a part of it.

Martin Jokub:

Maybe you can discuss the main purposes of this act. What are the main points that were agreed on?

Egle Markeviciute:

The main purpose: as I said before, the AI Act is part of a broader European digital strategy, and its main aim is to regulate the use of AI.

And, as we've seen in the latest document, also the creation of AI systems and AI tools in the European Union. It ensures that the regulation is risk-based and, as European officials themselves have proclaimed many times, human-centric, meaning that citizens, the people, are protected from the perceived dangers of AI in the future.

Martin Jokub:

What do you think the European Union, the Commission, the Parliament, and all these institutions are most afraid of? What's the worst scenario they see AI causing for humanity, let's say?

Egle Markeviciute:

I'll try to be politically correct. It's technological Luddism, if you will, or seeing the dystopian worst-case scenarios.

This narrative has been very prevalent over the past few years, and of course it affects regulators and policymakers as well. The European Union, following its GDPR success, or attempts at it, wanted to introduce the same type of regulation to ensure the safety of European citizens and Europe as a whole.

That's, I think, their main motto, combined with other initiatives: the Data Act, the Data Governance Act, the DSA, the DMA, and other initiatives oriented towards European digital sovereignty. This is just a part of a bigger picture to ensure humans are safe. I think Ronald Reagan was the one who made the famous joke that “I'm from the government and I'm here to help” are the scariest words a person can hear.

So, not throwing that at the European Commission yet, but the European Commission and Europe want to regulate and ensure that people are safe in the future.

Martin Jokub:

From my perspective, I would like to add: who are we regulating? The technology, or the people who use the technology? What do you think?

Egle Markeviciute:

The initial purpose, and this was stated by, I think, the CEO of Mistral, the French startup that opposed the EU's AI Act.

The initial idea was to regulate the applications, so that the applications are safe for the consumer to use. But as it often happens, the appetite grows with eating, and the European Commission and the architects of the EU AI Act went deeper into the regulation of the technology. Of course, there are opinions that providing the data on the models, the information and data sets that a startup or a company is using to train its AI systems, is not exactly regulating the technology. But I wouldn't call it any other way than going into the kitchen, deep into the inner workings of the companies.

I also see a huge debate or discussion on how that will affect the future of European startups and tech companies.

Innovation needs not only talent and finances, which the European Union has, but also a good regulatory environment, tolerance for risk, and IP protection. So if we're going completely into the open-source area, how will that affect our competitiveness in global terms?

Martin Jokub:

Thank you. And this is the discussion for all of us.

Rokas, and Egle, you both mentioned that this act will take force only in 2026, in two years.

What do you think about the timeframe? Is there a risk that the EU AI Act will be outdated at that moment? Especially knowing how fast AI development is progressing. What do you think, Rokas?

Rokas Janauskas:

From what I see in my practice, I would say that law always lags behind technology.

It's very difficult for laws to outrun the technology. I think we've tried, so it might work, but not in most cases. Because in my view, AI affects so many areas in so many different sectors, and it can be used in basically every sector, in every aspect. IP protection, for example, is in a way still unregulated or unharmonized.

Lots of questions are still open and are being discussed and decided now. We still maybe don't know the full range of applications where AI will be possible. So I think such a risk exists that it will be outdated, but of course it can be updated and amended to reflect that. From my experience with technology, the law in most cases comes a bit after the process.

First comes some technology, and then decisions on how to regulate it. That's my opinion.

Martin Jokub:

And Egle, how do you see it?

Egle Markeviciute:

I see it from a philosophical point of view, as a huge flaw of the continental law tradition in comparison to common law. For example, if we compare the UK's approach, the US approach, and Singapore's approach, they are not rushing towards imperative rules or regulations that get outdated very quickly.

They work with agreements and recommendations for technology companies, for the ecosystem. They try to understand the technology themselves, and only then do they think about what kind of regulation can be put into practice. We don't know yet.

Two years, Martin. You hail from the startup scene yourself; you know it better than we do. Everything can change in two years. Some startups can scale, exit, leave for the US, or fold. There will be a lot of existential questions for startups and SMEs, especially in Europe, because of the compliance costs. My colleague here should be happy about the EU's AI Act: it will result in more billable hours and more work, because more companies will need help from lawyers to understand the regulation and comply with it correctly.

But in general, when it comes to this regulatory framework, I certainly think it may end up outdated in two years. We have to note that the two years, or the end of 2025 or 2026, is yet to be agreed. So we will see the conversation on the EU's AI Act resume next week. Hopefully it will result in final decisions, and then it will have to go back through the European Parliament, the Council of the European Union, the whole bureaucratic chain, to finalize the agreement on the EU's AI Act.

Martin Jokub:

All right. When you say startups are moving really fast… can you imagine that ChatGPT arrived to the public only one year ago? Almost 13 months ago… What will happen in the next two years? It's really hard to predict, with startups coming and going, and everyone building different solutions. I believe a lot of teams are building new solutions at the moment. Just last year, when I started checking how many AI solutions we have, I saw more than 8,000 AI solutions in some databases already. They didn't exist two years ago, or a year ago. I worked with some startups that were in the AI industry two years ago, but it was more about the far future; even they didn't expect what would happen in the next year. It will be really interesting to see how the landscape changes by 2026.

And another question: you mentioned that they are coming up with the final draft of the EU AI Act. Maybe it's already accessible? Because I tried to find that document, and it was really hard to find the newest version of it. I found only a half-year-old one.

Egle Markeviciute:

I think there's a newer one available on the EU's legal act system, EUR-Lex (https://eur-lex.europa.eu/), but the newest version, in my opinion, is still not publicly available.

European officials are sharing some information on LinkedIn and other areas, but it's not openly discussed yet.

Martin Jokub:

For our viewers, I will put the link below the video when it becomes publicly available. If you don't see the link yet, please check back after some time; the link will be there.

This is a question, I believe, for all of us. How do you think the EU AI Act might affect innovation in Europe? As you said, Macron and other political leaders have already said they think it will lower Europe's competitiveness. Egle, what do you think from your side?

Egle Markeviciute:

As we discussed before, innovation requires a lot of elements. We need good science. We need cooperation between business and science. We need talent. We need finances. We need regulatory certainty. We need collaborative regulators and policymakers. We need clear intellectual property protection, and so on and so forth. Commissioner Breton has said himself that innovators need regulatory certainty and that the EU's AI Act will bring regulatory certainty for them.

I am not exactly sure about that, because, as we've discussed before, two years is a long time in which many things can happen. Many companies, including smaller ones, will become more aware of the compliance costs that come with the EU's AI Act. There could be a significant impact on tech talent in the EU, who may not even try to start their companies in the European Union.

This may sound a little bit negative, but there are a lot of countries in the world that want to become AI hubs: the UK, the US, Israel, the UAE, China, Singapore, and so on. So there are a lot of countries competing for it. And those with smaller populations and smaller markets still focus on a regulatory environment that is much more open than the European one, increasing their competitive advantage.

Martin Jokub:

From my viewpoint, I see, let's say, a negative side. Especially since we, as AI developers and custom solution developers, will be using different models, and if the European market requires more regulation, we need to adapt the product to those regulations. That means more legal costs for us to enter the European market. Because my entity was established in the UK, for example, I have a different approach, and the world is much bigger than just Europe.

And I believe some of the technologies and innovations entering the market right now will be fine, but after two years some of the solutions could be cut off. Even Elon Musk, for example, had a different opinion on regulations for X, and he said: “Okay, the European market is not so big, so we cut the European market off from accessing X.”

We still have access to it, but I'm not sure for how long, or how regulators will regulate it. Maybe the world will just turn around Europe, and Europe will become a third-tier region where new AI-related technologies arrive last. I'm not sure. I'm just saying this is my opinion.

Egle Markeviciute:

Can I add one more thing?

I think European officials do believe and hope that the AI Act will follow the success, or the impact, that GDPR has had on the world. But as International Economics 101 says: you can reduce tariffs in your own country, but if the countries around you don't reduce theirs, you will be the sole loser. And if the global scene changes, then everybody benefits from it.

With the AI Act and the negative communication around it, if the politicians and policymakers in the US, UK, and other countries do not follow the European path, and that is very likely, then we will certainly have a huge difference between the EU and other countries.

The EU expects other countries to follow in its footsteps. There are different scenarios, and this is one of them: if they do, then the European Union is the pioneer, everyone follows the same path, and we're all in the same situation. But in my personal opinion, that's highly unlikely.

Martin Jokub:

And Rokas, how about you? I remember we talked about this.

Rokas Janauskas:

I also think there was not only competition, but very close watching from bigger countries that want to participate, or see the future: whether countries can assist the development of AI, and how to regulate it, or whether to regulate at all. And there were different approaches. Who will be first? Who will take the first steps?

We see now that the European Union took the approach of regulating and being first, and the United States, for example, took a slightly different approach under Joe Biden, with a decree or executive order in the form of recommendations that AI companies should follow. A different approach, I would say. And I would agree that regulation, and the amount of regulation, especially for startups, new technology developers, and smaller companies, is a really big and central issue, and we don't need to look far back. For example, with FinTech: at one point Estonia was a very attractive country, but its regulation became stricter, so companies started to move elsewhere. Lithuania was trying to attract companies in that area by also presenting itself in the global market as a suitable place for FinTech companies.

But after some time the regulation got stricter, and we see those companies also moving out. Businesses are now very mobile; they don't need to be stuck in one country. So they, of course, check and evaluate legal costs, administrative costs, reporting costs, and so on. That is one side, for business. On the other side, I'm in a way happy that some regulation, in my view, should be in place, not only in the form of recommendations. That relates to the aim of protecting human rights, data, and privacy issues related to tracking and surveillance, so that it wouldn't be as open as in China, where AI is used for government purposes to track, evaluate, and classify you. So I think in some ways yes, but as for how much regulation would foster innovation, of course, less is better for development. That is my view.

Martin Jokub:

We were talking about this: we all know that things like writing, photography, and software are protected by law. But what about AI prompts? Can AI creators use intellectual property laws to protect their work? And can they use trade secrets or confidentiality agreements to keep their data safe?

What should they think about in these cases?

Rokas Janauskas:

It's a very interesting topic, in my view, and again I would split my answer into several parts. In general, AI shakes up intellectual property rules. So, first, who owns the products generated by AI? The courts of the United States and the Intellectual Property Office of the United States took the clear and straightforward position that only a human being can be the author, the owner of the copyright.

So only creations created by human beings can be protected by copyright law, and anything created by AI is not protectable. So if you get results from ChatGPT or Midjourney, they are not protected by copyright. A slightly different situation is when content is not AI-generated but, for example, AI-assisted. But then, again, the question is how much human input should be present.

Do you use AI just as a tool, like Photoshop in photography, or did AI generate it? In the AI-assisted case, there was one case with a comic book that was awarded copyright. The simple reason is that there was human input in creating the entire comic book and its underlying story. But there was also some controversy afterwards, because it was later revealed that part of the images in this comic book were created with Midjourney.

So the IP office of the United States changed its position, saying that the portion created by Midjourney is not protectable by copyright. And we have a similar position in the United Kingdom, where the Supreme Court said that AI cannot be named as a patent inventor, for example. But we have the opposite situation on the opposite side of the globe. I read about a case in China where the Chinese court said that an AI-generated image is protected by copyright, because the person put so much effort into the prompts, changing them and finding the right qualities of that image, so they accepted that it can be protected by copyright. And there are also similar cases in Australia, where the court said that AI systems can be regarded as an inventor.

So we have two very different positions in one part of the world and another. For the moment it's not clear. It's actually a very interesting time, because these cases keep appearing and courts decide them; they go to higher courts, and at some stage, for example in the United States, the Supreme Court will probably state its final position on how it sees AI in relation to copyright. And we can have a situation where something is protected in one country and not in another. Another problem with intellectual property rights related to AI is who owns the input data. So, basically what we talked about a bit with…

Martin Jokub:

Oh, training data, you mean, yeah.

Rokas Janauskas:

Yes, the training data, because there have also been lots of lawsuits this year from authors, graphic artists, journalists, and publishing houses, and the latest is the New York Times suing OpenAI for the use of their data in training language models or other models. Authors basically raise the question of consent.

Did we consent to that? Do we need to consent to that? And the question of adequate compensation, because from the publishing houses' and authors' perspective, it looks like AI companies are a bit free-riding on their work, or in a way competing with them, without proper compensation to the authors in question.

And I would say that for some it's maybe more of a personal issue: the author does not like that his creations would go into an AI training model. But for some, like the New York Times, I think it's just a question of adequate compensation. Probably they believe in the same AI future, but they just want to step in and have a piece of the revenue pie that is generated.

And again, I think it's also not a new issue in general; we had similar situations with Google digitizing books, with the internet, with music streaming. We can remember Napster, which was closed because it didn't have licenses, and now it's similar.

So it's now just a question, again, of where IP policy, IP laws, or case law will go. Will we have some exclusion or limitation for AI companies, saying it's an important technology, you can use the content but with some limitations, or will we end up with a requirement that a license is needed for each use?

Martin Jokub:

Or maybe there will be a separate AI license, or an AI training license, which could be a new model.

Rokas Janauskas:

Exactly. And this will have an influence, because under each of these possible options there are different business scenarios. For example, whether you can use it for free, or you need a lot of legal, administrative, and financial effort to buy all those licenses, or there is some simpler accepted scheme, as with the use of films in some TV broadcasts, for example, where you just fill in a form saying you used some content and pay some agreed fees.

So it can affect how this technology develops in really different ways. And what about prompts? You asked how prompts can be protected, and whether they can be protected. One aspect is whether prompts can be copyrighted. In my view, some prompts, theoretically, if they meet the criteria for a creation from a copyright perspective, can be sufficient to be protected under copyright.

But in that case, the prompt should be an original idea, a result of a human creative process, and fixed in a tangible medium of expression. But when it comes to the question of whether we can do this for each prompt, it probably won't be possible: for a simple request like “make me some pictures”, it probably won't apply, because it just consists of generic wording.

But again, even if we have protection for the prompt itself, it does not mean that we protect the outcome that the AI generates from it. So, we talked about it a bit earlier: what to do with prompts. I think that even going in that direction, copyrighting the prompt probably won't be a good scenario, especially for smaller companies, although it could be possible.

So maybe a better way, in my view, would be to secure prompts from the trade secret or commercial secret perspective. That would be my recommendation to companies, because prompts can be hard to copyright, so it's better to go through confidentiality and commercial secret provisions.

For example, in Lithuania, for that a company would need to have a list of its confidential commercial information. My recommendation would be to include in that list that prompts, and everything related to how that company's AI work is done, are confidential information, a commercial secret of that company.

And then, of course, employees need to be informed about such a document and sign it. In addition, with employees in place, there is also the possibility, for example when all of this valuable company information is held by one or a few people, to have safeguards so that after such a person leaves the company, non-competition obligations apply to them. But in such cases the company needs to pay them at least 40% of their salary; such is the requirement in Lithuania. If you operate in different countries, you need to check those countries' laws to see what additional safeguards you have.

It's the same as with other tech companies, and I think for AI it's no different: if you want to protect your company's IP, the best way is to protect it with confidentiality. And even with clients, if you don't want the clients to use your information for their own benefit, you can also include a provision on that in the agreements.

With IP in general, it's also worth noting that even if prompts can be regarded as copyright-protected works, without an additional agreement, in Lithuania for example, an employee passes the rights to that creation to the employer only for five years.

So for companies that normally work in the field of IP creation, we need to have additional agreements for that: employees, each subcontractor, or any person involved in the project would sign additional agreements transferring the IP rights to the company.

Martin Jokub:

Thank you Rokas for a really detailed explanation.

I believe a lot of companies that are currently in the AI space developing different solutions will benefit from that.

Egle, you mentioned that something interesting is being built in the EU, related to supercomputers and a huge 20 billion euro per year fund for AI innovation. Can you tell us more about these projects, who might benefit from these funds, and how you think these supercomputers and funds will be used?

Egle Markeviciute:

I'm not exactly sure about the exact sum, but the European Union has been investing over the past three to four years, I think, in the development of European supercomputers. Three of the supercomputers are now world-class: one is in Italy, called Leonardo; the second one is in Finland, called Lumi; and the third one is in Barcelona, Mare Nostrum.

The European Union has been investing in the development of its supercomputers, thinking and expecting that this will give the European Union an edge in terms of AI, in the global AI race. And it probably will. There has been a debate on whether the European Union should buy the technology that is already available in the world and use it for European purposes, or develop it on its own.

In this case, the political notion of European digital sovereignty is that Europeans want to build their own supercomputers rather than buy them from the biggest global companies. Commissioner Breton and other officials announced in November 2023 that some European SMEs and AI startups will be able to use these supercomputers in the future.

We can include the exact call in the links below. Certainly, this might become an opportunity for startups and SMEs in Europe.

Martin Jokub:

Okay. So you have that link; you can send it over so I can include it. And what do you think, how will these funds be spent? On one hand, will these funds be dedicated to innovation in the AI sector, or will a bigger part of them be spent on trying to adjust to the legal environment of the European Union?

This is my question. What do you think?

Egle Markeviciute:

Usually European funds have very strict rules on where they can and cannot be used. Probably they won't be usable for the costs of complying with the AI Act. The question is bigger, and very important and very painful, I would say, especially for countries in Central and Eastern Europe.

The European Union has a lot of money and funds to promote innovation, be it structural funds in the case of the CEE countries, the new Europe that joined in 2004, or Horizon 2020, which is one of the biggest opportunities for companies of different sizes. If we look at the statistics, we see that our companies in Lithuania, for example, are at a disadvantage, because participating in this program successfully requires having partners in different European countries. In Lithuania, for example, we do not have people with significant success stories to train our companies, and so on and so forth. I will not dwell on the ambition our science and other public institutions have in order to promote business participation in Horizon 2020.

But this is a big problem. And what I'm getting at is this: with these opportunities, the supercomputers, the AI promotion, all the innovation promotion tools that will be available for AI companies and startups, it's an open question whether companies and people from Central and Eastern Europe, which is smaller and where the opportunities are lower, will be able to use these funds as much as the companies that exist in the West. We'll see; we cannot say too much at the moment. But we have to recognize that our institutions and agencies have to be preparing for that as time goes by, especially if the European Union does not give up the idea of this big European digital sovereignty, which in my personal opinion is not the right way.

We are too small in terms of the global race to go it alone; we have to collaborate, but at the same time, while respecting and securing the interests of our markets, our agencies and institutions have to be prepared for these new developments in the European realm.

Martin Jokub:

You already mentioned the different regulatory environments for AI in Europe and in other regions.

And Rokas, you also mentioned that. But maybe you can compare some things: for example, what is different in the UK, the US, or Singapore compared to Europe?

Egle Markeviciute:

Absolutely. Comparing the US, UK and Singapore to the European Union, for example, we can see that Singapore and the UK invest a lot in understanding the technology before regulating it.

For example, in Singapore, I think they've been working on an AI regulatory sandbox for over a year now. They have recommendations; they cooperate with companies. The UK is following the same path. I think it was March or April 2023 when they published their approach to AI regulation, and they have been working with companies along the same lines.

If you look at the UK government's website when it comes to AI innovation, and we can put this link below as well, you will see a document of some 50 pages with a lot of models explaining how companies of different sizes, startups, and SMEs should act. Different cases are explained in a very understandable, human manner.

So they're investing a lot in it. And this, at least for now, looks much more open and innovation-promoting than the European Union's approach. There are other countries that do not talk that much about regulation. There's China, with a lot of state investment in AI development; we don't know a lot of things there.

We do know they are issuing some type of regulation. If we look at Japan: until the discussion on the AI Act became very heated in December, they were claiming they would follow the European path. We'll see how it goes afterwards. But I think in East Asia it's one of the few countries that have proclaimed that the European way is the way.

We have the UAE, where they are investing a lot of state money in the development of an AI ecosystem, but we do not really hear much about their regulatory measures yet. We have Israel, always open to innovation, one of the spots where talent, science, and regulatory openness are combined.

There are other countries that want to join the AI race.

I've noticed in the public domain that when comparing different countries, the EU is sometimes compared to the US or Canada as a whole. As far as I know, we're not a federation yet; we should compare Germany or France to the US or Canada, not the EU itself. If we look at the EU as a whole, the numbers are not that bad.

I think the EU is around number three or number four in the world in terms of the combination of talent, private startups, finances allocated to AI, and so on and so forth. But we still do not live in a federation. We do not have a unified state, and we should compare individual countries to other countries.

Martin Jokub:

Thank you Egle.

And Rokas, in your view, what are the potential legal implications for AI solution companies operating in different regulatory environments, like Europe, the UK, and the US? What do you think?

Rokas Janauskas:

In my view, because I work quite a lot with companies that operate globally, I don't see it as a big change compared to doing business in other regulated areas.

Of course, it's better when we have at least some harmonized principles. But for example, as I mentioned with copyright, it's now very unclear. Part of the world says AI-generated content cannot be copyrighted; part of the world says it can and that it can be protected under copyright. And we have different approaches to surveillance and the scraping of video data by AI.

China maybe has one regulatory approach; Europe and the US have different ones. In some places there are just recommendations, which means companies tend to adhere to them but can take a risk and not adhere; elsewhere it's regulated. So it's more of an administrative and legal burden. I don't see a big change from other regulated areas where business is done, but the main thing is that if there is regulation, a company needs to adhere to it and comply with it. What implications can there be? We've touched on this a bit: companies and startups can choose where they start operations or where they develop. Companies can choose the places where they want to be established and where they want to have their development work, and then spread the product around the globe.

So in each case we will need to meet certain requirements in the countries in which we operate. And I also fully agree with Egle that when we talk about the European Union, we must not forget that the European Union is also made up of different countries. Even now, some aspects are regulated across the whole European Union, like GDPR, but with some software issues, consumer rights regarding software for example, we have Germany with slightly stricter standards and case law, and we have Lithuania, for example, which does not even have a law on certain specific IT-related issues. So there is always this uncertainty. So the answer, or my suggestion, to companies is: for the markets you want to enter, consult with lawyers in those markets, keep a regulatory overview, do your homework; otherwise it can be costly, or you may even be unable to launch your product.

Martin Jokub:

Thank you, Rokas. And thank you, Egle, for joining us today on AI Masters Voice and sharing your expertise and knowledge with the audience.

Egle Markeviciute:

Thank you for having me here. It was really interesting to hear other perspectives, especially very legally oriented ones. Time will show whether regulating innovation brings more benefits than negatives.

As a person who wants to see the European Union thrive and compete with other regions in terms of technological development, I think we still need to have a very serious conversation about open source versus IP protection and how that affects innovation potential and business as a whole. So thank you for having me.

I will follow your work. Please subscribe and follow us on social media. I'm looking forward to hearing other thoughts.

Martin Jokub:

Thank you, Egle.

And Rokas, your last thoughts?

Rokas Janauskas:

I also feel very enriched by this discussion we had. It was interesting to hear more input and detailed information on how this regulation of AI is going around the globe.

Personally, I am interested in IP topics related to AI: how, in this situation, ownership of AI-generated content will develop, and whether it will be harmonized. I'm also very interested in how this input training data issue will be solved. Will it be some licensing model?

Martin Jokub:

All right. It was a really interesting discussion, and I believe this topic is so open that we could continue for hours trying to understand the different implications of this new AI Act and the legal environment for AI solutions.

I encourage everyone who has listened up to this moment to put your questions and thoughts in the comments section. We will review them and try to answer those questions for you.

Thank you for joining us on AI Masters Voice. I encourage you to leave comments, like our video, and share it with your friends, and colleagues.

It's really important for us because it's the first episode of AI Masters Voice and your support is needed.

Don't forget to comment, and share this video.

Subscribe to the AI Masters YouTube channel for more exciting conversations about the dynamic world of AI. Until next time, keep innovating, and see you soon!

Related content

This new legal framework aims to regulate the development and use of AI technologies within the EU, focusing on ensuring trust, safety, and the protection of fundamental rights. It also raises critical questions about its broader impact.

About the Author


A seasoned digital business architect and full-stack digital marketer, he brings over 24 years of experience in launching, automating, and scaling online projects, with a particular focus on the tech, AI and education sectors. His diverse skill set extends to AI training, startup advising, and founding innovative initiatives.


{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
>