I’d like to know how you expect governments, or even private institutions, to enforce this, since most countries won’t care about foreign laws.
They can forbid companies that use the AI from doing business in their territory, the way the EU does with its privacy laws. Google not being able to use its chatbot search in the US would be a big deal.
Sounds to me like you’d have to prove someone used AI in their work first, which makes this difficult to realistically enforce.
Not hard when they’re advertising it right now. And if they do try to keep it secret, all the government has to do is subpoena a look at the backend.
But honestly, since when do we just not have laws because something is hard to prove? It’s hard to prove someone INTENDS to murder someone, but that’s a really important legal distinction. It’s hard to prove someone’s faking a mental illness, but we have laws around that too. It’s really hard to prove sexual assault, but that still needs to be outlawed.
Compared to that stuff? Proving someone used an AI is going to be a piece of cake, given all the data that gets collected and the amount of work it would take to REMOVE the AI from a business process before the cops get there.
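To make that concrete, here’s a toy sketch of the kind of first pass an investigator could run over subpoenaed HTTP access logs. The hostnames are just illustrative examples of well-known AI API endpoints, not an exhaustive or authoritative list, and real forensics would obviously go much deeper:

    # Toy sketch: count requests to well-known AI API hosts in an HTTP access log.
    # The hostnames below are illustrative examples, not an exhaustive list.
    import re
    from collections import Counter

    AI_API_HOSTS = [
        "api.openai.com",
        "api.anthropic.com",
        "generativelanguage.googleapis.com",
    ]

    def flag_ai_traffic(log_path: str) -> Counter:
        """Return a count of log lines mentioning each known AI API host."""
        pattern = re.compile("|".join(re.escape(h) for h in AI_API_HOSTS))
        hits = Counter()
        with open(log_path) as f:
            for line in f:
                match = pattern.search(line)
                if match:
                    hits[match.group(0)] += 1
        return hits

    if __name__ == "__main__":
        for host, count in flag_ai_traffic("access.log").most_common():
            print(f"{host}: {count} requests")

The point isn’t that a script like this is the whole investigation; it’s that outbound calls to AI vendors are exactly the kind of artifact that’s hard to scrub from a business process after the fact.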
Enforcing a potential AI ban in work environments is unrealistic right now, because it’s hard both to prove that AI was actually used for work purposes and to then enforce the ban. Let’s break it down in simple terms.
Firstly, proving that AI was used for work is not straightforward. Unlike physical objects or traditional software, AI systems often operate behind the scenes, making it difficult to detect their presence or quantify their impact. It’s like trying to catch an invisible culprit without any clear evidence.
Secondly, even if someone suspects AI involvement, gathering concrete proof can be tricky. AI technologies leave fewer visible traces than conventional tools or processes do. It’s akin to solving a mystery where the clues are scattered and cryptic.
Assuming one manages to establish AI usage, the next hurdle is enforcing the ban effectively. AI systems are often complex and interconnected, making it challenging to untangle their influence from the overall work environment. It’s like trying to remove a specific ingredient from a dish without affecting its overall taste or texture.
Moreover, AI can sometimes operate subtly or indirectly, making it difficult to draw clear boundaries for enforcement. It’s like dealing with a sneaky rule-breaker who knows how to skirt around the regulations; all you have to do is ask.
Considering these challenges, implementing a ban on AI in work environments becomes an uphill battle. It’s not as simple as flipping a switch or putting up a sign. Instead, it requires navigating a maze of complexity and uncertainty, which is no easy task.
Are you sure? Does training on VA data count as a privacy violation?
No, that was just an example of how such a law may be enforced.
Honestly, we may need to get our lawmakers to expand the definition of “identity theft” to really cover this stuff. A VA’s voice is their innate, distinctive personal signature. It is their livelihood. We’ll have to see what a court says about copyright, but it’s certainly not right for it to just be copied by an AI.
Thing is, the courts, in the US at least, have already made a decision about this.
Bette Midler v. Ford Motor Co. (9th Cir. 1988)
Commercial impersonation of a distinctive voice requires permission from the original artist. AI is no different; it should require permission too.