Jonah Ramponi (jonah-ramponi)

AI & ML interests: NLP

Organizations: None yet

Posts (3)

From Article 50 of the EU AI Act:

"2. Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated."

How might this be put into practice?

I'm interested in how content might be deemed "detectable" as artificially generated. Would an image need to remain detectable as AI generated even after it has been copied out of the site or application it was created in?
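
If the mark has to travel with the file itself, one naive starting point is metadata. A minimal sketch, assuming Pillow and an invented "ai_generated" key (not part of any standard):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stamp a machine-readable flag into PNG text metadata.
# The keys "ai_generated" / "generator" are invented for illustration.
img = Image.new("RGB", (64, 64))  # stand-in for a generated image

meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")
img.save("generated_marked.png", pnginfo=meta)

# Reading the flag back is trivial...
marked = Image.open("generated_marked.png")
print(marked.text.get("ai_generated"))  # -> "true"

# ...but a screenshot or re-encode produces a file without this metadata,
# which is exactly the "copied out of the application" problem above.
```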

Some sort of watermark? LSB steganography? I wonder if OpenAI is already sneaking something like this into DALL-E images.
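
To make the steganography idea concrete, here's a toy sketch (NumPy + Pillow, with an invented marker string; no claim that OpenAI actually does anything like this) that hides a tag in the least significant bit of each pixel value:

```python
import numpy as np
from PIL import Image

MARKER = "AI-GEN"  # invented marker, purely for illustration

def embed(img: Image.Image, message: str) -> Image.Image:
    """Write the message's bits into the LSB of each pixel channel."""
    bits = "".join(f"{byte:08b}" for byte in message.encode())
    arr = np.array(img)
    flat = arr.flatten()
    if len(bits) > flat.size:
        raise ValueError("image too small for message")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(bit)  # overwrite the LSB
    return Image.fromarray(flat.reshape(arr.shape))

def extract(img: Image.Image, length: int) -> str:
    """Read `length` bytes back out of the LSBs."""
    flat = np.array(img).flatten()
    bits = "".join(str(p & 1) for p in flat[: length * 8])
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode()

img = Image.new("RGB", (32, 32), color=(120, 64, 200))
print(extract(embed(img, MARKER), len(MARKER)))  # -> "AI-GEN"
```

A real watermark would need to survive JPEG compression, resizing and cropping; plain LSB embedding survives none of those.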

Some sort of hash that would allow content to be looked up and verified as AI generated?
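
Roughly like this hypothetical sketch, where the provider registers a digest of every output and anyone can query it (the registry here is just an in-memory set standing in for a public lookup service):

```python
import hashlib

registry: set[str] = set()  # stand-in for a public lookup service

def register(content: bytes) -> str:
    """Provider side: record a digest of each generated output."""
    digest = hashlib.sha256(content).hexdigest()
    registry.add(digest)
    return digest

def is_registered(content: bytes) -> bool:
    """Verifier side: check whether content matches a known output."""
    return hashlib.sha256(content).hexdigest() in registry

output = b"some generated image bytes"
register(output)
print(is_registered(output))         # True
print(is_registered(output + b"!"))  # False: one changed byte breaks it
```

The obvious weakness: any edit at all (re-encoding, cropping, rewording) changes the hash, so this only catches verbatim copies unless you move to something like perceptual hashing.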

Would a pop-up saying "this output was generated with AI" suffice? Any ideas? Time is on system providers' side, at least for now: from what I can see, this doesn't come into effect until August 2026.

src: https://artificialintelligenceact.eu/article/50/
Thought this was an interesting graphic from the EAGLE blog post. It made me wonder if certain sampling methods have been shown to work better for certain tasks.

Does anyone know of any work looking at trends in the output token probability distribution by task type (or anything similar)?

Source: https://sites.google.com/view/eagle-llm
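
In case anyone wants to poke at this: one cheap starting point is to compare the entropy of the next-token distribution across prompts from different task types. A minimal sketch with transformers, using gpt2 purely as a small stand-in model and two made-up prompts:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompts = {
    "code":  "def fibonacci(n):",
    "prose": "Once upon a time, there was",
}

for task, prompt in prompts.items():
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits for the next token
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum()
    print(f"{task}: next-token entropy = {entropy:.2f} nats")
```

My (untested) guess is that code-like prompts give lower-entropy distributions than open-ended prose, which would suggest greedier sampling suits code generation. I'd love to see actual measurements.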

Models: None public yet