Cybersecurity

Italian Data Regulator Launches Probe Into OpenAI’s Sora

Company Has 20 Days to Disclose Details on Data Used for Training the AI System

The Sora response to the prompt “instructional cooking session for homemade gnocchi hosted by a grandmother social media influencer set in a rustic Tuscan country kitchen with cinematic lighting.”

The Italian data protection regulator has opened a privacy inquiry into Sora, OpenAI's newly announced text-to-video artificial intelligence model.


The probe, announced Friday, will focus on the data the San Francisco-based company used to train Sora and on its data processing procedures. The inquiry follows another ongoing probe into ChatGPT (see: Italian Regulator Again Finds Privacy Problems in OpenAI).

Sora can generate videos up to a minute long from text prompts. The model is currently in beta, and the company said during the product launch last month that it will be integrated into OpenAI products in the coming months. As part of the rollout, OpenAI CEO Sam Altman posted a video generated from a social media user's suggestion that Sora create an "instructional cooking session for homemade gnocchi hosted by a grandmother social media influencer set in a rustic Tuscan country kitchen with cinematic lighting."

As part of the latest inquiry, OpenAI has 20 days to respond to a number of questions posed by the Italian agency. They include details on how OpenAI trained Sora's algorithms, the types of data the company collected and whether the collected data includes sensitive information such as the political opinions or genetic and health data of Italian and European citizens.

The agency, known as the Garante, asked the company in particular to indicate its procedures for informing users and non-users, as well as the legal bases for processing their data.

The Italian agency in 2023 imposed a temporary ban on OpenAI’s large language model chatbot after it said the company violated the European General Data Protection Regulation. It restored access to the chatbot in April after OpenAI agreed to changes including age verification and an opt-out form to remove personal data from the large language model (see: Italian Privacy Watchdog Imposes ChatGPT Ban).

The Garante later said a review of the changes introduced by OpenAI revealed that the company had continued to violate trading bloc privacy law. In February, OpenAI was given a 30-day deadline to respond to the agency's ChatGPT privacy review (see: Italian Regulator Again Finds Privacy Problems in OpenAI).

The agency did not immediately respond to a request for comment on the status of the previous probe.

In Europe, OpenAI also faces scrutiny in Germany, France, Spain and Poland over its privacy practices. The increased European scrutiny of the AI company comes as the EU prepares to implement the AI Act, a comprehensive regulation governing AI that bans certain applications such as emotion recognition and the scraping of facial data from CCTV footage.

OpenAI did not respond to a request for comment. Amid the increased European scrutiny, the company in December introduced an updated privacy policy that details what data it collects and how it processes that data. Under the revised policy, OpenAI users can object to the processing of their data for direct marketing or legitimate interest.