What you need to know
- The EU and Google are working together to create a voluntary AI pact.
- The pact comes ahead of much stronger AI guidelines that will apply to both European and non-European countries.
- The European Commission would like the details to be finalized before the year’s end.
It appears as though Google and the EU are putting their heads together on the rules companies must adhere to when building AI technology.
According to Reuters, the European Commission and Google have started to work together to create a voluntary AI pact ahead of some stronger guidelines coming for the technology. EU industry chief Thierry Breton has reportedly been urging EU countries and lawmakers to finalize the details of the European Commission’s AI rules before the year’s end.
The proposed AI pact and, presumably, the forthcoming rules will affect both European and non-European countries moving forward. However, as Reuters reports, neither side has started negotiations to iron out the kinks in the restrictions proposed in response to the rise of AI software.
Breton reportedly met with Alphabet CEO Sundar Pichai in Brussels, Belgium. “Sundar and I agreed that we cannot afford to wait until AI regulation actually becomes applicable, and to work together with all AI developers to already develop an AI pact on a voluntary basis ahead of the legal deadline,” Breton stated.
"Thank you for your time today, Commissioner @ThierryBreton, and for the thoughtful discussion on how we can best work with Europe to support a responsible approach to AI." — Sundar Pichai, May 24, 2023
Not only is the EU trying to get countries and companies within the region to comply, but it's also working alongside the United States. The two are beginning to establish some sort of "minimal standard" for AI technology before any legislation is put forth.
AI chatbots and software are springing up like weeds, creating growing concern among lawmakers and consumers about the speed at which the technology is taking over our lives. Samsung recently had a run-in with OpenAI's ChatGPT after an engineer accidentally submitted sensitive company source code to the AI chatbot. The mishap swiftly prompted a ban, in the name of security, on employees using generative AI software on company-owned devices and on their personal ones if company documents are involved.
Meanwhile, in Canada, federal and provincial privacy authorities have joined forces to launch an investigation into OpenAI and its ChatGPT software. According to CBC, the authorities allege OpenAI collected, used, and disclosed personal data unlawfully. The investigation will seek to determine whether OpenAI obtained users' consent before collecting and using their personal data, and whether there was any malicious intent behind the practice.
Furthermore, Google's I/O 2023 event was packed with the company's new AI efforts for users. One topic the company itself raised was its focus on being more "responsible" with its AI software. Google wanted to address not only the image behind its products but also how it will handle information going forward — especially misinformation.
Google stated that part of its AI development means finding ways to maximize “positive benefits to society while addressing the challenges” in accordance with its AI Principles rooted firmly in responsibility.
(Except for the headline, this story has not been edited by PostX Digital and is published from a syndicated feed.)