Generative artificial intelligence (AI) tools like ChatGPT could be too good to be true — literally.
A recent consumer alert from the Federal Trade Commission (FTC) warns that scammers are using AI to replicate voices as part of high-tech phone call scams, while the U.S. Copyright Office continues to have its hands full as generative AI image systems scrape the web for pictures to feed their models, rights-free or not.
The warnings come as President Joe Biden met Tuesday (April 4) with a council of science and technology advisers to discuss the risks, as well as the opportunities, that recent and rapid advancements in AI development may pose for both individual users and national security.
The meeting indicates the level of attention the administration is paying to AI, which burst onto the scene last fall with headline-grabbing capabilities that captured both the public imagination and the attention of businesses increasingly looking to do more with less and eke out efficiencies in today’s challenging economic environment.
Per a report by Reuters, the president used the meeting to reiterate a call for Congress to pass bipartisan privacy legislation that puts limits on the personal data technology companies collect, bans advertising targeted at children, and prioritizes health and safety in product development.
When asked whether he considered AI to be dangerous, the president replied, “It remains to be seen. Could be.”
Hot Topic for Policy Makers
The speed at which AI is radically transforming global economies has regulators scrambling to keep up, let alone contain it.
As ChatGPT’s popularity has exploded, global concerns have been mounting about the AI industry’s lack of regulation, and lawmaker scrutiny around AI technology has increasingly shone a spotlight on an emerging regulation-innovation tug of war.
Just last week, Italy became the first Western nation to ban ChatGPT, the Microsoft-backed OpenAI chatbot, after the nation’s Data Protection Authority announced a probe of the AI solution’s alleged breach of General Data Protection Regulation (GDPR) privacy rules, as well as its age-verification practices.
The Italian regulator claimed that there was “an absence of any legal basis” justifying the massive collection and storage of personal data used to “train” the chatbot, and issued a temporary ban on its use in the country.
ChatGPT is already blocked in a number of countries, including China, Iran, North Korea and Russia.
An OpenAI representative did not immediately reply to PYMNTS’ request for comment.
Italy’s move has inspired other privacy regulators in Europe to take a closer look at ChatGPT and other AI tools.
Regulators in France, Ireland and Germany have reportedly reached out to their Italian peers and have indicated they are considering taking similar steps within their own borders.
As reported by PYMNTS, in the last two weeks alone a notable group of scientists and tech innovators, including OpenAI co-founder Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” reads the letter.
The Center for AI and Digital Policy (CAIDP), whose president is among the signatories of the open letter, has also filed a complaint with the FTC asking it to investigate OpenAI and halt its development of large language models (LLMs) for commercial purposes.
Even the normally pro-business U.S. Chamber of Commerce published a report last month calling for AI regulation, saying that “failure to regulate AI will harm the economy, potentially diminish individual rights, and constrain the development and introduction of beneficial technologies.”
Complicating matters is that many of the tech giants leading the race for AI applications, including Microsoft, Amazon and Google, have all made cuts to their “responsible AI” teams as part of broader headcount reductions.
Somewhat ironically, the Big Tech companies scaling down employee headcount are commonly using the freed-up cash to invest in bigger data centers and more processing power to support their AI initiatives.