The rise of artificial intelligence, explained

OpenAI CEO Sam Altman made what has become a tech CEO rite of passage on Tuesday: he testified before Congress, where the Senate Judiciary Committee’s Privacy, Technology, & the Law subcommittee held a hearing on artificial intelligence oversight.

Unlike some other contentious hearings we’ve seen with major tech company CEOs, Altman faced a largely friendly crowd that wanted to know if and how he thought generative AI technology like his company’s ChatGPT and DALL-E, which can create words and images from text prompts, should be regulated. It was so friendly, in fact, that at one point Sen. John Kennedy (R-LA) asked Altman if he’d be willing to leave OpenAI to head up a hypothetical federal agency that oversaw AI (Altman, whose company is valued at up to $29 billion, declined).

As companies like OpenAI roll their powerful products out to a world that may not be prepared for them, how to keep these technologies safe while also allowing them to develop has become an increasingly pressing question. The US government is now trying to figure out what it can and should do about AI.

At the same time, Altman is making the rounds on the Hill, charming lawmakers with office visits and dinners where he sells the potential of AI while also being sure to present himself and his company as very much open to regulation that will keep the world safe from a technology that could cause great harm if sufficient guardrails aren’t in place. Several lawmakers at the hearing noted that they’ve been late to act, if at all, on emerging technologies in the past, and don’t want to repeat those mistakes when it comes to AI.

“My worst fears are that the field, the technology, the industry, cause significant harm to the world,” Altman said in the hearing. “We want to work with the government to prevent that from happening.”

Congress isn’t the only body thinking about what it can do about AI. A few weeks ago, the Biden administration also welcomed Altman to chat about how to mitigate generative AI’s risks. Biden, who is reportedly “fascinated” by ChatGPT, stopped by the meeting, which was also attended by Satya Nadella, the CEO of Microsoft (which has partnered with and invested billions in OpenAI), Google’s Sundar Pichai, and Anthropic’s Dario Amodei.

AI isn’t new, and neither are attempts to regulate it. But generative AI is a big leap forward, and so are the problems and dangers it could unleash on a world that isn’t ready for it. Those include disinformation spread by convincing deepfakes and misinformation spread by chatbots that “hallucinate,” or make up facts and information. Inherent biases could cause people to be discriminated against. Millions of people might suddenly be put out of work, while intellectual property and privacy rights are bound to be threatened by the technology’s appetite for data to train on. And the computing power needed to support AI technology makes it prohibitively expensive to develop and run, leaving all but a few rich and powerful companies to rule the market.

We’ve been largely relying on these big technology companies to regulate themselves.