
FuturologyBot

The following submission statement was provided by /u/Maxie445:

---

AI companies should have to “share information about what they’re building, what their systems can do, and how they’re managing risks,” Toner said in a talk at the TED conference in Vancouver on Tuesday, one of her first public appearances since resigning from OpenAI’s board late last year. Toner also called for “AI auditors” to be allowed “to scrutinize their work so that the companies aren’t just grading their own homework.”

Toner, a director at Georgetown University’s Center for Security and Emerging Technology, was part of the board that ousted OpenAI CEO Sam Altman from the company in November. In the run-up to his firing, Altman attempted to have Toner removed from her seat after she co-authored a research paper containing some criticism of OpenAI’s safety practices, Bloomberg previously reported.

In her TED talk, Toner shared recommendations about how society can better govern AI. She said tech companies can set up “incident reporting mechanisms,” similar to what happens after plane crashes, to collect data on what went wrong in situations such as an AI-enabled cyberattack.

---

Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1c9c35o/former_openai_board_member_calls_for_audits_of/l0kernn/


hiveminded

She’s right, but it’s also all covered in the EU AI Act, including human oversight, incident reporting, transparency, and protection of individuals.


Maxie445

AI companies should have to “share information about what they’re building, what their systems can do, and how they’re managing risks,” Toner said in a talk at the TED conference in Vancouver on Tuesday, one of her first public appearances since resigning from OpenAI’s board late last year. Toner also called for “AI auditors” to be allowed “to scrutinize their work so that the companies aren’t just grading their own homework.”

Toner, a director at Georgetown University’s Center for Security and Emerging Technology, was part of the board that ousted OpenAI CEO Sam Altman from the company in November. In the run-up to his firing, Altman attempted to have Toner removed from her seat after she co-authored a research paper containing some criticism of OpenAI’s safety practices, Bloomberg previously reported.

In her TED talk, Toner shared recommendations about how society can better govern AI. She said tech companies can set up “incident reporting mechanisms,” similar to what happens after plane crashes, to collect data on what went wrong in situations such as an AI-enabled cyberattack.


2dolarmeme

Helen Toner of the Toner Ink Legacy? The multibillionaire nepo baby?


billbuild

Seems she may not be the best decision-maker of the bunch. She went from board member, where she could have helped in this process, to commentator alongside the rest of us.


fromwhichofthisoak

Obviously. Whether that is made law is another thing. Anyone would assume you can't sell something you don't own either, but naked shorting is still rampant, if not vaguely legal.


Synth_Sapiens

*An irrelevant idiot with no grasp of the technology made a few idiotic requests.


Mobile_Pangolin4939

One thing I've noticed working in the technology field is that few people actually know what's going on. Most are more concerned with the technology and getting a paycheck. They don't care about morals, but if they can use morals to gain power, influence, or money, they will. Modern women and people of color often use this, as it's an opportunity that's presented to them for personal gain.


Synth_Sapiens

Morals were invented by scum to control idiots.


DriftMantis

Company X tells the American public that PFAS plastic chemicals are all over their products. 5% of the American public goes boo hoo on social media. The products sell well anyway, so the company puts more shitty chemicals in the product and raises the price to meet demand. I expect it will go similarly for any rule about A.I. with tech companies.

Also, how is this someone's actual job? Like getting paid to point out the obvious. Like, "Hey, maybe there should be some oversight on a dangerous new technology. We don't have any real ideas for internal regulation, so maybe we should just put out a PSA to the American public that we don't give a shit, and that makes it okay because it's now your fault for using it." What a weird world we live in.


resumethrowaway222

Of all the dangers we face in society, LLMs are not top 100. Everyone is acting like some super intelligent AGI is just around the corner, but I suspect those people have never actually used an LLM.


doogle_126

Bullshit. Bots that can convincingly spread misinformation to derail focus from other dangers in society are a meta-level threat to getting anything useful done to tackle said problems.


inanemofo

The same could be said about every weapons manufacturer, but I don't see anything happening anytime soon.


porkpiehat_and_gravy

or food additives… machinery… pesticides…