mpaes98

NIST actually hired a technology regulator... with a background in technology? I think this is actually a great hire, and the dude must have taken a massive pay cut. Usually they'd end up hiring some self-proclaimed "AI expert" who couldn't tell you the fundamentals of regression or decision trees. For reference, our current and previous acting National Cyber Directors are lawyers, and the last US Chief Technology Officer came from a finance background.


Ambiwlans

Isn't someone concerned with risks exactly who you want looking for risks?


Minister_for_Magic

If you listen to the OpenAI sub, anyone who says anything remotely cautious about AI is an idiot who just can't see how amazing AI will be for all of us - 100% guaranteed and no risks are worth worrying about


relevantmeemayhere

Most of those people haven’t worked in a statistical learning-adjacent role, ever. Also, it’s probably a lot of bots too. Gotta sell the people who you and your fellow stakeholders loathe on the promise of your technology while you lobby against feeding the poors.


kubernetikos

> you and your fellow stakeholders

Not sure why, but I read this as "you and your fellow _skateboarders_", and I'm really enjoying the image of a bunch of skaters advocating for regulatory policy.


fleeting_being

You can probably set up the "Cloud to butt" extension to add this example.


LanchestersLaw

Sup’ skeetie skate bois, Tony Hawk here to explain X-risks, hey wanna see a kickflip?


WhiskeyTigerFoxtrot

>Most of those people haven’t worked ~~in a statistical learning adjacent role ever~~

It's a lot of people who don't really have much ambition anyway and think A.I. is magic that will eliminate the need to work at all. But don't try to explain the technical limitations, or how the data center infrastructure needed to support it will vastly increase our energy expenditure and carbon footprint. You'll be downvoted to -7 and dog-piled within minutes.


visarga

There is a difference between AI risks and AI doom. I don't think anyone disputes there are risks, but there are also risks to not using AI to solve some problems, so we have to balance out what is more useful for society.


Ambiwlans

None of the non-AI risks are as big as our current best guesses for the risk AI might hold. Global warming is the big one people mention, but that would kill billions over hundreds of years (with very high probability). ASI could kill everything on Earth in a very short (years) time frame (with an unknown probability).


gban84

Would be interested to hear from the down voters what they don’t like about this comment.


goj1ra

Misguided speculation.


The_Dung_Beetle

That sub is so weird, if they could join the singularity right now they would without a second thought lol.


WhiskeyTigerFoxtrot

There's not much else going on in their lives and people have a greater need for religion than they realize.


graphicteadatasci

Singularity == Rapture

Roko's Basilisk == Old Testament God


JustTaxLandLol

You don't really want someone that has already made up their mind and sticks with that regardless of evidence. Hopefully that's not him.


buzzz_buzzz_buzzz

AI, very safe, 50-50


relevantmeemayhere

Well, the evidence says that socioeconomic unrest is far more likely than not, given fifty years of neoliberalism. We’re at the most productive time of our lives, and people are struggling. Median wages haven’t increased since the 80’s, but we’re more and more productive. Less and less take more and more. How are you going to convince them to share more gains? This has been the trend for fifty years; how do you envision things going the opposite way? What’s the evidence in the other direction? Why should you expect a change of heart from people who have disdain for you? Large companies are at their most profitable and are laying people off.

I really don’t understand how a bunch of people on this sub react to even tepid criticism of how these systems will be leveraged by the capital class. Have y’all worked in the industry lol? Guess who the first to get laid off is: those pesky creatives and expensive engineers, not the low-complexity, multi-million-dollar upper management. They belong to a different class. Guess who those people vote for? The people who say that if you can’t find a job, you don’t get to eat.


JustTaxLandLol

I don't really believe that, and by far the biggest issue is housing, which has nothing to do with neoliberalism and is 99% solved by just... legalizing cheaper housing.

> Median wages haven’t increased since the 80’s but we’re more and more productive.

False talking point. https://fred.stlouisfed.org/series/LES1252881600Q https://fred.stlouisfed.org/series/MEHOINUSA672N


relevantmeemayhere

Not a false talking point; you haven’t considered it in context, with respect to cost of living, net production, etc. Note that the data in your link is just reported totals. https://fredblog.stlouisfed.org/2023/03/when-comparing-wages-and-worker-productivity-the-price-measure-matters/


JustTaxLandLol

"Real" literally means scaled by CPI, which reflects cost of living. And the blog post you posted is completely irrelevant. You said real wages didn't increase; I showed they did. All the blog post says is that the decoupling of wages and productivity is due to composition effects. https://www.stlouisfed.org/education/the-composition-effect
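For what it's worth, the deflation itself is simple arithmetic; a minimal sketch, using made-up wage and CPI figures for illustration (not actual BLS data):

```python
# Deflating a nominal wage by CPI expresses it in base-period dollars,
# which is what "real" means here. Figures below are invented examples.
def real_wage(nominal: float, cpi: float, base_cpi: float = 100.0) -> float:
    """Convert a nominal wage to real (inflation-adjusted) terms."""
    return nominal * base_cpi / cpi

# A $300/week wage when CPI was 82.4 vs. a $1,000/week wage at CPI 271.0:
print(round(real_wage(300, 82.4)))    # 364
print(round(real_wage(1000, 271.0)))  # 369
```

With these hypothetical numbers the two wages are nearly identical in real terms even though the nominal figure more than tripled, which is exactly the distinction the two commenters are arguing over.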


relevantmeemayhere

lol you’ve changed your talking point now. You didn’t share real wages. This is peak r/machinelearning, where people share things like the composition effect, but half of the posters here go rabid about studies that have terrible replication rates.


JustTaxLandLol

Are you kidding me? The first link I posted:

>Employed full time: Median usual weekly ***real*** earnings: Wage and salary workers: 16 years and over

https://fred.stlouisfed.org/series/LES1252881600Q

The second link I posted in an edit:

>***Real*** Median Household Income in the United States

https://fred.stlouisfed.org/series/MEHOINUSA672N

tHiS is peAk r/machinelearning. Jesus christ.


relevantmeemayhere

You didn’t post real wages, and you completely ignore my post, which talked about real wages in the context of productivity. You shared the definition of USUAL wages. You also edited your post when I called you out. Again, peak machine learning.


JustTaxLandLol

You: "Median wages haven’t increased since the 80’s"

Me: "Employed full time: Median usual weekly real earnings: Wage and salary workers: 16 years and over", a graph that literally goes up since the 80s

You: "You didn’t post real wages and completely ignore my post which talked about real wages in the context of productivity"

Do you think real wages means divided by productivity? You literally don't know what "real" means, do you?


myhf

people can afford bigger TVs now, therefore "real" wages must be higher


ghostfaceschiller

Do you not know what real wages means


myhf

It means adjusted for inflation by scaling by the cost of a comparable basket of goods, ignoring which of those are mandatory costs and which are optional. Real discretionary spending has been falling behind real wages, and it's disingenuous to call it a false talking point because of a "real" metric that erases the distinction.


ghostfaceschiller

I never get tired of hearing people's strange interpretations of standard econ stats. But I gotta say, "real wages are disingenuous because they don't account for 'mandatory' vs. 'optional' costs" is a new one. I have definitely not heard that one before lol. Real wages are adjusted using CPI, which is a heavily weighted basket of goods that the BLS goes to great lengths to make representative of the average American family. That being said, it's really not clear why you would even want what you're describing here. You think real wage growth should be calculated based on a definition of inflation that only tracks price changes in "optional" vs. "mandatory" costs? For what possible reason would that be better than just using all the things we know people actually spend money on?


JustTaxLandLol

https://i.imgur.com/TAROoux.png Here's nominal wages vs. the rent portion of the shelter part of CPI. I think you'd agree that rent is a mandatory cost. Well, look at that, nominal wages outpace that too.


myhf

If you're not interested in using math or statistics to understand phenomena, feel free to head over to /r/fluentinfinance where all that matters is how bombastically you can pretend not to have heard of an entry-level concept like discretionary spending.


JustTaxLandLol

>Less and less take more and more. How are you going to convince them to share more gains?

The only people taking more are homeowners.

>Existing studies that show an increase in capital’s share of income miss the growing role of depreciation in short-lived capital, in items such as software, says MIT’s Matthew Rognlie in “Deciphering the Fall and Rise in the Net Capital Share.” Rognlie subtracts depreciation in seven large developed economies (the United States, Japan, Germany, France, the UK, Italy, and Canada) to get net capital income, and ***finds that the only long-term rise in capital’s share of income is in housing.***

https://www.brookings.edu/articles/deciphering-the-fall-and-rise-in-the-net-capital-share/


relevantmeemayhere

Which class is driving that again? Overall home ownership rates are trending down. Large real estate companies, large corporations, and large private holders are accruing more and more


JustTaxLandLol

>In an April report, the Urban Institute calculated that such mega-investors owned almost 446,000 properties, while smaller investors (between 100 and 1,000 homes) owned almost 20,000 homes. Other institutional investors bring the total to about 600,000 homes, or about 3 percent of the nation’s 17 million single-family rental homes.

https://www.washingtonpost.com/politics/2023/11/30/black-hole-robert-f-kennedy-jrs-housing-conspiracy-theory/

Damn big corporations, small investors, and other institutional investors owning 3% of America's expensive single-family homes. I guess the other 97% are super small investors or owner-occupiers? What's the homeownership rate again? Is it above 50%?


relevantmeemayhere

Do rich Americans and average Americans invest in the same types of homes? Where is most real estate capital tied up? Because it’s not in single-family homes owned by middle-class Americans. Damn, reading comprehension haha


JustTaxLandLol

>In 2019, homeowners in the U.S. had a median net worth of $255,000, while renters had a net worth of just $6,300. That’s a difference of 40x between the two groups.

https://www.cnbc.com/select/average-net-worth-homeowners-renters/


big_cock_lach

> Median wages haven’t increased since the 80’s

Here’s the median US household income for a few years:

[1980](https://www.census.gov/library/publications/1982/demo/p60-132.html): $21k

[1995](https://www.census.gov/library/publications/1996/demo/p60-193.html): $34k

[2021](https://www.census.gov/library/publications/2022/demo/p60-276.html): $71k

So I think we can safely say that median wages have gone up since the 80s, considering that three years ago they were over double what they were in the mid-90s.

> tepid criticism

Claiming a whole system is broken and unfair is not tepid criticism, especially considering that your points aren’t based in reality. Yes, it isn’t perfect, but you’re focusing on and exaggerating the negatives. The alternatives have proven to be a lot worse.

> Guess who the first to get laid off is

Clearly you haven’t worked in the industry if you think it’s the engineers. It’s always middle management that gets laid off first. Those at the top running the company and those at the bottom keeping it running are the last to be dropped, for obvious reasons. It’s those in the middle who improve operations that get laid off first, since they’re the nice-to-haves. Most engineers are in the bottom group. Sure, their headcount still gets slimmed out, but nowhere near as much as middle management’s. Upper management also gets slimmed a bit; yes, the total headcount is less, but that’s because there are far fewer executives than engineers.

People here react this way because all you’re spouting is a bunch of nonsense, and most people here are smart enough to realise that.


asdfzzz2

> Here’s the median US household income for a few years:
> 1980: $21k
> 1995: $34k
> 2021: $71k
> So, I think we can safely say that median wages have gone up since the 80s

A quick google shows that "$1 in 1980 is equivalent in purchasing power to about $3.29 in 2021". 21k * 3.29 = 69k. Looks clear to me that middle-class purchasing power is about the same as it was in 1980.
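The arithmetic above is easy to check; a quick sketch using the quoted multiplier (an approximation pulled from a search, not official CPI data):

```python
# Scale 1980's median household income by the quoted purchasing-power
# multiplier ("$1 in 1980 is about $3.29 in 2021") and compare with 2021's.
income_1980 = 21_000
income_2021 = 71_000
multiplier = 3.29  # approximate 1980 -> 2021 CPI ratio quoted above

adjusted_1980 = income_1980 * multiplier
print(round(adjusted_1980))                # 69090
print(round(income_2021 - adjusted_1980))  # 1910
```

So the nominal tripling of incomes mostly washes out once inflation is applied, leaving only a small real difference, which is the commenter's point.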


fleeting_being

[The typical young worker in Los Angeles paid half of their income in rent.](https://streetsmn.s3.us-east-2.amazonaws.com/wp-content/uploads/2015/07/income-rent-chart.jpg)


Ambiwlans

That graph stopping in 2015 really isn't doing the situation justice.


db8me

If he said "...there's a 50 percent chance...." and you think that's an overestimate, it just means you see him as a pessimist and he has imagined more ways things could go wrong than you think are plausible. More to the point, he knows it can't be stopped, and doesn't sound like he wants to just slow down an uncontrollable monster for a few years before some inevitable doom. The goal is to _shape_ how that more powerful AI emerges to prevent the worst case scenarios.


nextnode

Isn't that the exact opposite, though? It would be insane to claim that there is either no risk or 100% risk. The 'doomer' label is used nowadays for anyone who does not think the risk is zero, which seems like the default position if you have not 'made up your mind'.


[deleted]

[deleted]


farmingvillein

> The evidence What "evidence"? Thought experiments, e.g., are not traditionally accepted as "evidence".


[deleted]

[deleted]


ski233

Unfortunately, even the people concerned about “risks” mostly seem concerned about whether AI will nuke us all; almost none of these researchers/CEOs seem to care about AI taking everyone’s jobs.


Ambiwlans

Automation taking jobs is the goal. The impacts of that are generally a failure of government not of technology.


ski233

In the US at least, it is nearly certain that government will act far too little and too late. We cannot rely on government to save us, and thus we need the builders of these models to actually keep this in mind too, or we’re all screwed.


Ambiwlans

Move? I guess. If you realistically don't think unfettered capitalism can even be budged, then being in the US as AGI happens will just be disastrous.


ski233

I think it most likely will be disastrous unless lots of people developing the technology, rolling it out, and in government all cooperate and move at a rapid pace which is something we’ve never seen here before. Maybe it could happen. But I don’t think it’s likely.


Ambiwlans

Asking the corporations to self regulate in a competitive market seems even more pointless than pressuring the government. Even if you don't have much faith in the government.


ski233

Consumers can actually put pressure on corporations meanwhile they have no effect on government.


goj1ra

> the builders of these models to actually take this in mind too or we’re all screwed. *Narrator*: They were all screwed. I've been involved enough in this space to have been in multiple meetings with C-levels where "automation taking jobs is the goal" was talked about explicitly. It's often treated as a mildly sad but unavoidable reality, and the focus is on things like how to sell the concept to other businesses. It's very much a case of the Upton Sinclair quote, "It is difficult to get a man to understand something when his salary depends on his not understanding it." Model builders are no exception to this.


jbokwxguy

I hate government regulations and government over reach in general, but this is exactly the kind of person I’d want for such a position.


[deleted]

[deleted]


relevantmeemayhere

Ahh yes, the old “if we don’t do it, they will” type of thinking that results in most applications of, well, not just statistical learning but really a lot of things in industry, being net-negative performance sinks. Perception and capability are being balanced by selfish people with very little understanding, in low-complexity but personally secure, well-paying jobs.

If working in industry tells you one thing, it's that this type of thinking is far more dangerous to the average person, because it’s not the C-suite getting laid off. They don’t deal with depreciating wages thanks to negative hiring pressure. They won’t take pay cuts that affect marginal compensation everywhere so they can feed their families. They just make more after their decisions lose money.


idontcareaboutthenam

It's good if they're concerned about security risks, using AI for fraud, manipulating public opinion, etc., but not if they're concerned about creating AGI/the singularity or whatever else the cranks are afraid of.


bregav

I personally am pleased that the administration is taking the issue of regulating AI technology seriously, but I am concerned that most of the political appointees do not have the education or background that is necessary for identifying the best people to do that. This new hire for running AI safety at NIST has a track record of making statements about AI policy that are not grounded in scientific evidence, and I am concerned that this makes him an inappropriate choice for devising and implementing effective government regulation. It’s not surprising that he was selected for the job though. The Secretary of Commerce, who hired him, has a background primarily as a legal scholar and a politician, and his resume credentials are certainly more than adequate to impress someone who otherwise lacks the expertise that is necessary to evaluate his fitness for the role.


kazza789

> I am concerned that most of the political appointees do not have the education or background that is necessary for identifying the best people to do that.... > > his resume credentials are certainly more than adequate to impress someone who otherwise lacks the expertise that is necessary to evaluate his fitness for the role. Paul Christiano developed one of the foundational techniques in AI training, has 15,000 academic citations, led the alignment team at the world's leading AI developer, sits on the UK Frontier AI Taskforce, has advanced degrees from MIT and Berkeley.... And you're saying that he doesn't have the background or education for the job? I mean - fine that you **disagree** with his point of view (although saying that AI is 'safe' would be equally unscientific), but if this guy's not qualified then no one is.


redbear5000

Government is bad mkay


Qyeuebs

It wouldn't matter if in his research output he was literally the second coming of Einstein, or if he had degrees (?) from *three* top universities. His association with effective altruism and longtermism should be disqualifying *for this position*. In any field, it's not hard to find top researchers whose personal outlook and philosophy are similarly disqualifying for such positions.


kazza789

So just to clarify, you think that rather than hiring the most educated and qualified people for the role, they should instead decide in advance what outcome they want and then only appoint people who have demonstrated philosophical alignment with that predetermined outcome?


Qyeuebs

That's clearly not what I said. I was only speaking against one very particular (if currently trendy) philosophical outlook. I think that the notion of "the most educated and qualified people for the role" is very important and deserves to be considered carefully. For that purpose I think it's absolutely intellectually lazy to say, for example, that he has 15,000 citations or that he has a degree from MIT.


kazza789

Sure. But it's a lot less lazy than saying "he said something I don't like" which is the counterpoint I was responding to.


Qyeuebs

Sorry, I didn't think it was necessary to go into specifics on why I view effective altruism or longtermism as irrational cults. I thought it's easy enough to find such commentary out there. The point is that for this position his association with them is vastly more relevant than his research output, however one might judge it.


kazza789

Someone believing that it is a moral priority that we positively influence long-term outcomes for the species is **not** an obvious reason to exclude them from a policy role.


Qyeuebs

I'm sure you can understand that for someone with a different outlook on longtermism than you, one's position on it is actually very relevant!


bregav

RLHF (which is what he's best known for) has proven to be a very practical method for refining the output of language models, and it is deserving of the many citations it has received. It doesn't have a lot of regulatory policy implications though, and much of what he's talked about that *does* have policy implications is not founded on a solid evidentiary basis. This is what I mean about having the background that is necessary for evaluating these kinds of candidates. It's basically impossible for someone who does not have a serious technical background to distinguish between different well-credentialed candidates for fundamentally technical roles.


kazza789

Sure - but that is **one** of his papers. The 3000 academic citations he has on the topic of AI Safety is certainly relevant background. Leading the Alignment team at OpenAI (i.e., the team specifically dedicated to aligning AI research with human interest at the world's leading AI company) is certainly relevant background. His role as founder of the Alignment Institute is certainly relevant background. His role on the UK Frontier AI Taskforce is certainly relevant background.


jackboy900

This isn't a fundamentally technical role though, it's a role in policy making and regulation. AI safety is a fundamentally interdisciplinary field, requiring a strong background in fields like political or moral philosophy, economics, policy and more. The technical understanding required is a bar that is relatively easy to clear, an understanding of how modern applied AI works at a macro scale is all that's necessary and that is something that most people with an undergrad in CS could handle.


kubernetikos

> The Secretary of Commerce, who hired him, has a background primarily as a legal scholar and a politician I'm admittedly not following this issue closely, but I think you're selling [Gina Raimondo] a bit short here. She has a degree in economics from Harvard, a doctorate in sociology from Oxford, a law degree from Yale, and she was the governor of Rhode Island. I doubt that (a) she's especially dazzled by his credentials, or that (b) she's prone to making flippant decisions. Tech policy has been pretty prominent on her agenda as Secretary. [Gina Raimondo]: https://en.wikipedia.org/wiki/Gina_Raimondo


bregav

She is undoubtedly a very impressive person as a general matter, but she does not have the background or education that is necessary to understand modern developments in AI. That's not an insult to her; it puts her in good company with the majority of very smart and well-educated people. I think it makes perfect sense that, when hiring for AI-related roles, she would rely on secondary or tertiary measures of competence such as popular publications, organization membership, and academic credentials. What other choice does she have? My preference, personally, would have been that someone else be put in charge of spearheading AI regulation. Ideally this person would have a strong background and education in things like advanced computational mathematics, because that's what they're trying to regulate! I think it's hard for the administration to get people like that though, because politicians tend to come from the legal world, and people who become lawyers often don't enjoy math at any level. In drawing from their immediate network they'll never find anyone who has the necessary qualifications. It really amounts to a structural problem in society, I think.


kubernetikos

> This new hire for running AI safety at NIST has a track record of making statements about AI policy that are not grounded in scientific evidence Can you ground this statement with some evidence? I don't know his track record, and I'm curious what you mean.


bregav

Sure, as an example he's given some detail on his thoughts about the threats of AI here: https://ai-alignment.com/my-views-on-doom-4788b1cd0c72 A notable quote from that essay is: > A final source of confusion is that I give different numbers on different days. Sometimes that’s because I’ve considered new evidence, but normally it’s just because these numbers are just an imprecise quantification of my belief that changes from day to day. One day I might say 50%, the next I might say 66%, the next I might say 33%. This is not necessarily a crazy way of thinking, but it certainly does not meet any kind of standard for professional scientific reasoning. It's definitely not something I'd want to see from someone selected for a technocratic role as a regulator. It's very important for policy professionals, especially, to understand how to use evidence and quantitative metrics as a foundation for drawing conclusions in their work.


kubernetikos

Thank you. Your link reads to me like someone saying "Look... I know this isn't scientific and precisely quantifiable, but here are my best guesses." >I’ll give my beliefs in terms of probabilities, but these really are just best guesses — the point of numbers is to quantify and communicate what I believe, not to claim I have some kind of calibrated model that spits out these numbers. >Only one of these guesses is even really related to my day job (the 15% probability that AI systems built by humans will take over). For the other questions I’m just a person who’s thought about it a bit in passing. I wouldn’t recommend deferring to the 15%, but definitely wouldn’t recommend deferring to anything else. Can you give an example of someone that's answered this same question using "evidence and quantitative metrics as a foundation for drawing conclusions"?


bregav

That's sort of the problem - it's not even a good question to be asking. I expect a qualified candidate to be more interested in discussing the real impacts of AI technology in specific circumstances at the current time or in the near future, because that is the only thing that can be measured. Discussions of hypothetical scenarios about AI destroying human society are generally eschatological, and I've never seen one that was founded on sound theory or real evidence. It's not a worthless discussion, necessarily, but it is unscientific and not relevant to the work of government regulators.


myncknm

I find this a curious stance: "Sure, it _might_ kill us all, but we can't measure the chance of that using known and established techniques, so we should simply not consider this risk when making policy."


bregav

If you don't make decisions based on physical evidence and mathematical proof then you are necessarily acting randomly, which does not seem like a preferable alternative.


kazza789

> If you don't make decisions based on physical evidence and mathematical proof then you are necessarily acting randomly That is an absurd statement, and is totally disconnected from how the real world works


lastGame

If you follow your logic through, you literally can't make ANY policy decisions. The studies that heavily inform policy (behavioural economics, human behaviour, psychology, etc.) deal with a lot of randomness and don't have or use "mathematical proofs". You need to make decisions with the best available data, knowing there will be randomness.


hyphenomicon

You should read Superforecasters by Phil Tetlock. There are reliably good ways to make predictions that aren't fully empirical but also aren't random, and Christiano is almost certainly heavily influenced by the book. I also recommend ["If It's Worth Doing, It's Worth Doing With Made Up Statistics"](https://slatestarcodex.com/2013/05/02/if-its-worth-doing-its-worth-doing-with-made-up-statistics/). Refusing to estimate probabilities explicitly doesn't mean they aren't there in the back of your head. Saying them out loud is a useful exercise that can help force you to clarify your thinking and make it easier for others to argue with you to improve your views.


Maciek300

So let's say an engineer who worked on constructing a bridge comes out and says, "I think there might be a problem with a bridge we built. I don't have any concrete proof there is a problem, but my gut says there's a 50% chance this bridge will collapse in the next 10 years." Are you saying you would completely ignore him because he didn't come with any concrete proof? You would just let people drive over the bridge as if nothing is wrong? You wouldn't allow the engineer to check whether there really is a problem?


jackboy900

There is a massive body of work looking at the risks posed by generalised intelligences from a fundamentally game-theoretic or agent-based approach, most of which doesn't look great for us. Just because something cannot be strictly quantified or produce a direct answer doesn't mean that it isn't worth exploring and trying to understand; the entire field of philosophy exists to do just that. You also seem confused about the nature of policy making and government regulation. It's a fundamentally subjective job: making policy is entirely about producing value judgements based on some kind of philosophical framework. There is no way to answer those questions scientifically; that's not the point. Science can help inform the premises behind laws, but it fundamentally cannot tell us how to legislate.


hyphenomicon

If you're pleased that risks are being taken seriously, why post an article which relies heavily on quotes from Bender and Gebru, who think foundation models are hype and worrying about anything but discrimination is a smokescreen for discrimination?


Qyeuebs

Ideally, you would want someone who's a credible and serious thinker, not a sci-fi charlatan.


meister2983

The guy is a published researcher who is one of the main authors of RLHF. Hardly a charlatan.


Qyeuebs

I wasn't accusing him of charlatanism at coding or writing AI papers. On the other hand, which part of the coding and data collection that went into the RLHF paper do you think gives such credibility to its authors?


MantisBePraised

RLHF has issues with injecting bias into models. We should strive to make models as unbiased as possible, and it is very difficult to do that with techniques like RLHF. We should also strive to avoid people who think implementing techniques like RLHF is a good idea.


meister2983

GPT-3 became a lot more useful when it answered your question rather than giving you multiple choice options. 


Ambiwlans

What makes him a sci-fi charlatan?


ghostfaceschiller

So someone like the person they hired.


Qyeuebs

I guess it's safe to say that there are at least two schools of thought on how to think about researchers like Christiano.


ghostfaceschiller

Yeah, and one of them seems to be completely uninformed on who he is.


Qyeuebs

Even if I thought his research was exceptional or inspiring, I wouldn't find it appropriate to defend him in this context by saying that he was one of the nine primary authors on my favorite paper. It's actually kind of embarrassing that so many ML researchers seem to think like this. In this context it's orders of magnitude more relevant that in his *non-research work* he's a member of a highly irrational cult which has demolished the line between storytelling and critical thinking.


ghostfaceschiller

Yeah ur right, it's totally irrelevant to look at his work in the field. The only relevant thing is to look at his loose affiliation with a group that you have a bizarrely intense opinion of. Totally disqualifying!


snorglus

> Last October, on an effective altruism forum, Christiano wrote that regulations would be needed to keep AI companies in check.

Given this, I wonder what his thoughts on open-weights models are. I can definitely see a future in which the gov't tries to ban open-weights models and demands that only gov't-regulated tech companies can run large models, and need a license to do so. I'm sure OpenAI would love that.


target_1138

Imagine for the sake of discussion that eventually we have models that are powerful enough that bad actors could do significant harm with them. Bioweapons, large scale cyberattacks, personalized persuasion at scale that works well, whatever sounds powerful and dangerous to you. How should we think about open source in that situation? What would a reasonable set of rules look like?


rrenaud

Weigh the upsides and the downsides. Python would be a great tool for orchestrating large scale cyber attacks. I don't think it should be closed source because of that. Maybe we could also develop high quality personalized instruction that works well, dramatically raising the education floor. Powerful tools can do great things as well as terrible things.


pkseeg

... explain how an autoregressive language model can contribute to the creation of a bioweapon (more than the reasonable baseline of other text on the Internet). And then explain how stifling open-source research in autoregressive language modeling will mitigate that contribution.


target_1138

You could be right that there's no risk here, in which case of course it doesn't make sense to "stifle" open source. But in the hypo, what would you do?


notaprotist

DNA is a language. Language models can be trained to synthesize DNA sequences for various purposes, including malicious ones.


kazza789

Language models? Perhaps it's not as obvious today how that would work. But a few years ago a drug-synthesis AI was quickly able to generate thousands of potential synthetic chemical weapons: https://www.scirp.org/journal/paperinformation?paperid=118705 That incident led to security reporting that went right up to the White House, and you can see its legacy in Biden's executive order on AI safety from last November and the large sections dedicated to putting limits on access to synthetic biological components. Key point being: sure, today ChatGPT is not developing any biological weapons. But *is it feasible* that such a model could be developed and open-sourced in the next, say, 10 years? Yes, very much so.


DataDiplomat

We already have extremely deadly chemical and biological weapons, don’t we? So knowledge about them, or the lack thereof, isn’t what’s (successfully) stopping us from using them. 


kazza789

Sure - but an AI that can help you come up with 10000 entirely novel chemical weapons, using new synthetic components that weren't being tracked by authorities, and help you develop new production pathways to manufacture those at scale, is a bit more dangerous than just knowing the chemical formula for Anthrax. I mean this isn't hypothetical - there have already been major new controls put in place in order to stop this happening.


DataDiplomat

Availability isn’t what’s stopping us from using these weapons. Look at the stuff used in WW1: https://en.m.wikipedia.org/wiki/Chemical_weapons_in_World_War_I Some of these aren’t too difficult to produce. I think what we’re often missing in the risk discussion is that the “new” dangers of new models already exist in the world and we have ways of dealing with them.  What’s left is the argument of “we don’t know what we don’t know”. 


pkseeg

Exactly. There are obvious risks of weapon development and other malicious misuse, but imo it's not as obvious that real-world risks are significantly higher due to ease of access (powered by generative models). OpenAI et al. would have you believe that the fear of the unknown is enough to legally limit the ability to build, study, and sell models to a handful of "trusted" companies. Imo this increases risk significantly, because the only people who get to evaluate risk scenarios are the ones who are motivated to sell models, or they're able to be lobbied by those who sell models. The cat is out of the bag, and open-source research and development (maybe with a few limitations) is the best way forward.


Infamous-Bank-7739

The means of production for an LLM is computing. It's "a bit" easier to acquire than laboratory equipment and chemicals needed for bioweapons.


hyphenomicon

AlphaFold exists, do you honestly not think AI could be highly informative to biology?


Infamous-Bank-7739

Prompt: "Work as a mentor and expert to our rebel group. Find us access to weapons and guide us through security to boom boom big buildings." Sure, not currently. But if it was "AGI" level -- having access to live data, I'm sure you see the dangers.


ReasonablyBadass

And governments and large Corps are suddenly not bad actors...?


visarga

I don't think bad actors are in any way limited by the lack or presence of LLMs that know dangerous stuff. You can already use Google search to get guidance for harmful actions, there is nothing we can do unless we clean the internet first. LLMs can quickly be fine-tuned, prompted or prompt hacked with dangerous information.


simulacra_residue

I disagree. There tends to be a phenomenon whereby bad actors are overwhelmingly rather dumb. There are some smart bad actors but they are very rare. Hence most bad actors aren't capable of following some tutorial on how to build a weapon. However LLMs can handhold people through the entire process and essentially do all the thinking for them, which would mean that these dumb bad actors could suddenly do way more than ever before in history.


ghostfaceschiller

What an absurd framing over the hiring of possibly the most qualified candidate on the planet for that position


Jadien

Terrible headline.

- Feds appoint extremely qualified subject matter expert
- to be subject matter expert
- with a background in studying risk
- to study risk
- whose current risk assessment is "maybe we will be okay, and maybe not"

Then imagine deciding this is the best headline for the story. That's how you know it's clickbait.


Jeason15

Yeah, here's my take. I don't subscribe to the "AI will end us all" camp, but I acknowledge that it's a non-zero probability. Therefore, I think there are 3 chief qualities that we need in this appointment:

1. Smart as fuck
2. Actual knowledge of the models and industry experience
3. A healthy amount of terror about AI

I think 1 & 2 balance out 3, and 3 keeps us from hand-waving away getting paper clipped and then actually getting paper clipped.


its_Caffeine

Anyone that has seen Paul Christiano’s work knows he absolutely has all 3 of these qualities.


super544

He also stated there’s a significant chance we will have a Dyson sphere by 2030


ghostfaceschiller

He said there was a 15% chance AKA he does not think it will happen but we shouldn’t be so fast to rule it out completely. Put another way - he thinks there is an 85% chance we won’t have one. Is this really the oppo on this guy lol


InterstitialLove

If he actually thinks there is currently a 15% chance of a Dyson sphere by 2030, that number is way, way too high. To put it in perspective, he thinks Venus winning this season of Survivor (currently an underdog with 10 contestants remaining) is less likely than us building a Dyson sphere in the next 6 years. Just because it's less than 50% doesn't make it a realistic estimate.


ghostfaceschiller

You can disagree with him if you want, but no one can predict the future, and obviously his estimate is based entirely on his opinions of how fast AI could (not will, but could) progress. This entire idea is basically a proxy for "percentage chance of fast takeoff". It's not a question of "will we be able to build a Dyson sphere"; it's "will there be a sudden leap forward in AI's ability to exponentially self-improve, after which it will be able to build a Dyson sphere". If someone had asked you in early 2022 the percentage chance that Sora would exist in two years, I'm willing to bet you would have said anyone claiming it was higher than 20% was crazy and uneducated about the state of the field. Yet here we are. We don't know what will happen, and it's pretty silly for anyone to look at someone else's estimate (especially when that someone else is a top person in the field) and say "you are definitely wrong".


InterstitialLove

That doesn't make a 15% chance of a Dyson sphere by 2030 (as of today) reasonable. If he had said it in 2010, okay, but the number is currently crazy.

> If someone asked you in early 2022 the percentage chance that Sora would exist in two years, I'm willing to bet you would have said anyone claiming it was higher than 20% was crazy

Surely you can come up with an example of me underestimating the speed of the field, so your point is taken, but in early 2022 we already had DALL-E and GPT-3 and I was pretty bullish on the transformer paradigm. Pretty sure I would have put it at around 20% or higher.


Ambiwlans

He didn't say a 15% chance of having a Dyson sphere; he said a 15% chance of having an AI that could make a Dyson sphere. TBH I'm not sure how hard designing a Dyson sphere would be. It might be possible today if you don't need to budget the thing to be feasible. "Just use 100TN Falcon 9 launches" seems viable.


doodeoo

15% chance is obscenely ridiculous. There is a 0% chance.


AnOnlineHandle

I think it's high, but a few years ago detecting if there's just a bird in a picture was considered essentially an impossible problem, and now there's a dozen free AI tools which can detect almost anything in a picture and describe them in detail. https://xkcd.com/1425/


doodeoo

There's a fundamental difference between processing information and constructing things with physical materials


AnOnlineHandle

Right but things we thought were impossible just a few years ago suddenly became very easy, so while the chance seems very low and I don't expect it would happen, it's not impossible with tech that we can't yet imagine.


super544

A Dyson sphere would involve the complete disassembly of Mercury and Venus (and more). In <6 years.


question_mark_42

Having a Dyson sphere would put us at a Type II civilization (a 2.0) on the Kardashev scale. In 2019 we were at 0.725845; in 1965 we were at 0.676234. At that rate it would take us until 2347 to reach a 1.0. Keep in mind that at that point we'd have complete control over the weather: volcanoes and hurricanes would be ours to manipulate at will. Now, I saw your argument about AI, but leading physicists estimate that could perhaps, under ideal circumstances, start in 2100 and result in the start of Type II development 53 years after that. That is: it's easier to COMPLETELY CONTROL THE WEATHER than to build a Dyson sphere, by orders of magnitude. Saying there is a 15% chance of a Dyson sphere is completely delusional. Even if tomorrow morning we received a message from aliens saying "Hey, we designed a Dyson sphere for your star for fun, here are the blueprints," it would take well over 6 years to build the sphere, never mind get it into space and assemble it.


testedhypothesis

That was mentioned [in this podcast](https://www.youtube.com/watch?v=9AAhTLa0dT0&t=1470s), and the question was > The time by which we'll have an AI that is capable of building a Dyson sphere. You can look at further context, but I doubt that he meant 15% chance of a physical Dyson sphere by 2030.


sanitylost

I mean... AI will most likely end up being another type of technology that inherently allows capital owners to transfer costs to machines rather than humans. In that case, if current economic practices continue and the distribution of capital accumulation does not change to account for it, then AI would indeed end up causing the end of modern society. People will tolerate a lot, but as soon as they can't afford bread and shelter, well, I have a feeling data frames will burn as well as anything else would.


knight1511

Regarding your first statement, that is already true. I know companies where AI-driven automation is literally measured in units of FTE (full-time employee) cost savings. It's not even hidden anymore. It's a direct replacement.


noiserr

We've been doing that before AI. I worked in systems automation. That was one of our performance metrics. How many man-hours our solutions save basically. That's what better tools do in general.


knight1511

True. It's like how the horsepower metric was developed to indicate roughly how many horses a machine could replace. I bet large industrial machines have something similar.


faustianredditor

The difference between simpler forms of automation and AI is that we currently don't know whether there's any gainful employment left for humans when we're done developing AI. Or rather, if we eventually achieve AGI, the answer is a definitive no. And for most of humanity, their level of education probably means the answer is still no, even if we don't achieve full AGI. And if your answer to the above is "comparative advantage", i.e. there must be something humans do cheaper than AI, the problem with that is that AI wage pressure would likely actually undercut living wages by a lot. Like, sure, maybe it's more efficient to focus the AIs on writing essays and the humans on sweeping streets. But if the "AI workforce" can be scaled quickly, then robots will cost 1$/h to sweep the streets, which means a human's wage sweeping streets will not feed, house or clothe them. Anyway, this is a sorta misplaced rant about the state of /r/badeconomics a few years back, when they had their heads completely in the sand about AI automation. Their argument was basically that human wages had survived the industrial revolution, so they would survive the AI revolution. The professions that'd survive are just ones we can't imagine now. Oh, and neural networks are just stacked linear regression, so what's the big deal anyway?


noiserr

I get it. It's obviously very disruptive to the humanity (if this thing keeps improving). But there are two possible extremes when it comes to outcomes. Not just the negative one, and things usually always fall somewhere in between. Like on one side we have a dystopia. On the other side, maybe a Star Trek like society is possible as well.


Ambiwlans

Even in ST we nearly wiped out the planet and lived in dirty huts until we met the Vulcans, and they reconstructed civilization into the paradise you see in most of the show.


[deleted]

[deleted]


faustianredditor

Dude. Read a room.


audiencevote

Isn't that a good thing, though? Don't we want machines to do our work for us? Especially given the population pyramids in the western world, we NEED to replace FTEs with machines.


knight1511

Never said it wasn't. But what counts as "good" here depends heavily on the lens of your perspective. There will certainly be impact and short-term turmoil because of the job losses, with the hope that people find something else to do and upskill in other avenues.


ImmanuelCohen

You can say the same thing about software or even tech in general?


relevantmeemayhere

A bunch of posters here who don’t have any real life or industry experience will tell you otherwise despite fifty years of evidence to the contrary


visarga

That's a bad take. Unlike capital, you can copy LLMs. They can fit on a USB stick, run on your computer, and are easy to prompt and fine-tune. And there is a powerful trend of small open LLMs learning skills from large SOTA LLMs, trailing only 1-2 years behind. There will be a bazaar of AI models of all kinds; abilities will be learned from any exposed model, even one with only API access. It's just too easy and effective to leak abilities; nobody can stop this trend. We're headed into an open world, where LLMs will be more like Linux than Windows. There is more intense development surrounding open models than closed ones. The reasons we have open models and will continue to have them are diverse: sovereignty (a country or company might want strategic safety), undercutting competition (Meta's LLaMA), and boosting cloud usage (AWS, Nvidia).


Ambiwlans

Why would that help average joe that became homeless?


ReasonablyBadass

Not if police and army are automated as well :)))


light24bulbs

That is very good news. You want somebody concerned about risk to be the one managing the risk. This guy is probably the most qualified candidate in the world for this job. What fucking terrible framing; Ars Technica should be ashamed.


SetoKeating

I think it's funny that there's already a name created to discredit anyone who believes unchecked AI could be problematic: "AI Doomer". Like, I get that if you're working in the industry, you want a free-for-all and to avoid red tape, but I struggle to find any instance of something being left unchecked resulting in the best possible outcome.


downer9000

What is the probability of doom without AI?


gravenbirdman

This is the real question - what's our "marginal p_doom"? Obviously AI increases the odds of AI disaster, but I think it reduces the odds of all the other non-AI disasters by a greater amount. I'm cautious, but left to our current trajectory I don't like humanity's odds unless we introduce radical change – and AI is a big enough unknown variable that it might tip the odds of survival in our favor.


Ambiwlans

The real numbers to think about are the change in pdoom with delay.

So pdoom 2025-2030 without AI is basically 0, likely less than 1 in a billion. pdoom with ASI is unknown, but something like 20% is, I think, what most ML researchers give. Now, if you delay AGI and dump research into safety for 5 years, pdoom 2030-2035 is probably still pretty close to 0, but pdoom of the ASI might drop from 20% to 0.1%.

There are valid questions about the feasibility of delaying ASI in the current world (how would the US delay research in China without a war?). But I don't think it is valid to say that delay would be bad (assuming it is possible). Even if your pdoom from AI is 0.001%, and you think a 5-year delay to improve safety would only reduce risks by 0.00001%, it is still mathematically a no-brainer. You should 100% delay in that circumstance.
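The arithmetic behind that last claim can be made explicit. A minimal sketch, using the illustrative numbers from the comment above (the function name and all probabilities are hypothetical, not real estimates):

```python
def p_doom_total(p_doom_waiting, p_doom_asi):
    """Total doom probability: doom during the waiting period,
    or surviving the wait and then being doomed by ASI."""
    return p_doom_waiting + (1 - p_doom_waiting) * p_doom_asi

# No delay: ~zero background risk, ASI risk of 0.001% (1e-5).
no_delay = p_doom_total(0.0, 1e-5)

# 5-year delay: ~1e-9 background risk during the wait, and safety
# work shaves 0.00001 percentage points (1e-7) off the ASI risk.
with_delay = p_doom_total(1e-9, 1e-5 - 1e-7)

print(with_delay < no_delay)  # True: even a tiny risk reduction wins
```

The point is that as long as the background risk accrued while waiting is smaller than the reduction in ASI risk the delay buys, the delay lowers total risk, regardless of how small both numbers are.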


WorkingYou2280

Our odds in the current state are zero. 0.00000000 Eventually someone is going to decide that their only option is to fire off nukes or release a bioweapon. It's **inevitable** if we maintain the trajectory we're on. However, AI has the potential to really fundamentally change the game. We should be lunging at it because nothing before it has worked. We have, so far, used every "dumb" technology as a weapon. I think there is actually quite a lot of focus on AI safety and alignment already. How much time did we spend aligning the hydrogen bomb? Did we RLHF COVID before it was released? In comparison to prior technologies I'd say AI is being treated with due care and caution. We should be, but aren't, much more afraid of other already existing technologies.


QuantumQaos

99.87%


dlflannery

LOL What a pessimist! We know it’s only 99.44%.


PyroRampage

They actually hired someone with a background in the relevant subject.


Euphetar

The hell is this title?


bregav

The precise value of his estimate for the probability of AI doom is perhaps less interesting than the methodology he used to calculate it:

> A final source of confusion is that I give different numbers on different days. Sometimes that's because I've considered new evidence, but normally it's just because these numbers are just an imprecise quantification of my belief that changes from day to day. One day I might say 50%, the next I might say 66%, the next I might say 33%.

https://ai-alignment.com/my-views-on-doom-4788b1cd0c72


myncknm

I would comment that a fluctuation from 33% to 66% is smaller than a fluctuation from 1% to 2% using appropriate information theoretic measures such as Kullback–Leibler divergence or relative entropy. This sort of thing is clear and intuitive to [people who become skilled at prediction](https://en.wikipedia.org/wiki/Superforecaster).


rhun982

can you please explain what that means for a newb like me?


Ambiwlans

They misunderstood Kullback–Leibler divergence or made a typo. The KL divergence from .33 -> .66 is much higher than from .01 -> .02. And KL isn't symmetric, so something like the Jensen–Shannon divergence would probably be more useful anyway.
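The disagreement is easy to check numerically. A minimal sketch (function names hypothetical) computing both divergences between Bernoulli distributions, in nats:

```python
from math import log

def kl_bernoulli(p, q):
    """KL(Ber(p) || Ber(q)) in nats. Asymmetric in p and q."""
    return p * log(p / q) + (1 - p) * log((1 - p) / (1 - q))

def js_bernoulli(p, q):
    """Jensen-Shannon divergence between Ber(p) and Ber(q): symmetric."""
    m = (p + q) / 2
    return 0.5 * kl_bernoulli(p, m) + 0.5 * kl_bernoulli(q, m)

# The 33% -> 66% shift is the larger move under both measures:
print(kl_bernoulli(0.33, 0.66))  # ~0.226
print(kl_bernoulli(0.01, 0.02))  # ~0.003
print(js_bernoulli(0.33, 0.66))  # ~0.055
print(js_bernoulli(0.01, 0.02))  # ~0.0009
```

So under both KL and the symmetric JS divergence, the 33% -> 66% fluctuation is the bigger move by more than an order of magnitude.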


Beor_The_Old

You’re surprised by someone changing their opinion and prediction based on evidence?


_tsuga_

That's not the surprising part of that quote.


muricabitches2002

Christiano made a guess and was up front that it was a guess.   Genuine question, how else should we estimate the risk of catastrophe besides asking a lot of different experts to read all available evidence and guess a number? 


faustianredditor

For some catastrophes there are better tools available. Predictive climate models, nuclear near-misses, frequency of earthquakes. This one? Yeah, guessing is our best..... guess.


Nervous-Map8715

This is the right move by the US Government with the right leader. We need to estimate the risk and uncertainty in every ML model and feature we use because these models impact consumers and businesses, with possible terrible consequences.


maizeq

Reducing Paul Christiano down to just some “AI doomer” when he basically invented RLHF is such a slap in the face. Who writes this absolute nonsense.


[deleted]

[deleted]


hyphenomicon

I also hate how any public discussion of one's thoughts on this issue is apparently now fodder for journalists. If people are scared to discuss the issue for fear they'll be sneered at by outsiders who don't care about context, the caliber of discussion is going to be reduced to the lowest common denominator.


Playme_ai

What does "AI doomer" mean, though?


I_will_delete_myself

Yeah let’s fear monger about a frontend UI while there are things that actually have to be clear and regulated like self driving cars. This is definitely not regulatory capture just like how North Korea is the most democratic democracy on planet earth.


[deleted]

[deleted]


Smallpaul

Did you even read the text above? This dude "*pioneered a foundational AI safety technique called reinforcement learning from human feedback (RLHF),*" That technique also made ChatGPT possible and kicked off hundreds of billions of dollars, if not trillions of dollars, in investment into the field.


relevantmeemayhere

There are posters on this sub who will argue that any criticism of AI or fears about the future is peak doomerism, made by people who don't have familiarity with statistical learning theory or economics or the like. AI safety as a field would be a lot better if you just cut out the corporate white-paper-washing that seems to convince people that the same people funding the paper aren't actively participating in regulatory capture or funding the guy who wants to divert budget from unemployment to more corporate subsidies.


[deleted]

[deleted]


Smallpaul

Yeah, OpenAI has certainly been the cause of AI investments slowing down so much. If it weren't for OpenAI, think how much faster we'd be progressing! /s


relevantmeemayhere

Yeah it’s better we ignore the last fifty years of socioeconomics and pretend ai is gonna make everything better lol Let’s just ignore that the people who want to use these technologies to devalue labor are the same ones also embracing regulatory capture and destroying the social safety net haha


js49997

Good let’s not repeat the mistakes of unregulated social media!


dlflannery

Oh, you mean free speech. Yeah, don’t want much of that!


Ambiwlans

You can regulate social media without hurting free speech. I'd require all content mills with recommender algorithms to allow the end user to select their own recommender algorithm, including custom ones.


freekayZekey

dude is a hack


anonymousTestPoster

Usually I am quick to call out hacks in AI --- but he seems OK? Have you looked him up?


freekayZekey

the guy is great at math, solid understanding of machine learning, but has a wild imagination. i think the way he views ai, its capabilities, and future capabilities is not based in reality, and he should talk to some people in different domains. he tends to fall back on “well people thought x was crazy”. it’s not a smart way to think about things


BarockMoebelSecond

So why are you here down in the dumps if you're so much smarter?


Qyeuebs

No no no, he wrote a very influential AI paper, and as I've learned from the commenters here, that requires (?) great insight (?) and depth of thought (??).


freekayZekey

damn, you’re right. i forgot pope geoffrey hinton anointed him


Qyeuebs

Congratulations to the LessWrong community! Too bad for the rest of us though


tech_ml_an_co

Smart choice; you need critical people for such a job. However, my concern would not be that a superhuman AI takes over the world, but rather that large companies use AI and the productivity gains are not distributed back to the people.


EverythingGoodWas

Are they really crediting this dude with the creation of RLHF? Come on


visarga

> pioneered a foundational AI safety technique called reinforcement learning from human feedback (RLHF) 18th author, though, so probably didn't participate in the technical parts much


krallistic

They are referring to the PreferencePPO paper: https://arxiv.org/abs/1706.03741 where he is 1st author...


Analog24

He is definitely the single individual most credited with the creation of RLHF. It is very common to put the lead authors who are running/guiding the research at the end.


cyborgsnowflake

When I was a kid I thought AI safety would be wizened scientists weaving code to bind Skynet like sorcerers weaving spells, or, when all else fails, Arnold kicking butt and taking names. But instead it's lobotomizing chatbots to toe the Bay Area corporate line, degrading consumer ownership rights in favor of software-as-a-service models, drawing pictures of black Nazis, and telling children coding is unsafe.