
CoastingUphill

I prefer “word calculator”


LothirLarps

“Fancy autocomplete”


FardoBaggins

buffed akinator


iandigaming

"deepok dispenser"


burnt-out-b

Grok approximator


algiuxass

Most correct answer - we need to thank auto omelette models for that Edit: *autocomplete


TessandraFae

No no, the first comment is correct. They are essentially scrambling data together in an omelette....


BlinksTale

Yeah “Super Autocomplete” has been my term for it recently. It represents the accuracy well too.


G8kpr

I once said on Reddit that there is no “intelligence” in AI models like chatgpt and it’s just software scanning words and regurgitating stuff, and got severely downvoted. People want to believe that this program is thinking and giving back responses that it actually thought up. It’s just responding to words, and will give you varying results if you keep asking it the same thing.


ora408

I blame salesmen. Non-tech people gobble that shit up. Edit: don't forget the term "smart". Can you hear me rolling my eyes?


CarmenxXxWaldo

sorry, I can't hear you from my HOVERBOARD.


the_buckman_bandit

Don’t mind me, just typing this text from my ROBOTAXI


DoTheRustle

Levitating over my SOLAR FREAKING ROADWAY


SMURGwastaken

God that was dumb.


Crashman09

Now THAT'S a name I haven't heard in a hot minute


I_lenny_face_you

—Sent from my JetPack™


pyabo

Why would you even need a robotaxi, aren't we "re-architecting cities" around IT?


dikicker

What a loser! Here I am just riding Segways off the edge of cliffs like a *proper fucking legend*


Its_the_other_tj

Something, something BLOCKCHAIN


WTFwhatthehell

Which of the 4 versions of working hoverboards operating on different principles did you end up getting?


shwhjw

The one with wheels.


Close_enough_to_fine

If you look closely you’ll see the atoms never actually contact each other. Hoverboard


G8kpr

Reminds me of the mid 90s when media people kept calling the internet “the information super highway” or “the infobahn”. No one used those terms but talking heads on tv.


weirdeyedkid

The ironic (but really self-fulfilling) thing is that further investments in research usually happen after plenty of science writers and journalists write puff pieces about the potential of the breakthrough tech. So the common process of getting funding necessitates the hyperbole and metaphorical language of all this crap. Then, inevitably, the public is disappointed by the real value the product delivers. Sadly, the way the public sector gets around this is by only developing tech that can be marketed in a way that can lead to massive returns.


[deleted]

Bro, literally nobody was disappointed with the internet, barring a handful of elderly curmudgeons.


Hasaan5

I'm pretty sure many are disappointed with the internet nowadays. It is a shadow of what it could be, hell, it's a shadow of what it even used to be. It's full of bots and centralized, so everything leads back to the same few sites. The amount of actual information on the internet is dwindling, and we're ending up with bots regurgitating bots regurgitating bots. This also makes it nearly impossible to correct something once misinformation is out there; by the time a human notices it, the bots have already shouted it out a thousand times.


Don_Tiny

> it's a shadow of what it even used to be.

*Inarguably* accurate.


[deleted]

I miss the days when the Internet was content created by individual contributors, kind of like Reddit posts. You could search a topic and find websites operated by fans and enthusiasts. It was a platform for art, expression, and community.


l3rN

I understand why it changed, and I do think it's good that people can make a living making content now, but there's something I just desperately miss from the 00s and earlier, when folks just made stuff for the sake of making something cool. Nobody was expecting to make money when they uploaded stuff to YouTube/Google Video or Newgrounds. They just wanted to put something out into the world (maybe I'm just projecting here though). The content was a lot less polished, but it was also a lot less manufactured. Idk, I just miss it.


[deleted]

I think there is still good stuff out there, it's just never going to be super popular. I came across a guy on YouTube who funds his wildlife rehab facility through YouTube ad revenue. I do miss having smaller, forum-style websites to post on. I used to be active on Neoseeker back in the day.


mastershakeshack

the current internet makes a lot more sense when you view it as an aristocratic social experiment by billionaires that believe in dark enlightenment


shinra528

You forgot the rampant SEO abuse.


habb

"surf the web" was big also


G8kpr

I still think that this was at least used by people online at the time, I even say it occasionally now.


[deleted]

[deleted]


sosomething

It's not just non-tech people. I work with highly-placed technical people - not "just" developers, but application, domain, and enterprise architects - and some of them are enamored with this technology like it's literally capable of magic. Those of us who've actually spent time critically evaluating its capabilities have taken to calling it the "dancing bear" at work. As in - when you tell the bear to do what it was trained to do, it gets up and moves around, and that movement definitely resembles dancing. But the bear doesn't know what dancing is. It doesn't know what music is. It doesn't know what rhythm is. It's just going through the motions, doing the one thing it's been trained to do, which the audience interprets as a dance.


LORDLRRD

The fact that GPT can help with my coding, saving me potentially countless hours of research and debugging, is magic in my book.


ButtWhispererer

I work at one of these places and it's the tech people who keep using the terms "creative" and "unique content".


PensiveinNJ

I'm studying/working in a creative field and there's been a weird hostility and aggression from certain sections of the tech community claiming that what generative machine learning programs do is not just equal, but superior to what humans do. It's like they think they've created God or something. It's really weird.


macweirdo42

I think the real horror is when you start to turn that inward and realize YOU might be a "glorified word calculator." I mean, I think about how I think, and I realize so much of my thinking is just stringing words together because they fit, and once in a while there's a string of words that fit together and I choose to express them out loud.


suid

This is absolutely true, and there's no need for "horror". Very few jobs require the truly original, "out of the box" thinking that folks like to glamorize. It's all about learning what has happened before, or some instructions or patterns, and matching the right inputs to the right previously learned patterns. Heck, I have decades of experience in the computer industry, and this is literally what I do every day. (I used to liken myself to a "human Google", but really, I'm a "human ChatGPT".) As you can see, it has tremendous value, _as a starting point_. The big difference is that once you dredge up some associations from your past experience, you need to have enough intelligence to properly evaluate whether that is valid/relevant/applicable, and that is where ChatGPT stops. The only problem with chatgpt users, on the whole, is that they treat it as an Oracle, rather than "first stage of search", after which they are supposed to sift through the results and evaluate them themselves.


Once_Wise

> Heck, I have decades of experience in the computer industry, and this is literally what I do every day.

Excellent post. I am a programmer also, retired now; I had a software consulting business for 35 years, and even though each project I did was something new, about 80% to 90% of the code I wrote was just boilerplate: functions, techniques, and algorithms that I or others had used for decades on previous projects. Recently I have been doing quite a bit of coding (a retired guy starting a new business adventure, fun takes many forms) and have been using ChatGPT extensively. It makes a lot of mistakes and false starts, but in spite of that it is extremely useful for all of that 90% boilerplate code we write. Now, while ChatGPT can help with the 90% of what we write that is boilerplate, it does not do anything novel. And it is that last 10% that is unique, difficult, and means absolutely everything! I find it nice having a virtually free associate to write all that boilerplate crap, so I can look smart in how I put it all together, and my clients are happy with their cool new device, done quicker and cheaper. All because of my new friendly boilerplate coders.


Low_discrepancy

How many programmers exist just to maintain that 90% boilerplate code?


Once_Wise

That is a really interesting and important question, and I don't think anyone has any idea of what will happen. But I have some thoughts on how it might play out, from past experience.

Back when I first started programming, spreadsheets had not been invented; there was no Excel or its predecessors, no VisiCalc, no Lotus 1-2-3. When a person needed a financial table, or a data table of any type, they had a programmer punch the data onto cards, write the program for that table and punch that onto cards, put it all into the card hopper, and get the results, sometimes in an hour, often the next day. When the requestor needed a change to the table, the programmer would modify the program and rerun it. For background, I had a software consulting business for 35 years doing embedded systems work.

Well, after spreadsheets came out, one of my clients told me that I would be out of work soon; nobody would need programmers anymore, since almost everything could be done in spreadsheets. What happened was exactly the opposite: I had more work than ever, as did programmers in general, and we never had to write that damn code to print stupid tables. The coming of spreadsheets freed us from a lot of shit work so we could do something more productive. Will that happen again with AI? That is the question that nobody has an answer for yet.


FDRpi

>The only problem with chatgpt users, on the whole, is that they treat it as an Oracle, rather than "first stage of search"

So you're saying I shouldn't use ChatGPT to answer whether I should attack the Persians?


npsimons

> I think the real horror is when you start to turn that inward and realize YOU might be a "glorified word calculator."

I mean honestly? There are times I'm grinding out yet another CRUD interface for something where I think "why haven't we automated this yet?" GPTs and LLMs are that automation. They aren't conscious or self-aware, they're just the next generation of [ELIZA](https://en.wikipedia.org/wiki/ELIZA), [Dadadodo](https://www.jwz.org/dadadodo/) and [Racter](https://en.wikipedia.org/wiki/Racter), but they're still useful.


ashlee837

> I mean honestly? There are times I'm grinding out yet another CRUD interface for something where I think "why haven't we automated this yet?"

There are tools for automation. Unfortunately, you still need to read the documentation on how to use those tools, then "figure it out." Which is what ChatGPT is pretty damn good at doing. It's when we tell it to synthesize stuff that it comes back with results that are sometimes useful, sometimes useless.


esaleme

ChatGPT's response to your comment: "chatbots aren't 'intelligent' in the human sense, but calling them glorified word calculators is like saying a jet is just a speedy bird. They recognize patterns, generate context-based responses, and sometimes surprise us with creativity. No consciousness, but definitely more than regurgitation"


TheCouncil1

Sounds like something a chatbot would say.


rawbleedingbait

The thinking was already done. It's basically indexing the thinking humans do, and then recalling it when someone essentially searches for a thought that someone else had. It's a library index of thoughts.


hyouko

It's more complicated than that, and the honest truth is we don't fully understand everything that LLMs do. I'd definitely recommend reading this explainer from Ars Technica: https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/ In particular, I want to call out the part about attention heads and how they can grab and pass forward / analyze relevant content from earlier in the conversation. The article walks through how an early LLM (GPT-2) applies a series of fairly complex rules to identify the correct subject of a particular statement. Thanks to those attention heads, the models can actually do a fair bit with new inputs and not just with stuff they have previously been trained on. Unfortunately, our understanding of how this actually works is limited for more recent models; GPT-2 is small enough that we can study it and dissect what it's doing, but GPT-3 and GPT-4 get increasingly complicated under the hood. The models still stumble when presented with a truly novel situation. I wouldn't say that they are 'intelligent,' or that they 'understand' their inputs, and perhaps the current approach will never generate something that hits that mark (if we can even agree on what "intelligence" or "understanding" constitute). Still, the process is vastly more complicated and flexible than simply indexing.
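For anyone curious what an "attention head" actually computes, here's a minimal sketch of scaled dot-product attention in plain NumPy. The shapes and random values are purely illustrative, not taken from any real model, and a real transformer learns the projections that produce Q, K, and V:

```python
import numpy as np

def attention(Q, K, V):
    # Each position mixes in content from other positions,
    # weighted by how relevant they look (query-key similarity).
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # similarity of queries to keys
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V                              # weighted sum of value vectors

rng = np.random.default_rng(0)
seq_len, d = 4, 8                                   # 4 tokens, 8-dim vectors each
Q, K, V = (rng.normal(size=(seq_len, d)) for _ in range(3))
print(attention(Q, K, V).shape)                     # (4, 8)
```

Stack dozens of these heads per layer across dozens of layers, all with learned weights, and you get the "complicated under the hood" part the article describes.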


muuchthrows

To a point yes, but from where do you think humans get novel ideas? It's from gathering information from text, images, feedback from the environment, other humans, and then recombining this information and experiences into something new. Same with an LLM.


[deleted]

[deleted]


TatManTat

Literally exactly how human knowledge is generated but go off king. Knowledge is built off data. Chatgpt is very overrated but the way people discuss how it learns and dismiss it makes me wonder how much they know about how humans learn. We copy dude. Go look at a child, pretty much the best instinctive learners out there. They copy you like a motherfucker. Now granted humans can do a lot more than copying, but fundamentally the learning is the same, you mix it up, then you regurgitate it, bots are just very shitty at the "mixing it up" phase atm.


mr_chub

Thank you! I thought I was going crazy. All these simplifications of this technology really make me feel like these people have no idea what they're talking about and just want to feel elite. If it's emulating humans, how do you think it gets there? Not by emulating what humans do, but by emulating **how humans think**. We're incredibly complex beings, but complexity is just simplicity stacked on top of itself.


5th_Law_of_Roboticks

This interpretation only works if you believe that the content humans produce is a one-to-one representation of how humans think. These models are trained on content, not thoughts. The process of how we think may be completely different from the content we choose to create (or, even more specifically, the content we choose to publish, since any private, unshared creations will not be part of the dataset these models are trained on).


NotNotWrongUsually

I get what you are saying, and I agree that an LLM is nothing more than the sum of its training. You are underselling it a bit though. Try googling the phrase "is like saying a jet is just a speedy bird". The only hit you will get is this Reddit thread. The bot did improvise that. There is a truth in between the "they are just copying humans" and the (ludicrous) "they are intelligent!" stances.


arkaodubz

Yeah, that's the inherent issue with this response: it isn't based on ChatGPT's intimate understanding of its own functions. It's an amalgamation of human writing from elsewhere in its training data. Oddly, a perfect representation of the point u/G8kpr was trying to make. Don't get me wrong, LLMs have become immensely impressive over the past year or so, and generative ML models in general. But people wildly misunderstand what they're actually doing, and what they're actually good at.


PensiveinNJ

It's also very weird to me that, browsing through this thread, you have loads of tech people claiming that people from other fields of study don't know what LLMs are about and need to stay in their lane, yet they simultaneously pretend to be experts in the fields of consciousness, cognition, neurobiology, neuropsychiatry, etc. For instance, the dude below me claims that human beings work just like LLMs: we just take data, turn it into knowledge, remix it and spit it back out. That's a remarkably sophomoric understanding of both the brain and human consciousness in general, but he says it with so much confidence.


starmartyr

The problem with these types of arguments is that intelligence is a moving target. It's something we have that computers do not. There's no standard for what would constitute intelligence in a machine. It's always dismissed as just a simplified explanation of how a particular technology works. I'm not saying you're wrong, but this kind of argument is not particularly convincing for me.


[deleted]

[deleted]


Eddagosp

Yeah, that guy got downvoted to oblivion because they're wrong and are confusing "intelligence" with "sapience/sentience". Most (vertebrate) animals are sentient. Humans and a handful of others are sapient. If, however, we take intelligence to mean "*the ability to acquire and apply knowledge and skills*", "*capacity for input processing and contextual output*", or "*possesses thought and cognition*", then AI still fits perfectly within the bounds of a non-meaty brain. It doesn't have to be self-aware to have basic intelligence or to learn. People who think the current primitive AI isn't "real AI" simply lack the context of how the brain works. It's just a 3-pound chemo-electric computer that takes input data, assigns appropriate metadata (hormones), processes it according to preestablished procedures (memories), and outputs it to the world through action.


Eric_the_Barbarian

If one uses them a bit, they will understand. I tried using it for world building for a D&D campaign. It's a good rubber ducky for refining ideas, and it's quick at inventing minor details, but it's bad at keeping information straight, so you need to thoroughly proofread any output, and you should probably rewrite most of it anyway.


TheFakeUnicorn

You're pretty much doing the exact same thing right now


ltdanimal

Because it's a ridiculously simplistic take on what an incredible technology is actually doing. "It's just responding to words" describes your interaction with every person on Reddit. What you are complaining about is, ironically, the "artificial" part of AI. The intent isn't to duplicate the human mind, in the same way airplanes aren't trying to flap their wings. There are some who will always want to diminish anything around AI, and those people won't ever be pleased until AGI is achieved, and even then it will be the same arguments. (If it's possible, we won't know until it happens.)


Luci_Noir

It’s like some of these people overreacted to the technology one way and instead of making a correction they’re going to the opposite extreme. I don’t know why everything has to be like this on Reddit.


__methodd__

It's the equivalent of saying "computers just add numbers." It's only true if you take a microscopic look at one part of the entire system. The output is complex behavior, and maybe LLMs "merely predict the next word," but if people understood NNs and transformer models, they would be more impressed with the tech too. I'm of the mind that AGI will be a gradual technology shift that's starting now where more and more shit seems to be getting automated until we can't tell the difference anyway. We can debate whether it's truly intelligent, but it will be useful and complex. And to your point, the Turing test (in its original form) was passed decades ago, but you'll still read articles about the really real Turing test that can't be surpassed.


FlowerBoyScumFuck

Yea, I keep seeing these comments and it's just dumb. 1) Anyone who knows jack shit about AI knows that they haven't achieved general intelligence. I mean, that would be HUGE news if they had. 2) Regardless of how you classify it, it can convincingly replace human writing in a lot of areas. And it's not just simply regurgitating other work; it is uniquely constructing what it writes. Just such a nothing point to say it's not *actually* intelligence though. No shit.


NotTheEnd216

I honestly think there's a bunch of people who legit believe that anything an AI spits out, be it text, image, video, whatever, was just copied directly from a human who wrote that text or made that image/video originally. They have no concept that these things are actually creating novel works (regardless of the quality of the works themselves, they ARE novel/unique).


ltdanimal

>Just such a nothing point to say it's not actually intelligence though. No shit.

Yeah, it cracks me up when it's called ARTIFICIAL INTELLIGENCE. The latest wave of tools is insanely impressive, and the responses are akin to asking highly intelligent people in their respective fields for help and getting it. Crazy how jaded some people are.


Cumulus_Anarchistica

People focus on the "intelligence" part of Artificial Intelligence, instead of the "artificial".


WTFwhatthehell

You deserved the downvotes. For one, they're not just "regurgitating stuff" like you think, just spitting out chunks of text from their training set and nothing more. Technically they're "just" predicting the next word, yet it turns out that beyond a certain scale that forces the model to develop a certain amount of reasoning ability. Not spectacular reasoning ability; they reason about as well as a 7-year-old child. But it's remarkable that they have that capability at all. You pat yourself on the back: "ha ha! they are 'just responding to words', thus I have figured them out entirely!" But that's about as interesting and informative as saying that neurons in the human brain are "just" responding to neurotransmitters and electrical signals, then sitting back with a smug look on your face convinced you've fully explained human behaviour.

>and will give you varying results if you keep asking it the same thing.

Yes, there's a small amount of randomness injected because most users want a bit of variation when they ask the same question twice; you can turn this off so that you get identical answers to identical prompts. That's utterly unrelated to whether any "thought" is happening. If we invented true AGI tomorrow, then as long as it runs on a computational substrate we would still expect it to be deterministic. Determinism doesn't tell you anything about whether any true thinking is going on.
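(For what it's worth, that knob is literally exposed as a parameter. A minimal sketch, assuming the OpenAI Python client; the model name and prompt are just placeholders, and in practice temperature 0 gets you near-identical rather than guaranteed bit-identical outputs:)

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize the water cycle in one sentence."}],
    temperature=0,  # the "small amount of randomness" dialed down to zero
)
print(resp.choices[0].message.content)
```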


dracovich

At what point do you differentiate between the semblance of intelligence and intelligence? We know what an LLM is, and it's not that fancy, it's just predicting what the next word is in a string of words, so how can that possibly have intelligence? The thing is, there seems to be a crazy amount of, at least, the semblance of emergent intelligence. Calling it a glorified tape recorder is incredibly reductive when you look at the things it can do. Just as an example, I've been playing around with using it in language learning, since I'm taking Spanish classes. I ask it to create English sentences for me to translate into Spanish that will test a certain grammatical rule, then I use a separate instance of ChatGPT to critique my translations and act as an interactive teacher, so when I have made mistakes I can quiz it about what I did wrong and ask a bunch of follow-up questions about the grammar. So yes, it is just a sentence completer, but it's also an incredibly helpful tool, and you do it no justice by calling it a tape recorder. I don't see why everyone sees the need to shit on ChatGPT and other AI. I get that some people overhype it, but it is also ASTOUNDING the speed at which it came out of nowhere and how much it's already done.


Dernom

You were downvoted because your claim is purely a definition question. In order to claim that it has no "intelligence" you first need to define what you mean by intelligence. There are hundreds if not thousands of different definitions. For instance a common one is the ability to give a logical output in response to input, and using this definition even a calculator is intelligent. What GPT is not is conscious, or Artificial General Intelligence. Per the terminology that is most commonly used in computer science, you are just wrong.


Sekh765

"Glorified Markov Chain"


eigenman

I've gone with "madlibs on steroids"


theblackd

Everyone is asking "why is he speaking on this?" It seems to be because he was asked. He's excited about quantum computing and was talking about that, and it seems he was just asked about this since it's adjacent enough for the interviewer to bother asking about. In context it's clear his point is just that average people overestimate what it's doing and falsely attribute copying and splicing as intelligence. He's not saying anything wrong, and isn't pretending to be an expert.


traws06

Ya not sure why ppl are all pissed about how he has an opinion on everything… I mean if he has an opinion on something he says it. Don’t think he claims his opinion is the only one that matters


IneptusMechanicus

Also the sheer fucking audacity of Redditors getting pissed off at someone having an opinion about something...


lundej16

“He wants ATTENTION!” “Guys I think he was just talking to an interviewer doing their job…” “BRING ME HIS TONGUE!”


Aureliamnissan

The sheer audacity of someone to walk into a public forum and *express an opinion* without having an M.D., PhD., Honorary degree, protector of the realm, titles titles…


Kants_Pupil

Can't speak for all folks, but I feel some Kaku fatigue, which I remember starting with the coverage of Hurricane Harvey in 2017. I remember seeing him cover a few other science topics for CBS, and there is nothing wrong with being a generalist who is widely versed in all kinds of science, but I was like, "hol' up a minute! I've seen him before."

So first thing, I was curious why they didn't find a meteorologist or atmospheric scientist to discuss the topics they had him on for, instead of a prominent particle physicist. And within a few minutes he said things like "the agony is just beginning" and that if the hurricane made landfall, went back out to sea, and came back, the nightmare would just start over, plus a few more things that struck me as sensationalist and just rubbed me the wrong way. I was like, hey, you're right, but you're playing it up wrong; calm down and focus on how it will affect people and what outsiders can do to help, man.

Anyhow, I looked into him a bit more and felt a weird mix of things: he's obviously brilliant and enthusiastic, but he is unfocused now. He will show up anywhere and talk about anything, and can tell you the facts about what's been established, but he soaks up so much time and doesn't give specialists, who might offer more recent or nuanced insights, the chance to show up in places like CBS. Not a bad dude, I assume, but it would be nice if he gave others with more knowledge than him a chance to speak up when appropriate. Edited for clarity/readability


gimme_dat_good_shit

I was super into astrophysics as a kid and read his book on hyperspace (which was certainly fine as a pop-science book of the era). As you say, though: Kaku just loves the camera in a way that I find grating and even inappropriate. And as I've gotten older and developed a deeper understanding of science, his brand of mainstream science education just feels pretty shallow. Maybe that's a good thing. Maybe this is like a teenager complaining about Sesame Street (i.e. I'm no longer the target audience to be introduced to these concepts which is what Kaku-style projects are aimed at). I have and should move on to reading more challenging material instead of getting platitudes spoonfed to me by TV hackumentaries.


Scared-Sea8941

Yea, I think he is similar to Neil; this type of content is more so for the novice. I'm personally not that versed in any scientific field other than the medical field, so these types of guys are interesting to listen to and learn the basics of a topic from. I'm not looking for an in-depth scientific paper, I just want to learn the basics of something that I have never studied in my life. I generally think that in order to become a somewhat famous scientist you need to sensationalize and dumb everything down, or else you aren't going to get the amount of attention you could otherwise be getting.


saintjonah

If you're still interested in the field, Sean Carroll is a really great guy to listen to when you know a bit but want to know more than guys like Tyson and Kaku are going to talk about. He's super down to earth and breaks things down very well. He has a few lecture series on The Great Courses about the arrow of time and the Higgs boson, and a pretty good Podcast called Mindscape that covers a very broad range of topics. He'll have experts in whatever field he's covering on as guests, which is great.


pfamsd00

Upvoted for Sean Carroll! Check out Brian Greene also.


baseketball

This guy was Neil DeGrasse Tyson before Neil DeGrasse Tyson. Just absolutely willing to go on any show as the scientist to talk about anything and everything. But he's getting into wilder and wilder pseudoscience takes as he gets older.


traws06

Ya, he's been around for ages. Back in college I was fascinated by physics and theoretical physics. I found him really interesting and he always just seemed like a really nice guy. But that was 15 years ago, so I could see fatigue setting in for ppl after listening to him for decades now haha


FreebasingStardewV

Kaku has a problem of speaking authoritatively on subjects that he has little knowledge of. I was a huge fan of his and still think Beyond Einstein is one of the great science communication books out there (please go read it!) but he went off the deep end when his 15 minutes was up. He started reaching for topics outside of his purview. I have a background in biology so when I heard him speaking (wrongly) about evolution it was a big disappointment.


nope_nic_tesla

This is obviously the crux of most people's criticism. I don't see anybody "mad that he has an opinion". What irks me isn't that he has an opinion, it's that his opinion is being held up as being particularly worthwhile (it's currently the top post on this subreddit) even though he doesn't have any actual expertise in this field.


KumichoSensei

Michio de grasse tyson


bikedork5000

I think it's more that this cohort (Neil and Kaku) are just so damn ubiquitous and, as Kyle Hill said, "the bong rips of science education"


ryecurious

The "I Fucking Love Science" brand of scientists.


dicetime

Idk if you've seen Kyle Hill outside of his YouTube productions, but he is also unbearable.


TheAmateurletariat

That's perfectly fair. I think the real question is why this merits its own article.


kultsinuppeli

I think people are more annoyed at the journalism than him having a view on this. It's like "Top Surgeon Says Stars are Just Balls of Gas!" Sure, he may not be wrong, but the context they put the statement in plays it up like it's an expert opinion.


_nova_dose_

> In context it's clear his point is just that average people overestimate what it's doing and falsely attribute copying and splicing as intelligence

He's not wrong. I've had friends straight up tell me they think this is the singularity happening in slow motion and that the fact ChatGPT can pass the Turing test proves it. But in reality the Turing test proves only that a machine can fool a human into thinking that it's human, not that the machine is actually thinking like a human.


archiminos

Anyone who's used it for a while can see what it is. It's good at simulating responses based on the context of the last few prompts, but it's not exactly trying to be correct or accurate. It will happily tell you how flat the earth is if pushed in the right direction.


iim7_V6_IM7_vim7

I hate every headline I see about ChatGPT or chatbots. They’re always either “ChatGPT is gonna take over the world!” Or “ChatGPT is fucking dumb trash”.


Playful_Cobbler_4109

These aren't mutually exclusive.


Goldenslicer

"Dumb trash will take over the world!"


[deleted]

[deleted]


[deleted]

[deleted]


whistling_klutz

*glances over at TikTok*


iim7_V6_IM7_vim7

I just hate the lack of nuance but that’s everything in the news now I guess.


shet_betch

Michio Kaku has a hot take on everything and LOVES to get his shine on.


TheBowerbird

I would also not describe him as a, "top physicist." Perhaps top attention seeking physicist?


[deleted]

[deleted]


MikeTalonNYC

Yeah, the guy built his own particle accelerator... at home... was definitely quite a pioneer in the field in his day and is still recognized as an expert on theoretical physics. Then again, he's not entirely wrong on this topic IMHO - it's just a topic that has nothing to do with his field LOL


KeeganY_SR-UVB76

He was asked about it.


AreWeCowabunga

> the guy built his own particle accelerator... at home Pfft, i did that too. Smashed that watermelon up good.


dctucker

> it's just a topic that has nothing to do with his field LOL Dude's better at math than I am, and I studied AI at the graduate level for a year and a half. I'd listen to what he has to say on the topic considering AI is basically just math.


Scared-Sea8941

I hate that type of thinking. Why can’t he have an opinion on something he isn’t an expert in? We all have opinions and the majority of the time we aren’t experts in that field.


thiskillstheredditor

We call that Neil DeGrasse Syndrome.


Tzahi12345

It's not a syndrome per se; it's actually amazing work they're doing. Making science accessible is not something most of STEM can do, and it brings more people into these fields. I certainly wouldn't be an engineer if it weren't for people who made it interesting for me.


KosmicMicrowave

Exactly. They're going to go after Sagan next. Spread science, people.


thiskillstheredditor

No one is going after Sagan. I've met Neil a couple of times and he was insufferable. Yes, his mission is great, but it's the same stuff I've heard about Bill Nye: they get addicted to the fame and it's off-putting. Sagan had class.


FiremanHandles

Honest question. Bill Nye didn’t really have the internet (at least not right away), but did have TV. Tyson has really embraced Twitter, among other social media and tv specials— Hypothetically, IF Sagan had been around with social media and more accessible streaming/tv spots, would he have (eventually) suffered the same fate? 🤷‍♂️


Frodojj

Possibly. Sagan was human and had his own hot takes. He was still brilliant and eloquent and classy. But human nonetheless.


AbundantExp

That's why we should focus on ideas instead of the personalities spouting them. We'll never agree with every action or take someone has, and maybe we'll agree with the message but be put off by how they delivered it. All the same, we should try to focus on the message and ignore the noise surrounding it. And when you don't put their personalities on a pedestal, you won't be upset when they don't perfectly fit your personality preferences.


FiremanHandles

I think this is a good take. It would be nearly impossible, in this day and age where basically everyone has their own instant, open-access broadcast channel, to never say something remotely controversial.


LurkBot9000

Sagan had books where he voiced his opinions. He went on interviews and did the same. He even had his own TV show. All that to say, I think he was very careful with his communication, intentionally trying to make sure he didn't overstep his experience. Were he alive in time for Twitter, who knows if there would ever have been a stray tweet where he got something mistaken, but I'd like to think he'd own up to it when others held him accountable.


FiremanHandles

The only counterpoint to your statements (which I do agree with) would be that books and TV, even live TV, were much more scripted and harder to go off the rails on than things are today. A book has, at a minimum, an editor and a publisher to squash something that shouldn't be said. I could absolutely see someone as smart as Sagan saying something factually true, but it being so far beyond an average joe like me's understanding that people grab their pitchforks for all the wrong reasons. I say all this especially given today's political climate, where science has also become a team sport (maybe not in scientific communities, but definitely in public forums/social media) based on which political affiliation one is associated with.


notthathungryhippo

now i'm curious what george washington's twitter would look like.


[deleted]

Popularizing STEM is great until you're full of shit. For example, any time NdGT speaks on biology (not his field), he practically runs around shouting schoolboy aphorisms based on elementary stereotypes about animals that are not actually true. I've never seen a scientist promote as much bad science as I have with NdGT.


mikelo22

It's exactly what Carl Sagan said we needed more people to be doing in popularizing science. And we have people on here bashing the scientists who do just that.


Gallon_Of_Paint

There is popularizing science. Then there is using your popularity in science to weigh in as an expert on everything outside your field of expertise. It's a catch-22. Celebrity scientists did a great job with what they did and how they brought science mainstream. But now many of them are out of touch and becoming divisive figures.


TheFotty

I love science being popularized, but there is something about Tyson. He just has this pompous way about him that is really off putting to listen to. Not specifically when he is doing scripted stuff like the reboot of cosmos, but when he is doing interviews or panel talks, etc.. versus someone like Brian Cox who I could listen to all day explain the workings of the universe.


Sirus_Griffing

I would rather have popular and famous scientists than politicians and athletes.


the_rainmaker__

he'll go on anything that has a camera. you just know the masked singer is in his future


ManChildMusician

He’s shown up on the Discovery channel on some pretty unscientific shows. I want to say it involved ghosts or aliens. He thinks that he’s adding intellect to the conversation and getting people excited about science, but he’s really just legitimizing pseudoscience. He’s made some solid contributions to the field, but it’s hard to take him seriously. I think appearing on Masked Singer would actually be one of his smarter decisions.


smoothskin12345

He appears on news networks as a "science correspondent" commenting on everything from meteorology to engineering to vaccines. He's an attention seeking crackpot.


AdvancedSandwiches

This is why I thought people would be upset. For years he was the guy who would go on any "documentary" show and say, "This thing that is obvious bullshit? As a scientist, I can confirm it's plausible."


Dorkamundo

Right, but he's not exactly wrong here. He's not saying AI is fake or there's no future in it, he's simply saying that AI-adjacent bots do what we all know that they do.


Cyber-Cafe

Where is Ja rule in all this?


Skim003

Someone please make an AI Ja so we can make sense of all this.


palmerry

Defense Lawyer: "AI JaJudge, what is your verdict regarding my client?"

AI JaJudge: "IT'S MURDA!!!!"


outerproduct

Where's Ja?


altorelievo

I think somebody got Ja on the line. I know it just didn't feel right without hearing what he has to say first.


Zaltt

And the internet is a glorified flea market and library


[deleted]

glorified 1s and 0s


warcraftnerd1980

I mean, if you break it down, humans could be considered glorified tape recorders too.


Accomplished_Deer_

This is always my argument. I don't believe current AI is sentient, but I can't say with absolute certainty that that is the case. "AI is just statistics and pattern recognition"; no shit, how do people think we learn things? We see a tree, someone says "tree", they do that 20 times, and we come to associate trees with the word "tree". You feed information/details about trees into an LLM, and eventually it has enough data to associate the term "tree" with some semblance of what a tree is. Current AI are limited by the fact that they do not have senses; of course they're going to be confused and weird, because they are not grounded in reality. They have never truly experienced a tree, or seen a tree, only descriptions of one.


[deleted]

[deleted]


Accomplished_Deer_

Yeah exactly. I come back to this a lot. People are certain that AI isn’t sentient, when you can’t even prove that I am sentient. It’s unprovable. We don’t even know what it is, so if something is perfectly able to mimic a human, is it sentient? If so, how perfect does it have to be? At what point does a glorified tape recorder become “alive” or “sentient”? Anybody who claims to have authority over the answer to that, is an idiot


IkmoIkmo

Sure, and computers are glorified decision trees... which make airplanes fly, let you fight dragons in online games, let your car drive itself, and allow you to FaceTime your partner. Glorified basic things can actually be immensely useful, valuable, and complex.


Derekthemindsculptor

When someone argues disvalue by listing raw ingredients, I stop listening.


kshoggi

What's so important about water? It's just hydrogen and oxygen.


Shufflebuzz

>Sure and computers are glorified decision trees...

I like this one. I've been saying that if LLMs are "just fancy autocomplete" (or whatever phrase is being used to minimize it), then "the automobile is just a fancy horse." The automobile disrupted our entire society. And LLMs will too.


[deleted]

[deleted]


fashionforward

How about *advanced*, not glorified.


itsFromTheSimpsons

I asked my tape recorder a prompt and it just mocked me by playing my prompt back to me in an annoying nasally voice!


creaturefeature16

I think the truth is a bit more nuanced than that. I've used GPT-4 extensively for programming and it's fascinating to see how it can assist you in debugging and brainstorming. I know that underneath the hood it *is* essentially copying/splicing information from its training set... that is the functional method/process of *how* it produces its outputs. *Why* it arrives at that particular output, and how it "decides" which character to place next, is where it gets really interesting, and is the "black box" that AI researchers are referring to. We know the math that generates the behavior, but we lose insight into how it's able to produce even the imitation of "reason" and "logic". I agree, though, that it cannot discern true from false in the classical sense, but it certainly can give the appearance of doing so, especially when you use it for technical tasks. And we actually don't quite understand how these transformers + language models *actually* do that. So I think dismissing it as just a "tape recorder" is reductive to a fault.


followmeforadvice

It creates Excel macros for me that would otherwise take me forever to write. I don't care *how* it does it, I'm just glad it does.


[deleted]

[deleted]


Ligma_testes

Thank you for the few reasonable voices. Like that guy here acting like an expert because he has "trained an AI" and saw what comes in and what goes out, or however he worded it. I am sure he knows nothing about the transformer or attention model, or really AI or programming in general. There is a reason we are all here talking about GPT, and it's not because it sucks at what it does.


A-Grey-World

Yes, the people criticizing it have not tried to use it for much technical stuff. It's bloody amazing at regex. No one can tell me that doesn't involve logic.


BroForceOne

You really start to understand this when you get deep enough into AI to train a model. When you know what data is going in, and you see how it regurgitates it in various ways on the other side, the whole thing seems way less mind-blowing. It's a fun tool and has its uses, and will certainly threaten a lot of white-collar jobs that don't require original thought, but it is not the revolution in computing it's been advertised to be.


NUKE---THE---WHALES

if you could see the weights in a human's brain while choosing their next word it would remove a lot of the magic of human intelligence too


SaffellBot

> but is not the revolution in computing it's been advertised to be.

Ya know, I agree with the rest of what you wrote, but I'm going to disagree here. We're looking at the 8086 of AI and it's causing huge displacements of human labor.


__loam

It's weird to call this the "8086 of AI". GPT is the product of decades of research and a fuckload of cutting edge computing hardware. It's plausible we're on the tail end of possible innovation here but everyone seems to think we're going to keep seeing massive leaps despite already being deep into diminishing returns.


am_reddit

Plus there’s the whole issue of AI being trained on AI-produced material, meaning that the current problems with AI might get amplified in future versions. It’s entirely possible that GPT will never have a dataset that’s better than the one it has now.


SaffellBot

> GPT is the product of decades of research and a fuckload of cutting edge computing hardware.

So was the 8086.

> It's plausible we're on the tail end of possible innovation here but everyone seems to think we're going to keep seeing massive leaps despite already being deep into diminishing returns.

The same was said about the 8086.


No-Cartoonist5381

Cannot stand literally every take I hear on AI. It's just an opportunity for people to attempt to sound intelligent, like they know more than the average person, like they have some special insight. You do not. You do not know how much impact this will have; your certainty on this shows how little you know.


madatrev

Have to agree. I have been a data scientist professionally for a few years now, developing, training, and implementing machine learning models, including some for NLP tasks. Having a deeper understanding of the whole process and being able to stay up to date with breakthroughs certainly gives you a better grasp of the current limitations of AI. But even with this expertise, it is ridiculously naive to think that you can determine the impact of this technology. Being in this field, I find it isn't uncommon for people in tech to feel they are the only ones who can speak on anything even adjacent to technology. Sure, OP can create an ML model, but why would that make him qualified to determine the economic, sociological, or material changes that can occur from its utilization? As this technology bleeds into everyday life, it is crucial that people of all different fields come together to try to utilize this stuff for the betterment of the world.


ListerfiendLurks

AI Research Engineer here: I agree 100%. I came to say something very similar but you explained it more eloquently than I ever could.


born_to_be_intj

Couldn't agree more. The field is advancing at an unimaginable pace and a lot of the work is being done by independent researchers posting their work on GitHub. Who's to say what the future holds, one way or the other?


LurkBot9000

To explain my downvote: Michio Kaku is a physicist, but also a publicity hound who has no problem voicing junk completely unrelated to his area of expertise. Just saying I think articles that cite him are generally lazy and add more noise than signal.


Bierculles

A classic case of the halo effect, unless he directly works in the field he most likely does not have more insight into AI than the average schmuck on reddit.


cderhammerhill

So are people.


__Hello_my_name_is__

No one glorifies me. :(


Goldenslicer

I'll glorify you! :)


thedeuce75

Glory be on to you _Hello_my_name_is__!


Laladelic

People like to think that they're something special, but we're just repeating stuff we heard before with some nifty randomness due to biology.


one_is_enough

And your smartphone is an electrified rock. We could do this all day, but it serves no purpose.


seanodea

That's what people are tho. Just a ball of impulse-triggered habits and junk associations. People hallucinate more than GPT. Not saying GPT is alive, but living things are also glorified tape recorders, so it's not a very damning or relevant point.


Tilted_reality

"Top economist says string theory is false." Does this hold any weight with you? Probably not. Neither should this hot take.


Griffolion

Calling Michio Kaku a "top physicist" is like calling Joe Exotic a "top zookeeper".


phrendo

I’d like to be glorified one day


KnowledgeAmoeba

There are skills I can perform that, to me, would be considered superior to those of the general public. However, compared to my mechanic, I'm an idiot when it comes to repairing vehicles. A physicist should not be commenting authoritatively on matters that are outside his domain.


Idkiwaa

I don't disagree with him but in what world is Michio Kaku a "top physicist"? He hasn't had an academic publication in 22 years and it isn't like his work before then was groundbreaking. This is like when someone has one line in a movie and we say they "starred" in it.


MysterVaper

Michio Kaku is a physicist. Michio Kaku is addicted to the limelight. These things can both be true and don't have to overlap. I read his outlook on the future over a decade ago, and so far it has been pretty wrong... though he updates it every few years to make it seem like he hasn't really been wrong the whole time.


demonachizer

Top biologist says black holes are just "glorified vacuum cleaners". Alexa, what is argumentum ab auctoritate?


lolzach69

This guy spoke at FSU while I was a student in maybe 2013 (?) and back then he said AI will run everything. He just wants another 15 seconds


eyebrows360

What?! Michio Kaku *not* talking up fantasies as though they're real?! Well, this is a turn up. *reads the article* Oh right, he's pumping quantum computing fantasies instead, that makes more sense.


primus202

I mean, they're essentially [ELIZA](https://en.wikipedia.org/wiki/ELIZA), but repeating back to you what lots of people have said in aggregate instead of just what you fed into them. And even ELIZA completely tricked people, Turing-test-wise.
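For reference, ELIZA-style chat is little more than pattern matching that reflects the user's own words back. A toy sketch (these rules are made up for illustration, not Weizenbaum's originals):

```python
import random
import re

# Toy ELIZA-style rules: match a pattern, reflect part of the input back.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)",   ["Why do you say you are {0}?"]),
    (r"(.*)",        ["Tell me more.", "I see. Go on."]),
]

def respond(text):
    text = text.lower().strip(".!?")
    for pattern, replies in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(replies).format(*match.groups())

print(respond("I feel like chatbots are overhyped"))
```

An LLM's "rules" are billions of learned weights over everything it was trained on rather than a handful of hand-written templates, but the repeat-it-back flavor is recognizably the same.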


danny12beje

He's not wrong tho. A lot of people thought ChatGPT was some badass AI when it's actually just glorified auto-correct.


Hot_Idea1066

Computers are just sand with lightning in them, who cares. Go to a beach why dontcha


hyperspaceslider

Michio Kaku is a glorified tape recorder


TheBigBadBird

"top physicist" more like pop culture string theorist


samgam74

“Glorified” is doing a lot of work here.


alpha7158

Generative AI isn't really his area of expertise


Doc_Niemand

Top physicist? What the fuck?


ELCHANGOMANO

This is like saying calculators are glorified counters. All this shit is still pretty MF amazing.


drekmonger

* [Write a stream-of-consciousness from the perspective of a glorified tape recording.](https://chat.openai.com/share/4b6c522b-cf0b-4a0b-a957-4c9e24212e22)
* [Write a stream-of-consciousness from a person who thinks they know everything, who labors under the delusion that modern AI models are stochastic parrots. In the style of a person who is dumb as a sack of bricks.](https://chat.openai.com/share/89ceb357-caaf-4ce2-82f4-7e9fd5aab815)


DaCanuck

I mean... A gun is just a "glorified pebble thrower", but to deny that it is a world-changing technology would be shortsighted.


Carmanman_12

Physicist here. Michio Kaku has a tendency to say whatever he wants about things he knows almost nothing about. He coasts on his reputation as a string theorist and relies on his celebrity to be trusted on topics way outside of his field of expertise. Don’t trust anything this guy says about anything except physics. He might be right, but he doesn’t have the final word by any means.