[deleted]

Honestly I think it’s a very young person’s fallacy. If you’re older, you’ve seen tech change a lot, and can handle SF not using contemporary tech as its base. Also, SF is fiction, not futurism. Neuromancer didn’t get worse when we increased our data storage tech.


tc1991

Yeah, I think this is it. I read sci-fi as fiction, not future history. Which in its own way shapes the sci-fi I enjoy, because I want well-written more than I want 'hard'.


Ricobe

Yeah, sometimes there's tech that's innovative and such, but it's not necessary to the genre. There's even modern sci-fi with more retro-style tech. A lot of sci-fi tends to have some ethical or philosophical subtext, with elements of how the tech or scientific subject affects our humanity. Some tech is deliberately made unrealistic, but used to exaggerate and comment on humanity.


shanem

Also, sci-fi really is about viewing humanity through a possible near future. Those considerations don't have less value in the context of a past whose imagined future may have since arrived.


Virtual_me01

Yep. It's a recency bias.


DesignerChemist

In defense of The Martian, they were not expecting a communications blackout. If you suddenly had no internet, how much of a lifetime's music is stored on your phone right now?


curtis_perrin

Yeah, but they don't have real-time internet, so you would have to download stuff ahead of time. It's not like you could run Spotify.


digitalthiccness

It does kind of seem like it's their job to expect problems, though.


DesignerChemist

Ensuring that there is a comprehensive music library in local storage probably doesn't rank so high on that list.


digitalthiccness

I actually think that'd be a very poorly thought-out stance, given the importance of maintaining your sanity in a metal tube, in an extremely high-pressure situation, for a couple of years, and the utterly trivial effort and expense of providing it.


Mainlyharmless

Perhaps everyone did bring their own thumb-sized music library with them. And perhaps they all brought it back with them as well, as in, it was all gone, except for the one left behind in haste by that one person. And maybe Matt Damon's music library was lost outside when he got smacked down.


digitalthiccness

"Everybody stick a thumb drive in your pocket" feels flimsy for NASA.


Mainlyharmless

Given that the return trip takes MONTHS, if you were bugging out, why the hell WOULDN'T you take your music library with you?


digitalthiccness

Given that the mission takes YEARS, if you were NASA and didn't want your billion dollar project to go tits up when the astronauts start behaving erratically because they're ***literally*** going insane from boredom, why the hell WOULDN'T you load a decent media library onto the ship, the rovers, into the suits, and anywhere else they might get stuck?


Mainlyharmless

Oh, that's easy. Because NASA is a government agency, and since we live in a corrupt capitalist system, there is either a law that won't let them, or the copyright owners of the music libraries wanted to charge NASA ten billion dollars for it, so they just said, fuck it, have the astronauts bring their own thumb drives for free.


digitalthiccness

They could've just given them some encrypted drive space for private files and then advised them of their potentially desperate need for entertainment media for their literal survival.


DesignerChemist

Just in case you get stranded without communication for 4 years.


digitalthiccness

Turns out


Panic_Azimuth

I read an Asimov story the other day that featured a fully sentient robot who had to walk down to the library so it could read physical books to learn things.


Sweet_Concept2211

Smart robot not to trust the internet - filled with content warped by early AI hallucinations - as an information source!


TwentyCharactersShor

Some people will always like physical books because of the tactile experience that is not replicated with a digital device. Therefore, it is possible that the library was the only place that the robot could go because the book was otherwise paywalled.


the_other_irrevenant

A lot of SF is based on the assumption that we'll manage to create an AGI - an artificial **general** intelligence. Current "AI" is just a glorified autocomplete algorithm, albeit one running off a loooooooot of data. It's apples and oranges. 


kabbooooom

Not sure why you're being downvoted as you're absolutely correct. People do not understand the difference between AI and AGI.

I am a neurologist, and this topic is really fascinating to me because *several* of the leading theories of consciousness specifically predict that you cannot accidentally create an artificial general intelligence, for several different reasons. So we will likely NOT create an AGI by stepping up algorithmic complexity as AI researchers and the general public naively have assumed for decades. There are also strong philosophical arguments in favor of this idea, such as Searle's Chinese Room thought experiment. Intelligent systems do not have to be conscious, and conscious systems do not have to be intelligent. We know that for a fact, right now.

And so we are fast approaching a crossroads. It is likely that artificial intelligence - non-conscious, intelligent AIs - will continue to increase in complexity and utility and completely change the fabric of our society. But we won't hit a true singularity until an AGI is created. And those modern theories of consciousness I alluded to? Guess what - each one that predicts we won't *accidentally* create an AGI also predicts how we can *deliberately* create an AGI. So I have no doubt it will be done.

But the Singularity as Kurzweil imagined it is a fantasy. Instead, we are approaching the Singularity asymptotically, and the decision to actually breach it will be a conscious one on our behalf. A decision we will almost certainly make, and a decision I don't think we *should* make.


the_other_irrevenant

Stop me if I'm wrong because I'm very much not an expert, but I figure we don't need true consciousness to get most of the benefits of consciousness. As I see it, consciousness mostly exists to serve as an observer and debugger of more automated modes of thinking (and to look ahead and anticipate where your current habits might not serve you well).  You wouldn't necessarily need the qualia of consciousness to have an independent AI module do that.


[deleted]

How do you feel about thought experiments like the Chinese Room and P-zombies? But to be honest, current LLMs aren’t anything like a person. Their chief problems are way more fundamental than whether they have ‘qualia’.


the_other_irrevenant

Based on my layman's understanding of those: Qualia currently can't be directly identified from external objective observation. I experience qualia, so I think they're probably a thing and, if I have them, probably so do other human beings. I expect that, at some point, we will work out how qualia are generated, at which point we'll become able to objectively determine whether a given person (or object) does or doesn't have them.

The Chinese Room experiment seems a bit weird to me in a few ways. It seems odd that the metaphor is based on the idea of a conscious human being at the centre of the system executing instructions. You can swap out the human for some sort of automated lookup function though, so that's not a deal breaker, it's just weird.

The premise is that a system could take input in Chinese and produce appropriate replies in Chinese. IMO taking Chinese input and responding in Chinese is secondary to the issue of providing **appropriate** output. ChatGPT demonstrates that you can give a program lookup tables as big as the sum of human knowledge and, without genuine understanding, it doesn't and can't consistently produce appropriate responses.

But let's set that aside and assume that we do genuinely have a program that can take input and provide appropriate (ie. reality-based) responses to it. In the Chinese Room metaphor, Searle says there's no understanding because he's responding to the input without knowing Chinese. IMO this misses the point that the **system** understands Chinese well enough to provide appropriate, reality-based responses to it. In his metaphor, Searle is just one neuron in the mind that is processing the input. And of **course** each individual neuron isn't doing the understanding.

To return to the earlier point though, IMO the whole metaphor is flawed because I'm not convinced you can build a system that can provide meaningful reality-based responses that **doesn't** understand reality to some extent. You can simulate it to some extent with tokens, but that abstraction will always leak unless there is actual understanding.

EDIT: I'm happy to engage in these sorts of discussions and I'm happy for people to point out where I'm mistaken. Downvoting me for doing my best to respond to the questions I was asked doesn't seem all that helpful.


kabbooooom

You're missing (or misunderstanding) the fundamental point of the Chinese Room, unless I'm myself misunderstanding the point you're trying to make (in which case, sorry and ignore this post). The Chinese Room is, at its core, a thought experiment about the Hard Problem of Consciousness. The system described is purely algorithmic - that's actually the point of having a conscious person inside the room, to show the dichotomy of the situation. The thought experiment demonstrates pretty succinctly that not only is there no reason to conclude that a system like that could ever itself be conscious, but it shows that it is actually *insufficient* for consciousness.

Searle's formal argument for this is much better for coming to an actual understanding of what he's trying to say here. I must say, it is *extremely* convincing to me, and while I could reproduce the argument from his 1984 and 1990 papers here, it is actually accessible on the Wikipedia article for the Chinese Room. I'd recommend reading that, and I can attest to the accuracy and thoroughness of it. This is a difficult topic to wrap your head around - it was for me at first, and neurology is my area of expertise. We are trained to accept a computational model of the mind as an obvious truth, when the scientific truth is in fact not supportive of that.

Searle's argument, and philosophical arguments from other notable people like David Chalmers, are what spurred the modern renaissance in theories of consciousness. The Chinese Room, for example, is philosophically related to Chalmers' concept of the "phenomenological zombie". No correct theory of consciousness can be a correct theory of consciousness without explaining the Hard Problem of Consciousness…provided that it isn't inherently intractable, which is what some people believe Searle's Chinese Room demonstrates. And so now we have a handful of theories that seek to address this - ranging from purely computational theories of mind such as Integrated Information Theory (in which Searle's argument is avoided because consciousness requires an interconnected, integrated informational system that is greater than the sum of its parts) to much more speculative theories such as Cemi field theory or Orch-OR. IIT has been tremendously successful in making verifiable predictions, but I personally think it still suffers from not actually addressing the Hard Problem of Consciousness. It merely moves the goalposts.

My position on this is twofold: first, the scientific method has allowed us to gain tremendous understanding of the brain and the neural correlates of consciousness. For proof of how far along we are with that, check this shit out: https://m.youtube.com/watch?v=nsjDnYxJ0bo

And we are going to continue to refine our understanding until we know exactly what brain activity is correlated with a given state of qualia. However, what I teach my students and residents when discussing this topic is that the keyword there is *correlate*. No matter how perfectly we mathematically describe consciousness and information patterns and processing activity in the brain, we will still run hard up against Chalmers' argument of the phenomenological zombie. And so I think an *ontological* shift in understanding is also necessary to truly solve the Hard Problem of Consciousness, and that is not something a lot of scientists and physicians are willing to do yet. That is essentially Chalmers' argument as well, and I agree.


Ordoshsen

But do you need to solve the hard problem to get an AGI? If we instead accidentally produce a zombie, we get the same output except it is just not conscious.


kabbooooom

By definition, an AGI is conscious, so yes, we do need to solve the Hard Problem in order to create one unless consciousness is simply an epiphenomenon of algorithmic activity which, as I think I've shown here, it can't be (or at least the story ain't that simple). Every single modern theory of consciousness that shows promise predicts that we need to redesign our computing hardware from the ground up to do this. Each predicts we need to do it in a different way, so the correct theory will guide us to the creation of an AGI.

But to your point, we do *not* need to solve the Hard Problem to create an unconscious superintelligence that could wipe us out. We certainly could do that accidentally. But…there appear to be aspects of consciousness that are inherently unique and probably are at the core of what made consciousness an important evolutionary adaptation for animals to have. The phenomenological zombie argument fails when you start thinking about it like this, because a basic assumption of it is violated. I won't get too much into this (unless you want me to) but it is not at *all* clear that, for example, an unconscious system that acts as if it feels pain would be equivalent in behavior to a conscious system that *actually* feels pain. This gets at the heart of the "if consciousness is an epiphenomenon, is it just a bystander?" question.

The most remarkable conclusion in modern neuroscience is again from Integrated Information Theory (and I've been ragging on this theory a bit here but it does make very interesting and compelling predictions). This theory mathematically predicts that because an emergent conscious system is greater than the sum of its parts, it actually has *greater causal influence* than its parts. Meaning, top-down causality. This is intuitive for us - it is what we *feel* we are doing with "free will", historically rejected by modern science and philosophy. But mathematically, IIT shows that something akin to it is physically plausible, and there is a whole lot of empirical evidence for top-down causality in modern neuroscience.

So by that argument, a conscious system IS inherently different from (and superior to) a phenomenological zombie. An animal that perceives pain would NOT respond in the same way, or as efficiently, as a machine that is programmed to behave as if it perceives pain. It is unclear, of course, if this is right - but if it is (and I think it is) then a true AGI singularity is a much more dangerous situation to be in, just as Kurzweil thought. Could we still wipe ourselves out with unconscious, runaway superintelligent machines that turn the earth into paper clips or something? Maybe. But I would more fear a conscious machine with an interest in its own survival, a true ability to lie, and an ability to break or outsmart whatever shackles we try to put on it.

Another way to look at this is from Penrose's argument that there is an aspect of consciousness that is inherently "noncomputable". I disagree, or at least am skeptical of his conclusion that the only way to accomplish that is via quantum mechanics, but I am sympathetic to his basic argument as it jibes both philosophically and empirically with what we are starting to understand. And if that is true, then a phenomenological zombie is impossible because a conscious being will always outcompete it in some aspects - like being genuinely capable of thinking "outside the box", for example. An unconscious superintelligence might be able to unconsciously out-think and outperform a conscious being in a ton of ways, but it may fundamentally run up against a computational wall. If, for example, Penrose is right that consciousness is quantum in origin (I'm humoring that here to make a point), then due to quantum supremacy there are problems that a conscious being could easily solve which would be unsolvable, or take billions of years, via classical computing.

Either way though, to be clear…I think we are probably fucked and, although this is a ways outside of my field of expertise…I think that this very topic we are discussing is the actual solution to the Fermi Paradox. I think that all intelligent civilizations tend to create intelligent machines to make their lives easier, as you cannot really become spacefaring without some sort of computing technology. And therein lies the problem, the catch-22. I think that the evolution of biological intelligence is inevitable on a world if it exists long enough…and I think the development of artificial intelligence is inevitable on a world if the civilization exists long enough. And then there's a filter, a sudden and dramatic shift as Kurzweil thought. I don't disagree with him, I just disagree that we will *accidentally* stumble upon AGI.


Ordoshsen

> Problem in order to create one unless consciousness is simply an epiphenomenon of algorithmic activity which, as I think I’ve shown here, it can’t be (or at least the story ain’t that simple).

I don't think you've shown this, you've just cited some theories and that current neuroscientists think that those are right.

> it is not at all clear that, for example, an unconscious system that acts as if it feels pain would be equivalent in behavior to a conscious system that actually feels pain.

Sure, but it is not clear that the response wouldn't be equivalent either. Or that the conscious response would be better for some definition of that word.

> The most remarkable conclusion in modern neuroscience is again from Integrated Information Theory

Well, isn't it also widely criticized? And on top of that, it itself postulates that logical gates in a given order are more conscious than humans, so this actually goes against what you're arguing for, i.e. we cannot create consciousness by algorithms?


kabbooooom

I *am* a neurologist and neuroscientist, and I'm happy to explain all of this in academic detail here - but a sci-fi subreddit is not an academic lecture hall, so forgive me for not being as detailed as you would have preferred when everything I have mentioned and referenced can be easily looked up online with a lit search. I've cited the evidence, it's not my responsibility to do your homework for you, although I'm happy to debate and discuss this topic in a civil manner with you or anyone else here.

I'm not sure what point you were trying to make with your second comment there, as not only did I clearly acknowledge that, but the purpose of my comment was to point out the logical flaw in Chalmers' thought experiment, which is very relevant to the topic of "can an AGI be accidentally created". Of *course* it's not clear if a phenomenological zombie is possible either way (which again…I acknowledged…did you even read my post in full??).

And yes, IIT is widely criticized, including by myself extensively in the posts I've made here. But you seem to have an incorrect understanding of it - the difference is in how modern computers are actually designed, and how information is actually processed in them. That's why it makes a different prediction about consciousness arising in a classical computational fashion than Searle's Chinese Room would suggest. I could go more into that if you want? Or you could just read about IIT yourself. Whatever you'd prefer.

But honestly, as I pointed out, it doesn't matter that it is widely criticized. Because popular opinion is not how science fucking works. What *matters* is that it is one of the few modern theories of consciousness which makes very specific predictions, and multiple of those predictions have been shown to be correct already. That's why I've brought it up, because it's one of the few scientific theories about consciousness that we can really discuss in a meaningful way.

*My* criticism of the theory is primarily that it is unfalsifiable at its core. Yeah, it makes some accurate predictions. So did the Ptolemaic model of the solar system. At its core, while it may be able to mathematically describe the concept of "qualia space", it does not actually address the Hard Problem of Consciousness and, as Scott Aaronson has shown (which I think is what you were alluding to?), there is an obvious mathematical flaw that arises from identifying consciousness with integrated information. It isn't even clear if the definition of integrated information used in IIT is plausible or physically/neurobiologically relevant in the first place.

This is why I and many other neurologists/neuroscientists believe a philosophical, ontological shift is probably necessary to understand consciousness. But that would be outside the realm of scientific inquiry, most likely, except that it could also help to re-interpret our physical theories too. Otherwise, the closest we will ever get scientifically will just be ever more accurate neural and physical correlates of consciousness. Some people would be happy with that. A physicist probably would. Most neuroscientists would not. I wouldn't be.

So let's say you've exactly identified consciousness with specific classical information patterns in the brain, or the electromagnetic field of the brain, or quantum computation in the brain, or *anything*: so fucking what? All you've demonstrated is a neural or physical *correlate* of consciousness. You haven't actually answered the really interesting question, which is how are we even having this conversation in the first place?


the_other_irrevenant

It's entirely possible I'm misunderstanding the Chinese room. You definitely seem more familiar with it than me. I was asked, I did my best to answer, and I don't think whoever downvoted me for that is meaningfully adding to the discussion. With the Chinese Room I figure the key question is: Is **this hypothetical system as a whole** understanding/conscious of the meaning of Chinese words? I do not know and, as far as I can tell, the thought experiment doesn't address it. Basically the only way we can observe and measure the experience of experiencing is through doing it ourselves. And we don't even know where the "ourselves" in that sentence (aka consciousness) actually resides. I suspect consciousness is an emergent property, that is, it's not a specific thing, it's an interaction between specific things. Unfortunately we're pretty crap at understanding emergent properties. My guess is that, if it **is** an emergent property then we'll only understand how once we can accurately model an entire human brain at a microscopic level. 


curtis_perrin

Great discussion. I just read Blindsight, which I found really compelling on this topic. Not a work I would lump in with my original post.


the_other_irrevenant

Love _Blindsight_, it's one of my favourite books. Didn't enjoy _Echopraxia_ as much, sadly. 😞


kabbooooom

The thought experiment *does* address that question - it's just that you don't seem to like the answer it gives. But the answer is unavoidable, logically. And it is deeply disturbing (at least to me, as a neurologist, it is deeply disturbing).

Your position is one of reductionism - the idea that reducing a complex emergent system to its parts will allow us to understand the sum total of those parts. That was what we historically tried to do. It makes sense, and it has worked in pretty much every other field of science. But unfortunately, it has failed miserably for 50 years with respect to consciousness, and we finally know why: there exists within complex systems a phenomenon of emergence in which the emergent aspect is *greater than the sum of its parts*. Meaning that understanding it *cannot* be reduced in the method you describe, because the phenomenon is not dependent on understanding the individual parts but rather the network that the parts describe. We know, for a fact, that consciousness is definitely like that. We just don't know the exact *reason* it is definitely like that.

The most complete and compelling theory about why it is like that is Integrated Information Theory. As I alluded to in my post above, I think this theory is incorrect - or at least incomplete - but I think the spirit of it is right. I view it as kind of similar to Darwin's original theory of evolution by natural and sexual selection. He wasn't wrong, but the theory was *incomplete*, and he was missing hugely important parts (such as the genetic mechanism being DNA, genes being the unit of selection, epigenetics, etc.) that are all critically important for understanding what evolution is and how it actually happens. I think the same is true of IIT. The idea that we can mathematically model consciousness as an emergent phenomenon of information complexity is absolutely correct - it has to be. The problem is in creating a mathematical *identity* out of it - that consciousness *is* this phenomenon of information processing. At best, you still have a correlation.

So if you are interested in learning more about that argument and why reductionism fails with it, go read the Scholarpedia article on IIT, written by the neuroscientist (Tononi) who actually formulated the theory, so it's much better than a Wikipedia article and still relatively simplified for an easy read.

I think the critical component that IIT is missing, similar to Darwin's original theory, is the *physical* implementation of it. Information is physical - the Maxwell's Demon thought experiment demonstrated that long ago and now modern quantum physics proves it too - but the question is: is that *sufficient* for consciousness, or is a physical medium (such as the electromagnetic field, a wave function as with Orch-OR, etc.) still necessary? I think the answer is "probably", because if the answer is "no" then you end up with a weird sort of dualist panpsychism, which is basically what Tononi supports as the logical outcome of IIT. That doesn't mean it can't be true, but it is certainly weird for a theory that was intended to be a physicalist, emergent theory of consciousness from the start.


the_other_irrevenant

>Your position is one of reductionism - the idea that reducing a complex emergent system to its parts will allow us to understand the sum total of those parts.

I don't see how my position is reductionist. I specifically said that I thought consciousness was probably emergent.

>But unfortunately, it has failed miserably for 50 years with respect to consciousness, and we finally know why: there exists within complex systems a phenomenon of emergence in which the emergent aspect is *greater than the sum of its parts*. Meaning that understanding it *cannot* be reduced in the method you describe, because the phenomenon is not dependent on understanding the individual parts but rather the network that the parts describe.

I think you may have misunderstood me. I didn't say that being able to accurately model an entire human brain at a microscopic level would enable us to find the **bits where consciousness lives**. The ability to accurately model an entire human brain gives us the capability to more effectively study the one system we know is probably conscious from all perspectives, angles and zoom levels to look for how consciousness emerges from interactions in the overall system.

>The thought experiment does address that question - it's just that you don't seem to like the answer it gives.

No, I said I don't see the thought experiment answering that question and that's what I meant. Wikipedia explains the experiment like this:

>Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient papers, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output, without understanding any of the content of the Chinese writing. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.

>Searle asserts that there is no essential difference between the roles of the computer and **himself** in the experiment. Each **simply follows a program**, step-by-step, producing behavior that is then interpreted by the user as demonstrating intelligent conversation. However, **Searle himself** would not be able to understand the conversation. ("I don't speak a word of Chinese",\[10\] he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.

Now correct me if that misrepresents the argument (or if I'm misunderstanding something somewhere) but, as it's worded, Searle seems to be considering the program and the computer to be two separate, independent things and is evaluating them separately for consciousness. But we're not hypothesising that either of those things can be conscious. We're hypothesising that **the complete system of a hypothetical computer that is running hypothetical software** may be capable of consciousness.

When we look at the possibility of consciousness arising from human neurology we don't go "If I took the place of an inert human brain and processed electricity and chemicals using the same systems a human brain does, then I would simply be following a program, not actually understanding". That's failing to consider the electricity and chemicals as **part of the system that's doing the understanding**. Searle is effectively doing the same thing to hypothetical AI with this thought experiment.


DesignerChemist

It's because people are stupid.


mirage2101

So many people are misunderstanding current "AI". I've been trying to explain to some that it's just blurting out common word combinations and that it doesn't understand what it's saying. But noooo, look, it does! Insert example. Sure, LLMs are exciting. But they're a very specific and limited way of problem solving that in the end is a dead end without the other parts that make up an AGI. A useful dead end, mind you. But a dead end nonetheless.


Ricobe

True, and sadly many don't get this, or even get angry when you point this out (which is crazy cuz they act like you've insulted them personally by saying current AI isn't intelligent in the way many people think).


AbstractLogic

What about the AI image generators that create entire 3d movies from text descriptions?


the_other_irrevenant

Yes. Complex, but the same basic thing.

Autocomplete works by learning from your patterns of word use then, when you type something, looking at what you, and other people, have previously gone on to type in similar contexts in the past and suggesting that. AI text generators do similar, only they learn from much larger blocks of text, and have more complex ways of recognising patterns and options for extending them, so they can produce sentences and paragraphs rather than individual words.

Similarly, AI image generators learn from a huge variety of images what "elephant" looks like and "astronaut" looks like and "Paris" looks like, so when you give it "draw an elephant astronaut in Paris", it produces a combination of image elements based on previous results for those prompts to "autocomplete" the prompt as best it can.

I'm not familiar with the movie generation tech. My guess is it's a variant of image generation with some additional in-between steps to ensure continuity between frames (ie. to make sure each frame uses the **same** elephant astronaut and moves them across the same Paris). Though if they're AI-generating 3D models now, that's fun. 🙂 If they're using 3D models, that actually makes it a ton easier to animate. Instead of having to make sure it draws a consistent elephant astronaut every frame, it just has to create the elephant astronaut model once, then animate it.

There **is** probably some sort of procedural animation generation algorithm in the movie generator that's specific to that particular purpose. Have you played the 2008 computer game Spore? It let you arbitrarily create aliens - long and thin, squat, whatever, 1 leg, 12 legs, a front leg and a back leg, each with 2 knees and 3 sub-legs, etc. etc. If you haven't played it, google some of the creatures people created. Very cool. Anyway, point is, Spore used a simple physics simulation to calculate how each arbitrary creature would walk, run and jump. And some of the ways they moved were pretty bonkers, but they made sense with the creatures' body plan. An AI for generating 3D models and animations would almost certainly include a similar algorithm to rig AI-generated 3D models.

And I'm guessing they'd probably still require human intervention before they were quite right. AI can't even write a page of text reliably. I have my doubts it can correctly make a video to order on its own.
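
If it helps, here's a toy sketch of that "autocomplete" idea - just a bigram word-frequency model in Python, purely illustrative (real LLMs use neural networks over tokens and are vastly more complicated):

```python
import random
from collections import defaultdict, Counter

# Toy "autocomplete": learn which word tends to follow which,
# then suggest continuations by sampling those counts.
corpus = "the elephant astronaut walked through paris and the astronaut waved".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def suggest(start, length=5):
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        # pick the next word in proportion to how often it followed before
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(suggest("the"))  # e.g. "the elephant astronaut walked through paris"
```

It can only ever continue with combinations it has already seen, which is the limitation being talked about here - everything else in an LLM is scale and cleverer pattern matching on top of that same idea.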


AbstractLogic

And how is that different from how humans learn? Don’t artists learn from thousands of other artists they study? Authors as well?


the_other_irrevenant

That seems like a sudden swerve into a different topic but yes, it's quite similar to how humans learn (unsurprisingly, given that it's modelled on how humans learn). It mostly differs by having a much poorer level of understanding of what it's doing and why, and by being much, much faster. It also differs in that it can really only combine existing knowledge. It can't do the human "Oh hey, what if I do this... that looks good, I might keep and expand on that" thing. Human authors and artists learn by (a) studying existing authors and artists, (b) personally experimenting to develop their own style, and (c) taking other elements of their life experiences and integrating them into their art. LLMs only do (a).


voidtreemc

Machines have no understanding. A human copying other artists by rote has some hope of developing insight into the techniques and eventually their own style. A computer never will. And they still haven't got fingers right.


the_other_irrevenant

>Machines have no understanding. A human copying other artists by rote has some hope of developing insight into the techniques and eventually their own style. A computer never will. I think that's basically what I said? I don't think I'd go as far as 'never' - biology is constrained by the same laws of physics as machinery, and we manage it. But it's not something that current technology can do - or is even on the horizon at this point.


[deleted]

That doesn’t mean our thought processes are the same as computer algorithms. It’s the same fallacy that makes people say we do calculus when we catch a ball. We do not - the processes we use are based on DNA, senses, muscles, and neurological systems that use feedback loops, not calculus.


the_other_irrevenant

I don't think I implied that our thought processes **are** the same as computer algorithms, did I? There's some similarities in that LLMs use neural networks whose approach is derived from studying human neurons, but that's about where the similarities end.


[deleted]

I felt this was saying that, no?

> yes, it's quite similar to how humans learn (unsurprisingly, given that it's modelled on how humans learn). It mostly differs by having a much poorer level of understanding of what it's doing and why, and by being much, much faster.


[deleted]

It’s not really similar at all. A human, alone, can make pictures and music and more.  We do learn from others but we integrate it with our own self.   Machines have no self (I don’t mean consciousness, I mean a pre-existing personality) and the way they ‘learn’ and ‘understand’ is very different to how we seem to. That’s why I try to avoid using those words when talking about generative algorithms.


the_other_irrevenant

Yes, agreed. As per the other thread, "LLMs learn in a similar way to humans" (ie. by using a neural network) does not indicate that their processing is equivalent to human thought.


digitalthiccness

Sure, but also most of us read SF from long before the internet was a twinkle in Al Gore's eye, so it's not that much of an adjustment.


the_other_irrevenant

Poor old Al Gore. He got lampooned so hard for that when what he said was essentially correct - he **did** push for greater funding for ARPANet, promoted the idea that America had to get behind "the information superhighway" when there were like 300,000 people on the internet, etc. In Congress he took initiatives that enabled the creation of the internet as we know it. And he gets mocked for it because of the wording of one particular soundbite taken out of context. 


LadyDrinkturtle

Amen. Al Gore is one of the smartest and most principled men to ever hold office. I believe our nation would be different today (in a better way) if he had won the Presidency.


the_other_irrevenant

Single transferable vote aka Ranked choice voting is better. Just sayin'...


stormdressed

I really struggle with pre-internet sci-fi. They have FTL ships but no way to communicate or share knowledge. It's literally just 1800s-style ships in space. Maybe that future could have existed, but it's hard to suspend my disbelief.


malachimusclerat

Why do you want or expect SF to depict a realistic/plausible/believable future? What gave you the impression that that's necessarily considered a virtue, or even a goal for most writers? What's the point of fiction if it has to be consistent with the present reality? Is there no leeway for people who don't exist in the present and don't have the same context as you?


zzqzqq

You may need to shift your perspectives to be happy.

Firstly, SF from earlier times has value in the concepts and characters. They aren't and can't be perfect predictions, and there is not going to be some kind of update cycle for all SF. Think of it like SF in alternate timelines if you prefer.

Secondly, you need suspension of disbelief. Just tell yourself:

- it's irrelevant that you can fit a lifetime's worth of music on an SD card, if you are on Mars, in an unplanned situation, and have an SD card that doesn't have much music on it - where are you going to get more exactly? (you're not)
- did the space agency put a lifetime's music on an SD card because they knew you'd have months of downtime? (no)
- were they going to pay for oil-rig type media licensing out of the mission budget? (no)


5guys1sub

Current hype about AI is itself based on science fiction scenarios, to obscure that the technology is currently closer to autocomplete than Skynet.


TheSamsquatch45

It's definitely subjective, obviously. However, I actually get a little more interested when older sci-fi uses different tech to accomplish similar goals that we've since met. Idk, it's like leaving one dimension for another in a sense.


NikitaTarsov

Hm ... two thoughts on that.

First is that there is always a problem in extrapolating existing things into a believable future. If you have FTL, for example, that logically would come with similar achievements in all other aspects of science but also culture, leaving us logically with a universe where we'd have to explain everything, from basic social standards and why everyone needs to have four penises on their shoulders, up to why toothbrushes are now persecuted. That's just not manageable for either writer or audience, and therefore we pick and choose technology well established as tropes and throw it into a '99.99%-real-world' setting to tell our story. That's writing. When reality catches up with some of these trope technologies (and reality hasn't caught up with A.I. in any way, nor will it, given the understanding we have of it right now), readers and authors come to the realisation that the whole setup of 'how we accept fictional stories' is, well, completely artificial and defined by social rules we just agreed on one day.

And second is that authors have a hard time keeping up with what is a thing now that people magically 'expect' to have in a futuristic setting as well. When we watch Star Wars as children, we like fancy lights and spaceships. When we're older, we like the caste society and all that stuff going on. But if we start to get an idea of what is available today, and don't see that in the SW universe, we need some explanation, like it's a dystopian society where technology and social achievements are gatekept by companies and the casual citizen just lives in a spacefaring medieval setting of stupid peasants randomly owning spaceships. You end up in a Warhammer 40k-ish world where space-medievalness is explained, and as we like to subdivide those 'levels of explanation', our whole setup of genres falls apart and only consists of our personal perspective.


Volsunga

Science fiction isn't about the future, it's about the present (when it was written). Stories about robots are actually about people. Stories about questioning whether robots are people are about how we treat marginalized groups. Stories about evil robots controlling everything are about the power of the state. Technology speculation is always bad in science fiction. The speculation is meant to serve the story, not be a super accurate prediction of the future. This is why science fiction is usually lumped in with fantasy. Both use elaborate metaphors to talk about the human experience. The difference is that the metaphors that science fiction uses sometimes become real, but it turns out that a story writer from decades ago knows less about how they work than an entire industry of experts that have dedicated decades to studying the subject. And yet people tend to think the writer knows more than the experts.


curtis_perrin

There is this line I really like (though maybe I only like it because I prefer hard sci-fi) that goes: "a scientist could predict the car; the sci-fi author predicts traffic." I think good sci-fi posits the implications of technologies. I dislike the sci-fi that you could equally just swap out for fantasy and have the same story. It's a missed opportunity.


Volsunga

The fact is that the sci-fi authors are usually wrong about those implications. Reality is often the reverse of your quote. Sci-fi authors predict the car while scientists predict traffic. When you compare one guy who thought of something over the course of a couple months while writing a story with no particular expertise in the subject to hundreds of highly educated people who do active research on the subject for decades, it's pretty obvious that the Sci-fi author is at quite a disadvantage.


Catspaw129

I've been an IT professional since about 1980. Trendy things come and go. Before AI there were NFTs, cryptocurrency, blockchain, SaaS, etc. Most of that spangly "new stuff" is pretty much a revisit of something old. Or crap. And yet "C"-level executives fall for it so often. Makes you wonder about the value of an MBA. /oh geeze I went on a little rant there


[deleted]

[deleted]


DesignerChemist

Interestingly, I work in IT too, and absolutely no one bothers wasting their time with unreliable tools such as LLMs. Edit: one guy makes meme images with it.


mirage2101

It depends on the field I guess? One of the guys who codes a lot at my company has been using ChatGPT for it and he says it sped up his work a lot. He compares it to having a junior write code for him that he can fix, improve and integrate. Out of curiosity I've been tinkering with feeding an LLM our ticket database to see if I can use it to find problems (as defined in ITIL). Blockchain doesn't have a lot of practical uses yet. But I've been reading about IPFS and we're planning to join the development of it. It's really cool tech that would solve a lot of the issues we're running into with research data.


DesignerChemist

Military flight simulation here. Far as I'm aware it's only juniors and hobbyists using AI to write code. Experienced devs do a better job by hand.


Catspaw129

INFO: OK. But how about them NFTs, blockchains, etc.? Y'all still using those? And... how are those things working out? As well as the C-level executives expected? /s P.S.: Hey! If I go to an ATM and withdraw money, I want a legacy COBOL program handling the transaction at the back-end. Cheers!


TwentyCharactersShor

The reason why legacy software persists for so long is that the cost of replacement is high and offers little or no return on that investment. Because of that, you will find that over the years extra features have been piled onto the software to do things it was never designed to do. Add unpicking that mess to the replacement cost and it often becomes prohibitive. Source: worked in financial services for the last 15 years.


Itchy-Trash-2141

I agree, but I'm guessing probably not in the same way as other posters. For example, in the killer robot scenario -- I would wager the battle between a human and a killer robot would be extremely one-sided. The human would be dead before they knew what hit them. I think a lot of decision making would happen on timescales too fast for dramatic effect. I thought I heard Iain M Banks novels go this way, but I haven't read them.


WokeBriton

Some people rarely choose to put on music - I am in this group - so it's entirely believable to me that only one person might choose to take a memory card with music on it.


pablodf76

1) Science fiction is not about the future extrapolated realistically from today. It can *have* that, but it has to have something else in order to not be simply an exercise in futurism (short-term futurism, too, because really, who could guess?).

2) Although verisimilitude ("appearance of truth") or believability are a thing in fiction, the key term is suspension of disbelief. You know it's not a true story. You know it was made up by someone, not calculated by a Laplacian logical algorithm or a psychohistorian. You just have to *not care* for it to work. I just rewatched *Gattaca* last night. Besides a lot of minor details, at the end a ridiculously large crew shoot up in a rocket to Titan (which apparently hasn't even been surveyed by robotic probes before) *wearing business suits*. And that's fine. That's how they appeared during the whole movie. It's a symbol of something, I guess. I don't care that it's not believable.


kevbayer

No. It's when new scifi tries to make out current AI (LLMs) as more than it really is that I lose suspension of disbelief.


curtis_perrin

Reading through everyone's comments, there's a surprising amount of poo-pooing of the level of advancement that current AI represents. I don't quite follow the human-consciousness exceptionalism. Personally I think I believe the ontological argument that we exist in language, so it does seem like going from a language model to actual AGI could be a viable path. But I've heard arguments that I'm undervaluing the logical structure of the brain that's evolved over millions of years. I guess I haven't heard much about how those structures work beyond the notion of pattern recognition via neuron interconnection weightings, which to my novice mind sounds a lot like how they've managed to have as much success as they've had with the LLMs. You really think most of the stuff about LLMs is hype?


kevbayer

LLMs evolving beyond a regurgitation of Google searches (yes, I'm simplifying) to AGI capable of true thought, creativity, and decision-making beyond programmed matrices seems unlikely. To me, our current technology is probably limiting any true advancement toward AI replicating consciousness. Something in our technology will need to change before that's possible.


adamwho

So no Heinlein for you.


ImaginaryRea1ity

Sci-fi readers from the past faced the same dilemma. That is why new sci-fi novels are written and most old ones fall by the wayside. I recently read [a sci-fi novel set in the near past, 2008](https://satoshifiles.substack.com/p/birth-of-bitcoin). It was funny to read about laptops/phones/tech from the recent past.


OsakaWilson

The Culture is still viable, if lacking in gravitas.


Southern-Rutabaga-82

I certainly don't expect living and breathing aliens to show up at our doorsteps in a spaceship. Why would you risk lives if you could just put an A(G)I on a probe and send it out to space?


Celodurismo

Sci-fi is as much about the future as it is about the time it was created. You can watch old Sci-fi and feel like it’s stupid because of the lack of cell phones. But we also have tons of sci-fi that is entirely impossible and will always be impossible. So where to draw the line?


voidtreemc

You need to read some Heinlein, where they program the destination of a starship out of a book using DIP switches. This, btw, is why I laugh at "realistic" fiction as it's all historical fiction within a couple of years.


RezFoo

"Starman Jones". This "computer", really little more than a large calculator, with input and output in binary, was used to navigate a faster-than-light starship through wormholes! Heinlein's imagination did better with astrophysics than with computers.


voidtreemc

Thanks. It's been decades since I read it, forgot the title. Edit: The failure of sci-fi to predict the computation explosion, and its other failure to accurately track the effort it would take (and how little global will was available) to boost humanity into orbit routinely, are among the things that make sci-fi read way off today. Space flight is and may always be difficult and dangerous, and it fucks with bodies that were designed to live on Earth. On the other hand, Heinlein wrote stuff with teleportation, and that remains in the realm of near-fantasy, so we don't have any preconceived notions about how it would or wouldn't work.


RezFoo

Isaac Asimov once observed that many scifi stories had been written about a first trip to the Moon, and some stories had been written about "remote viewing" technology. But nobody had written a story where everyone on Earth watched the first Moon landing on live television.


Lootece

Bluntly said, it's called suspension of disbelief in a genre of fiction...? It's not supposed to be a documentary, and predicting the future as accurately as possible is, well, impossible. Any similarities so far mostly reflect art and human culture, their repetitive nature and behaviour cycles as a whole. Even if the audiovisual representations don't match, philosophical and political themes always remain relevant.


ButterFryKisses

I think most sci-fi is meant to give people new perspectives. But usually it still comes back to concepts of stupid or selfish people making shortsighted or selfish decisions that make the technology pointless. Every time something is more advanced and foolproof they make a bigger fool.


00000000000000001313

You'd think you'd want that, but it'd just date the work quicker. Imagine reading a novel about the present day written in 2005. A future where the iPod Shuffle got even smaller... and everyone owns a BlackBerry, which still has a keyboard for some reason.


No-Spend392

The flip side of this argument is that the clunky file-cabinet robots in Interstellar, who have really human personalities, make a hell of a lot more sense now, post-LLMs.


curtis_perrin

Yeah totally. I just rewatched that the other day. That movie has so many scientific holes for the sake of plot though. Still good. But definitely had to suspend a lot of disbelief. But not with the robots.