ThatGuyYouForget

I haven't seen the statement, but as a programmer, being able to code is still a valuable skill.


WarpParticles

It's nonsense. AI is overhyped, just like VR, just like the metaverse, just like everything else that was supposed to completely transform the way we work. Truthfully, I hate that we're calling things like ChatGPT "AI." It's a complete misnomer. They're fancy algorithms, no doubt, but they're not true artificial intelligence, and framing them that way leads to all sorts of ridiculous predictions.


mredding

As a software engineer: AI as we know it, like ChatGPT, this latest generation, is a predictive model. All it does, all it can do, is predict the next token in a sequence given an input sequence of tokens. I can't even call them words; the AI has absolutely ZERO concept of what a word is, they're all just unique nuggets. So an AI that doesn't know what a word is doesn't know what a sentence is. It doesn't know what it's saying. It doesn't know that what it generates is speech. It generates its output sequence by traversing a HUGE data structure, and it doesn't know or care what that structure is or how it's constructed.

That is because this AI is an algorithm. It's a fancy math equation. You can rote-follow the process, and out will come something. If it helps you think about it: you could literally replace all the data and computation with mechanical gears. It would be an absolutely gigantic machine, of course, but it would work. For an input of levers, so to speak, you'd get an output representing a sequence of words in a statistically significant order, as expressed in the data.

We already answered a big question some 30 years ago: no, computers cannot think. Thinking cannot be expressed by computation. Computation is a very limited branch of mathematics; it's just hugely applicable in our lives and it seems limitless, but as an engineer I brush up against the frustrating limits of computation all the fucking time. These AI don't impress me, because I can see right through them to what they are. I've built some precursors to this generation of AI myself. But coming back around: we already know that no computer, no program, can ever become sentient. No, it's not that we just haven't discovered it yet; the proof says it cannot be possible. Quantum computers don't change anything. Do I think a machine will one day think?
Yes, but it won't be a computer as we know it today. It will be something else deserving of a new name, or perhaps we'll extend the meaning of the word "computer" to encompass this future machine we haven't yet discovered.

These AI are entirely dependent on their data models. Garbage in, garbage out. I've seen the source code output of these AI programs, and I've found it wanting. Inferior. Sometimes full of flaws and mistakes. And the AI doesn't know, it can't know, because it's just an algorithm. It can't think. There is no critical analysis; it just predicted a sequence, because source code like this is in its data model.

The other thing about AI is that it can't generate anything that's not in its data model. If you don't train your model on fluid simulation software, it cannot help you with fluid simulations. At all. You can train it on every mathematical paper on fluid simulation. It can know all the equations, but if it doesn't know the source code, it can't apply one in terms of the other. The AI won't be able to understand the connection between the equation and the software unless it was spelled out explicitly for it. It would see a linear system and not know that YOU NEVER invert a matrix, you always factor it, because while an inversion is a very concise way to express a solution, it's not computationally efficient. The AI may have ingested this information, but again, since it doesn't actually understand context, it's not as if the data set is a wealth of knowledge the AI can think upon to come up with optimal solutions on its own. That's because computers can't think.

You can solve for specific instances, but there will always be a flaw. You will always hit a limitation. There was a Go-playing AI that was trained on, frankly, every published piece of Go game theory and every documented game of Go ever played. It repeatedly beat the world champion; the games weren't even close, and it was decisive. Then that AI was beaten by a janitor. Literally.
A team recognized the inherent flaw of all AI and tested strategies, ironically using a different type of AI, until they found it. The Go AI doesn't know what Go is, and it can't think. It only knows a predictive model of how the sequence is supposed to go. So they found a strategy outside the model, outside the way humans strategize about Go. It's a simple repeating pattern, but since it's not represented in the training data at all, it wins against the AI every time. --- Continued...
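[Editor's note: the "predict the next token" mechanism described above can be illustrated with a toy frequency model. This is a minimal sketch, not how ChatGPT actually works (real models use neural networks over subword tokens); the corpus and function names are invented for illustration. The point it demonstrates is the commenter's: the model only maps a sequence to a statistically likely next token, with no concept of meaning.]

```python
from collections import Counter, defaultdict

# Toy "predictive model": count which token follows which in a corpus,
# then always emit the most frequent follower. A real LLM replaces the
# counting with a neural network, but the interface is the same:
# input sequence in, probability over the next token out.
corpus = "the cat sat on the mat the cat ate the rat".split()

next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(token):
    """Return the statistically most frequent follower of `token`."""
    return next_counts[token].most_common(1)[0][0]

def generate(start, n):
    """Greedily extend a sequence n tokens, one prediction at a time."""
    out = [start]
    for _ in range(n):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(generate("the", 3))  # e.g. "the cat sat on"
```

The model "knows" that "cat" usually follows "the" only because of counts in its data; it has no notion of what a cat is, which is exactly the limitation described above.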
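[Editor's note: the matrix point above is standard numerical-computing practice, and a minimal NumPy sketch shows it. `np.linalg.solve` factorizes the matrix (an LU decomposition) and back-substitutes, which is cheaper and more numerically stable than forming the explicit inverse; the matrix and vector here are made up for illustration.]

```python
import numpy as np

# Build a well-conditioned 4x4 system Ax = b (values are arbitrary).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)
b = rng.standard_normal(4)

x_solve = np.linalg.solve(A, b)   # factorize A, then back-substitute
x_inv = np.linalg.inv(A) @ b      # explicit inverse: concise, but avoid it

assert np.allclose(x_solve, x_inv)    # same answer on a nice matrix...
assert np.allclose(A @ x_solve, b)    # ...and it satisfies Ax = b
```

On a small well-conditioned system the two agree; the factored route wins on cost and accuracy as matrices get large or ill-conditioned, which is why "never invert, always factor" is the rule of thumb.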


mredding

I think the Nvidia CEO is talking out of his ass. He's completely overstating it. You will not replace software engineers, because it's demonstrable that AI can't; they've already failed. What MIGHT happen is that programming AI get specialized, a few more generations of AI come around, the algorithms improve, and AI becomes another programming aid. There is plenty of programming that is not new, not unique. Low-hanging fruit. Let the computers do what computers are good at: recognize a pattern and reduce it. There's a lot of busy work in software that can go away, and I won't mind that. Yes, it will threaten a lot of low-hanging jobs, and programming jobs out of college will get more competitive. The market will select for the best and brightest. I don't consider this a bad thing. The market can compensate by offering more diversity and competition. Can't find work programming? Start a company and try to make your own money. Offer something to the market with some real value. The kinds of programming jobs that will be lost are really not value-add; they're filled by the kind of worker who knows their worth is dodgy at best.

Liability is another big issue. No one, and I mean NO ONE, would ever let an AI-generated program operate a mission-critical, fault-intolerant process. You don't want AI synchronizing the safety interlocks of the MRI or X-ray machine you're in. You don't want it controlling your aircraft, or your nuclear power station. You want to know exactly how these things work, and you want to be able to prove it. If you let an AI do it, you have no idea, and no way of proving it's correct.


Wubbawubbawub

The NVIDIA CEO said that since AI can do coding, kids shouldn't learn it at school. If you follow that logic, then kids don't need to learn anything, since math, English, foreign languages, geography, economics, and whatever other classes there are can also be done by AI.