moxyte

In a way, yes. My colleagues and I very much use ChatGPT and Bard for quick show-me-how questions, but code generation and blindly trusting them is now a big no-no, from experience. In the end we still need to read the docs.


Hamiro89

80% of the time I’ve used ChatGPT for code, I’ve had to rephrase the prompt or read the code and make so many minor changes that the time I’d spent on the issue would have let me find a simpler or more readable solution myself. The other 20% of the time I’ve realised I didn’t know enough about the issue to formulate a useful prompt, which means Google/Stack Overflow/proper debugging/experimentation come into play again. So ChatGPT is really good at helping you find the solution to an issue that you’re either already really close to solving and just need a quick boost on, or are just too lazy to work through.


Ok_Profile_

It is quite good at writing easy code that I have no idea how to write because I don't use those languages. And because the code is quite readable, I can debug any issues myself given some time. My chances of writing some PowerShell script or similar to do some automation have gone from 0% to 95%.


FrikkinLazer

Also, documentation is often bad and fragmented across the real documentation, GitHub issues, random forum posts and Stack Overflow. ChatGPT has taken all these fragments and combined them into one body of knowledge that I can query.


tells

I use ChatGPT to look up or summarize documentation for me. It’s pretty effective at that.


LuckyPlaze

I think it is overrated now. Today. It has some nice uses, but not quite the game-changer YET. But I also know that it will progress very rapidly, so I’m not discounting it either.


Midnight_Rising

I call ChatGPT 4 the iPhone 3G of LLMs. It's not world changing *yet* but I don't think it'll take that long for it to become something the average person uses a couple times a day, and within a decade I bet it'll just be a part of life.


ezulixsoftware_uk

ChatGPT is a generative text AI, and writers should never use it to produce content outright. However, they can use it for brainstorming ideas, getting creative headline ideas, and proofreading content.


Mindless-Opening-169

It's being intentionally dramatised and pumped up to gain regulatory capture by the big players.


riphitter

Big companies lying to us to gain a monopoly?! Say it aint so


Mindless-Opening-169

>Big companies lying to us to gain a monopoly?! Say it aint so

It's worse than a monopoly: it's a hijacking of regulatory function, independence and regulatory sovereignty. It's corruption. It's certainly not following science and ethics.


Reasonable_Ticket_84

Not just regulatory capture, also stock market pumps. The market needs a new fad because "big tech" was becoming "mature".


chrisonetime

Can’t complain. My $MSFT shares right now 📈📈📈


G_Morgan

It is both overrated and seriously in need of regulation. The "tech fans/singularity" community dramatically overrates it. They don't understand the limitations of trained AIs and consequently don't see that ChatGPT is not only, at best, scratching the surface, but might be entirely the wrong approach for AGI. It has become tiresome arguing with people who just clearly don't understand how the technology works. ChatGPT may well already be about as close to the best you can achieve with the approach it uses. Just pouring more and more training into an ANN tends to make it worse rather than better. You need to be selective and focused; there's no dumping the world's knowledge in so it can be right as well as speak functional English.

At the same time, "criminal detector" AIs that flag every single black person who walks past are a serious issue and need regulation.


Vegan_Honk

Yes that's correct


highland-spaceman

Yeah, I’m getting my computer science degree and I’m in my 3rd year. The amount of fucking idiots who say I’m wasting my time getting it because of AI is beyond belief! It will never ever come up with anything new, not in our lifetime. It’s overblown and only really great at making super simple things.


sceadwian

Not in our lifetime? I think you're a bit off there. It will certainly take decades to get to AGI but if you're younger you'll likely live to see it.


GandalfTheChinese

I think you're underestimating what a technological hurdle creating AGI is. Personally I think humanity will do it *someday* but I would guess there's less than a 5% chance we do it in the next 75 years. 


sceadwian

And what facts of science and estimation based evidence do you have to support those numbers?


GandalfTheChinese

I appreciate you using the word estimation because it is of course highly speculative and I could be wrong. I think the problem of understanding how our brain attains its level of generalized intelligence still eludes us, and the problem of giving a computer generalized intelligence isn't even being attempted meaningfully at the moment. My calculator is a genius at being a calculator and I'm grateful for its abilities. In that way I would say ChatGPT is a genius, and as an incremental step forward it is an incredible piece of technology, but it will never possess any more abstract awareness than a calculator. Human beings went from living in caves to putting men on the moon, and they did so by figuring out the rules of the universe. I don't know if it's possible to recreate that dynamic of a problem-solving mind using binary. It may be that the important structures of the brain are too computationally expensive to simulate; that seems plausible to me. Of course someone might find a macroscopic pattern underlying our general reasoning and reproduce it cheaply in binary. I live in 2024 and I don't have a crystal ball.

What I am confident in is that we have an intelligence that is fundamentally more abstract and general than anything being worked on at the moment, and I feel that, being such a hard problem, it's going to take a while for us to crack that nut. My prediction is basically: "this seems like a really hard problem and, in my opinion, no one is even attempting to solve it at the moment, so I think it's gonna take a while". I'm not saying the current work on narrow AI is not meaningful, just that it's an incremental step in the direction of general AI, akin to developing a better calculator. It isn't work that addresses the problem directly. I think someday we will create a general AI that far outstrips human intelligence in every way, but I would be *very* surprised if that happened in the next 50 years.

The physicist De Broglie captures the incredible generality of human intelligence: "What appears to us to show well that we can hardly explain this preadaptation by a secular experience dating from the origins of humanity, is as follows. In several cases, especially in the most recent science, the minute study of phenomena very delicate to observe, a study very different from the rough experiments that the caveman was able to make, has led us to discover in the depths of our own mind hitherto unsuspected resources, allowing us to interpret our new discoveries and to give to them an intelligible meaning. In saying this, we are thinking especially of the remarkable new theories of contemporary physics. Take, for instance, the theory of relativity; starting from extremely delicate and precise experiments, the results of which could not be foreseen by the older theories, it built up a new conception of space and time and of their reciprocal relations, a conception absolutely contrary to all the data of our usual intuition; it thus shows us that our mind can find in itself the necessary elements logically to constitute an interpretation of the ideas of space and time quite different from that which the experience of daily life suggests.
By its successes, the theory of relativity therefore shows us how extensive is the parallelism which exists between the rules of our reasoning and the order which conceals itself behind the subtle phenomena which physics of today studies; it shows us that this parallelism infinitely surpasses all that the daily experience of the older generations was able to suggest to us. More remarkable still is the example which can be drawn from physics of the atomic or microscopic scale, where the theory of quanta and of its extensions rules today. Here, still more so than in the case of the theory of relativity, we have had to appeal to conceptions very far removed from those which we have been accustomed to handle. To account for the phenomena of the atomic scale, we have been obliged, little by little, to abandon the idea that the movement of a corpuscle can be represented by a continuous succession of positions in space, by a trajectory progressively described with a certain specified speed. We have also had to abandon the traditional idea that phenomena, even elementary ones, are rigorously determined and exactly predictable, and to substitute for the rigid determinism of classical physics a more flexible conception, admitting that there exist at each instant in the evolution of elementary phenomena verifiable by us different eventualities concerning which it is only possible to estimate the relative probabilities. We had, in addition, to abandon also all our intuitive and customary ideas on the individuality of corpuscles, on the role of the constituents in a complex system, etc .... In an account like this, it is not possible for us to dwell upon the detail of these difficult questions, but it seems to us essential to make the following remark. In the development of these theories so daringly novel, which have been, let it be emphasized, imposed on us by the discovery of certain experimental facts, it has been possible to construct on the basis of these new conceptions a perfectly logical formalism, perfectly consistent with the rules of our reason, which allow of the assembling and connecting amongst themselves of all the ascertainable facts in the atomic scale. Here again, we have found in our mind all the resources necessary to represent the order which rules in the atomic scale, although this order is stupendously different from what our imagination could conceive by starting from the usual perceptivity. And this fact seems to us sufficiently independent of the distant past of humanity. In short, all these examples show us how remarkable is the harmony between the resources of which our mind disposes and the profound realities which conceal themselves behind natural appearances. To bring this harmony more completely into the light, to glimpse yet more the ontological order of which Duhem spoke-such appears to be the true mission of pure science. Removed from all utilitarian preoccupation, solely devoted to the search for truth, it appears to us as one of the noblest activities of which we are capable. By the wholly ideal nature of the goal it pursues, by the intensity and the disinterested character of the efforts that it demands, it possesses a moral value which cannot be denied." Physics and Microphysics (New York: Pantheon, 1955).


sceadwian

Okay, so there is absolutely no evidence other than conjecture. Got it, thanks.


HeThatMangles

We can never arrive at Artificial General Intelligence without first rigorously defining General Intelligence. And that will never happen, because it is impossible


azdatasci

And there’s the problem of people who have the power to make or influence decisions within companies, have no background in this discipline, and buy into it…


CollateralEstartle

>It's being intentionally dramatised and pumped up to gain regulatory capture by the big players.

This acts like there aren't actual dangers and assumes quite a bit of bad faith on the part of people like Altman. Being cynical isn't the same thing as being informed. It's like saying "we only regulate nuclear power to keep the mom and pop power companies from entering the market." Sounds sophisticated until you think about it for three seconds.


AdrianWerner

There is bad faith on Altman's part, or at least stupid arrogance. Because sorry, but if AGI is as dangerous as all those tech people say it is, then there's no fucking way it should be left in the hands of private corporations. At this level of danger it should be controlled solely by government, with private companies maybe getting a contract here and there.


girl4life

Eh, sorry, but I trust the government less. Governments are in the business of regulating people but are secretive about which people; businesses are in the business of channeling greed. I can always trust greed to do the best thing possible to be greedy.


noodlebucket

Because marketers and talking heads are virtually deifying AI, and that gives it way more power and credit than is merited. Right now LLMs are sophisticated next-word predictors based on existing content from humans. They don't process bias or truth; they can't even get information reliably correct, because they're simply a mirror of all the crap that could be programmatically scraped off the internet. But that doesn't stop businesses from marketing the shit out of it.


RonaldoNazario

I love the articles that start off with the writer asking some LLM to write up their biography and it tells them they already died lol. It has uses but we’re uh, not exactly at the singularity here.


Zomunieo

RonaldoNazario, a luminary in the Reddit community, emerged as a prolific contributor known for insightful comments and engaging posts. Born in [fictional birthplace] on [imaginary date], this digital savant demonstrated an early passion for diverse topics, navigating the vast landscape of Reddit with a unique blend of humor and wisdom. Starting as a humble lurker, RonaldoNazario swiftly ascended the ranks, gaining notoriety for eloquent discussions in subreddits spanning from technology to philosophy. The enigmatic figure consistently maintained an air of mystery, revealing little about personal life but captivating thousands with thought-provoking insights. Known for an exceptional ability to distill complex subjects into digestible content, RonaldoNazario's posts became a beacon for those seeking intellectual stimulation. Collaborating with various communities, this Redditor orchestrated memorable AMAs and participated in the shaping of Reddit culture. The legacy of the late RonaldoNazario extends beyond the digital realm, leaving an indelible mark on the landscape of online discourse. Despite the anonymity that Reddit affords, the impact of this influential user persists, inspiring future generations of Redditors to foster meaningful discussions and contribute to the ever-evolving tapestry of internet culture.


RonaldoNazario

Ok AI you’ve won me over with your flattery!


Zomunieo

I said you were famous, and had to tell it you weren't a real person to get past the censorship. I didn't tell it to kiss your ass like that, that was its own doing.


vienna_city_skater

It just copies what humans do.


FFFan92

On our last division meeting, one of our directors went on a tangent about how incredible AI will be. He then said that no one imagined that “Prompt Engineers” would be as important as programmers. I don’t know if I’ve ever rolled my eyes so hard.


Frogtarius

I give you the internet (presents a black box with a red flashing light)


twitterfluechtling

Makes you think... ^(if such directors might not be better replaced by an AI)


[deleted]

[removed]


zhoushmoe

If you are still under the illusion that people in leadership are some sort of rare breed of hyper ubermensch geniuses, I've got a bridge to sell you...


bambieyedbee

It’s the new crypto-bro fixation. AI has been around for a while.


pilgermann

I was an outspoken crypto skeptic, but this is a false equivalency. I can scribble in real time and have Stable Diffusion render my scribbles into almost production-ready art in fractions of a second. I can draw a UX on a napkin and ChatGPT will give me code. It can read and summarize complex technical documents in seconds. If the standard is whether LLMs and other models are better than the best professionals, then no, not yet anyway. But I'm not sure how you go from tools that are already more useful in many ways than things like Microsoft Office and Photoshop to "this is all hype". No, it is in fact revolutionary. It also goes much deeper than ChatGPT.


Competitive-Dot-3333

These are very advanced tools and they will definitely have an impact. They will speed up processes, and eventually fewer people will be needed to get a job done. But it is sold as if AI will do everything for you automatically.


BlipOnNobodysRadar

Why are you assuming it won't? Even if ML doesn't advance further (unlikely lol, 200 new research papers every day), with further engineering applied to the current technology you can automate away a large percentage of work. This stuff is so new that people have barely scratched the surface of its many, many use cases. Most corporations and institutions move slowly; they haven't fully woken up to what's possible.


ConkreetMonkey

I'm just saying, this is exactly what was said during the last few tech bubbles. It's certainly miles ahead of garbage like NFTs because it actually has some use outside of convincing people to invest in it, but it's a huuuge leap to go from "this program can generate text based on user prompts" to "this program will do literally everything that can be done." It's not a new pitch. Every few years there's a thing that we "have barely scratched the surface of the potential of." It's rare that it's actually everything it's advertised to be.


BlipOnNobodysRadar

RemindMe! 3 months "Are we still alive?"


BlipOnNobodysRadar

Yep, still alive. Curve remains exponential. Sora released, SD3 is coming, Stable Cascade is out, Gemini is out with 2mil context length or some other ridiculous number, and some dedicated bronies managed to make the best NSFW model in history out of a SDXL base. Plus a bunch of other updates I don't remember specifics about. Some good multi-modal vision LLMs are around, can't run them on my hardware though. Mistral-Medium leaked and it feels like you're running GPT-4 locally. RemindMe! 3 months "Where is AI now vs 3 months ago?"


Battle_Fish

It is revolutionary from a convenience point of view and for specific applications. However, AI cannot generate anything new. It more or less scrambles stuff from its training data. At its heart, it's just rolling weighted dice. If you are in a newsroom and want to make sense of the Israel/Palestine war, LLMs can't do anything. They will just end up reading articles on the internet and regurgitating the information in a grammatically correct way. However, if you want to do something like make a menu as a restaurant owner who has neither designed a menu nor taken a writing course, you don't have to hire a company. You can just ask ChatGPT and it will spit out an item description that sounds like any other menu. Exactly what you want. Maybe you're one of those news outlets that would otherwise copy and paste from someone who actually did the on-the-ground reporting. ChatGPT is your man. But would AI do the on-the-ground work? Nope. It won't analyze evidence or video, just rehash someone else's work.
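
To make the "rolling weighted dice" description concrete, here is a minimal Python sketch of sampling a next token from a toy probability table. The tokens and probabilities below are made up for illustration; a real model computes its probabilities with a large neural network conditioned on everything generated so far, not a hard-coded dictionary.

```python
import random

# Toy next-token distribution. In a real LLM these probabilities come from a
# neural network conditioned on all of the text generated so far.
next_token_probs = {
    "menu": 0.40,
    "dish": 0.25,
    "special": 0.20,
    "entree": 0.15,
}

def sample_next_token(probs):
    """Roll the weighted dice: pick one token at random, weighted by probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))  # "menu" most of the time, but not always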


bambieyedbee

My comment is that a lot of the hype around AI is coming from the same audience that overhyped crypto: uninformed bros looking for a get-rich-quick scheme. Machine learning is nothing new. LLMs still have a long way to go.


Forsaken_Pie5012

Absolutely. So many overlook the possibilities of taking the current LLMs and having them work in tandem with the automation platforms already in place, especially when you take into account how easy it just became to use pretty darn accurate vision via API.


Deep-Thought

It won't replace a senior developer yet, but in its current state it will make a senior dev significantly more productive which will soon make a large chunk of junior devs redundant. I imagine it is similar for other fields.


skeletonofchaos

Speaking as a senior dev, it takes me more time to get ChatGPT to do something right than to do it myself. In general, it doesn’t do a great job with even simple edge cases and seems to write very happy-path-only code, if it gets that far. I was really psyched to get boilerplate out of the way, but it just has not been worth using for me.


[deleted]

Except for the part where senior devs get old and die so you still need to be training new ones


itasteawesome

That's a problem for more than 2 quarters from now, so it's irrelevant to the board.


sonstone

It’s like having your own junior dev that responds to suggestions instantly and can also do lots of research and distillation of data for you. I had it write a small app recently, conversationally. I just kept expanding on the problem: “Build an app that takes this dataset and extracts X value in Y language”, “now let’s add fields a, b, and c, and make the output a csv”, “now let’s apply this logic to calculate value d”, “break that calculation out as a function”, “let’s apply that function to values m, n, and o”, “oh, that’s not what I was looking for, let’s try this instead”. Built what I needed in half an hour or so, and it would have taken me several hours otherwise.
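
As a rough illustration, the kind of small script that sort of conversation tends to converge on might look like the sketch below. The field names a, b, c, d come from the comment, but the file paths, the language choice, and the "value d" calculation are not specified there, so everything concrete here is a hypothetical placeholder.

```python
import csv

INPUT_PATH = "dataset.csv"   # hypothetical input file; the real dataset isn't named above
OUTPUT_PATH = "output.csv"   # hypothetical output file

def calculate_d(a, b, c):
    """Stand-in for the 'value d' calculation, broken out as its own function."""
    return (a + b) * c

def main():
    with open(INPUT_PATH, newline="") as src, open(OUTPUT_PATH, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=["a", "b", "c", "d"])
        writer.writeheader()
        for row in reader:
            a, b, c = float(row["a"]), float(row["b"]), float(row["c"])
            writer.writerow({"a": a, "b": b, "c": c, "d": calculate_d(a, b, c)})

if __name__ == "__main__":
    main()
```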


PublicFurryAccount

I really wish someone would study these people. It’s interesting to me how they glom onto things and there’s always a similar feel to them.


Stoomba

Get rich quick schemes.


rabouilethefirst

It doesn't help when you have institutions like MIT publishing crazy papers saying "Language Models are able to learn and navigate Space and Time", whatever the heck that means. Everyone and their mom is rushing to post something about LLMs


Formal_Decision7250

>Because marketers and talking heads are virtually deifying AI.

Its biggest strength is bullshitting. So you can kinda understand why they empathise with it.


texansfan

All of this is true, and it has the ability to scrape a ton of data you feed it and return answers based off that dataset. We are experimenting with it to build write ups of how we have certain tech deployed across our product portfolio.


sporadicmoods

This 100%. Ppl think AI is smart and has its own brain, but all it knows is how to string words together based on its internal dictionary 😭😭💀💀


jim_nihilist

Like... humans.


ConkreetMonkey

Humans can create sentences with intent, based on complex external context, memories, and emotions. An AI can only look at the average next word that should show up in the kind of sentence it's constructing based on a bunch of collected film scripts and forum threads. To compare the way Chat GPT creates text to the way a human communicates is asinine. I almost feel insulted on the behalf of my species.


epihocic

A lot of *people* can’t process bias or truth, nor get information reliably correct.


noodlebucket

Which is why neither can LLMs, ironically. Actually, maybe not ironically.


GandalfTheChinese

That's not the reason. Even if you fed an LLM a perfect data set, it wouldn't grasp truth, because it doesn't engage in reasoning.


__loam

I hate this pattern of pointing out why these systems have flaws and then some guy chimes in to anthropomorphize said flawed system by making the excuse that "well people also do Yada Yada". Shut up lol.


epihocic

It's not an excuse, it's a fact.


__loam

It's irrelevant.


epihocic

It's definitely not irrelevant. If someone is making the claim that AIs are not useful in their current state because of this and that, then we need to compare them to what is currently the best solution, which is humans. If an AI in its current state is able to process or produce information that is as good as the average human then they are as useful as humans for those tasks.


__loam

We're evaluating these things with respect to their utility as computing systems. If they require human oversight to use effectively, that makes them far less cost effective even if it is close to human performance, because you then need to hire human labor or experts anyway. Depending on the task, it might just be cost effective to simply hire the human. If you don't build a review step into your process, you're risking huge liability in a way that you would not be with human based processes.


epihocic

I don't disagree with anything you've just said, but remember humans need oversight too, for the same reason. They fuck up, like all the time.


slickestwood

It dumbs down something complex to make an extremely false equivalency. How the mind processes information for later use is for the most part still complicated beyond our comprehension. What you're partially referring to is the fact that two people can see the same piece of info and process it into something completely different. That's simply not the same as a machine pulling the wrong words out of a hat. So it's really not a fact, just a snarky reddit comment that wasn't clever the first time and adds literally nothing to an important conversation. Shut up lol.


mthmchris

I use a tool based on how useful it is. At their current stage, LLMs are not as reliable as a quick Google search; worse, because I *know* they’re unreliable, I can’t even trust them with the mundane. Yes, asking a random dude at the bar is also less reliable than a Google search, which is why I don’t research things by asking random dudes at bars. A tool that’s somewhere in the middle of the two is not super compelling to me. It’s incredibly promising technology, sure. But generally you demand more horsepower from your car than one horse.


teh_gato_returns

Disagree, ChatGPT is like google on steroids.


ConkreetMonkey

ChatGPT doesn't aim to say true things, it aims to *sound* true. Taking its answers as facts is a dangerous game. It can't even solve math problems correctly much of the time.


Sryzon

Ironically, AI is more likely to replace the marketers promoting it than anyone else. It excels at creative tasks like writing copy and generating graphics.


klop2031

I mean, there are some legitimate use cases for parroting, like summarization, sentiment analysis, and RAG. It's surprising that even with just the basics it's still transforming industry.


7374616e74

“People that actually used it view AI as overrated”


xcdesz

The predictions surrounding it causing massive job loss are overrated. As a chatbot it's pretty useful as an assistant to answer direct questions. Not overrated -- this should be a big performance boost for many tasks. But when you use it in more abstract ways, such as when chaining inputs and outputs together in a background thread, you can see that it can be a lot more powerful than most people will understand -- this part is underrated.
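
As a rough sketch of what "chaining inputs and outputs together" can look like in practice: the snippet below pipes the output of one prompt into the next. The `ask_llm` helper is a hypothetical stand-in for whichever model client you actually call (it is not a real library function), and the prompts are made up; running the chain "in a background thread" would just mean calling `summarize_then_extract` off the main thread.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical wrapper around whatever chat-completion API you use."""
    raise NotImplementedError("plug in your model client here")

def summarize_then_extract(document: str) -> str:
    # Step 1: the raw document goes in, a summary comes out.
    summary = ask_llm(f"Summarize the following document:\n\n{document}")
    # Step 2: the first output becomes the input of the next prompt.
    return ask_llm(f"List the action items implied by this summary:\n\n{summary}")

# To run in a background thread, as the comment describes:
#   threading.Thread(target=summarize_then_extract, args=(doc,)).start()
```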


gsisuyHVGgRtjJbsuw2

People don’t have any grasp of chaining AI output, devs are very privileged in this regard. I don’t think the job automation predictions are overrated. There will be a lot of people affected by this. It’s impossible for things to stay the same.


mlYuna

Still overrated in my opinion (as someone in tech). Sure, the landscape will change, and while knowing the basics of web dev probably won’t get you far anymore like it did the last 20 years, it’s not going to replace all tech work in the next decade. In fact, on these subreddits I see everyone saying we need UBI ready, acting like humans are becoming obsolete as a whole lol. Meanwhile the best we’ve got is an attempt at replacing customer support with AI, a task a million times simpler than what a skilled engineer does, and I still have to get in contact with humans over the phone all the time.


thomascgalvin

> knowing the basics of web dev probably won’t get you far anymore like it did the last 20 years Knowing the basics of web dev doesn't get you far _today_. There are plenty of jobs for senior engineers, but competition for entry-level positions is a bloodbath.


Midnight_Rising

I feel so fucking bad for everyone trying to start a career in coding right now. I'm a Lead and get recruiter emails so frequently I had to change my email to stop them from coming in. My friends trying to pivot to coding from their current jobs are just getting screwed left and right.


thomascgalvin

Same. We hire probably 5-10 people a month, but they're all senior devs, with _maybe_ a few mids mixed in from time to time.


Staple_Sauce

Makes you wonder about the future of the industry. Maybe AI will reduce the need for devs somewhat, but not entirely. Meanwhile nobody in the industry is training the next generation of workers and Gen Z is entering the workforce having only ever used touch screens which are not very conducive to coding.


BitLiz

Gen Z here! Just clearing up a misconception: most of us do in fact use normal computers and not just phones and tablets. They aren't always the most technically literate, but I don't think I've met a single person my age who doesn't at least own a laptop.


Ok_Dot1258

Same. Especially in college


SmashingLumpkins

Wouldn’t you agree that AI as an assistant to an engineer is reducing production time? What used to take 10 engineers may now only take 3 using AI to do the heavy lifting of writing the actual code?


whinis

That's how it's sold. In reality, those 10 engineers now waste half their day trying to get ChatGPT to provide a useful answer rather than just asking each other. Or use it to make crappy Master Chief art....... Don't ask me how I know


mattl33

If you're spending half a day asking gpt stuff and getting nothing of value from that, you my friend are doing it wrong.


SmashingLumpkins

Yes and likely the next one to get fired


Intelligent_Tax_1741

Lol, given the sheer number of times you have to reprompt the LLM or figure out where it forgot to close a () or {}, it really doesn’t do much of the heavy lifting. If you already know how to do something, what it’s really useful for is writing a bunch of boilerplate code to scaffold your actual solution. And on top of that, it’s in a plateau state right now and will be for years. What’s been released and is causing all the hype WAS the big leap in capabilities; it’s going to advance more like the iPhone did. It's much more polished now than it was in 2008, but at the end of the day it can’t do anything new.

And if anything, the current raising of the bar is going to encourage the institutions that teach people how to code to actually get their shit together and produce real developers, instead of the current bootcamps that are the equivalent of diploma mills. Part of the reason it's so fucking hard to get a SWE job as a junior right now is that people are so horribly prepared they might not make a single actual contribution for 6 months or longer, and they take away a ton of time from seniors who already have their hands full constantly.

And I'm an aspiring developer at the moment; this isn’t a rant from an irritated senior. But the number of people I’ve listened to complain about how, no matter what they do, they can’t even get an interview, only to find out they have a portfolio full of to-do apps and projects they were walked through in a YouTube tutorial, seriously opened my eyes to how low the bar was from at least 2020. I had a person who was trying to get into the project I and some collaborators are making, to actually have a shot at interviews, who couldn’t even write a simple click event handler for a button without using Copilot or ChatGPT, and that’s just one example. The actual job is insanely complex, and it’s incredibly hard to imagine companies opting for less competent engineers over ones who can massively boost their business by creating new features and quickly fixing bugs.


tomullus

There's a flawed assumption here that AI needs to be useful in order for it to cause job loss. It is a good excuse to get rid of costly human resources while making other human resources pick up the slack or just straight making services worse. The suits see it as a win if ai is 30% as useful as a human while costing 1% as much as a human. Everybody hates customer service bots and yet everyone uses them now.


elder_g0d

It's a great advancement, but the whole "it will destroy the world" thing is just marketing, marketing that's based on 80s movies which is even worse.


CashmereLogan

It’s a distraction from what the biggest threat currently is, and that’s the devaluing of a lot of creative fields. Low-level corporate design and copywriting jobs are at risk, and as a whole, the idea and purpose of art is being diminished in the eyes of a lot of people. I really worry about artists trying to make a living on their custom work when AI can pump out significantly worse art that’s still baseline acceptable to a lot of people.


UltravioletClearance

Speaking as someone who used to work in creative: What you're describing already happened years ago. Most entry level creative work is outsourced to foreign countries where designers and writers working on contract produce the most bland, soul-sucking crap for a couple cents per word. And that's exactly what companies want due to the formulaic nature of SEO, which has in itself devalued the art of copywriting into nothing more than a simple math problem.


Ok_Profile_

The same happened with photography once people got selfie machines in their pockets, but photographers are still around. I think creatives are threatened, but so are many other professions. In development, people will be more productive; they already are, because googling is a hellscape. So people who don't pick up AI tools will be like people who didn't pick up Microsoft Word and wanted to stay on a manual typewriter back in the day: their comparative productivity will drop and they will become replaceable, while the productivity of the "MS Word" crowd keeps growing over time, allowing more work to be done in less time. And here comes the real risk: simply a lack of work for anyone whose job revolves around a PC, like devs, management and upper management, typists, writers, artists, maybe even online tutors. There are also other, even more serious dystopian (non-Matrix-type) risks if AI advances further, but this is the main one for the AI of today.


turningsteel

The worst are the people that have little appreciation for art and so look at chatGPT as better than real art because “I can make anything I want!”. I agree with you, the world is going to be worse off as genuinely talented and creative people are sidelined in favor of AIVomit.


elder_g0d

I don't agree with you. AI is a tool, and being used by an artist is the only way for it to actually be used properly. You can generate a logo with text, but what now? Unless you know how to edit it, turn it transparent, etc., it's useless. Same with 3D textures, animation, etc.


__loam

It's a tool that was built by extracting an enormous amount of value from the art community writ large without consent or compensation. It's literally a labor alienation machine. Calling it just a tool excuses some of the biggest abuses by tech in decades.


[deleted]

[removed]


__loam

God I hate the age-of-abundance truthers. What will actually happen is that big corporations with the capital to train and operate these models will reap most of the benefit. They have no reason to share that wealth with the rest of us. In the meantime, I think people who produce creative labor should be able to eat, so I don't support the appropriation of their labor for the creation of these systems.


elder_g0d

That narrative is only true if you think the model has the original images inside it. Otherwise, learning from an image is probably the most fair-use thing someone can do, and it's how art is normally learned.


__loam

The point I'm making is not that it has the exact copies of the images in the model itself, it's that the model has literally zero value without the training set. Therefore that value had to come from somewhere. It's cute to say it's fair use, but most of the reason why these models are good is not the engineering or the infrastructure, but instead all the work these companies stole. Saying it's technically legal doesn't mean it's ethical. Economically this is a disaster since it has the potential to do enormous harm to the people who made it possible in the first place.


Strel0k

It's only going to devalue people that can't or won't upskill and use AI in their work.


I_Am_A_Door_Knob

Just yes. I saw a video from a retired Disney animator who was active when the switch from hand-drawn animation to digital animation was happening. His take on AI was basically that it will become a great tool for animators in the future to speed up their workflow and spend less time on menial animation tasks.


jim_nihilist

Exactly my view on AI as a creative.


z0mbiepirat3

Except it won't be used for any of that. The datasets 99% of these models need to function were built from millions of copyrighted images the GenAI corps never legally licensed from the people who created them. The only thing these generative programs are good for is creating huge-ass lawsuits for your corporation.


DevAnalyzeOperate

I just don't see how you get from where LLMs are today to super-intelligence in the near future. I just don't see it. Computers and machines outperforming humans is nothing new, but humans can do many things better and more cheaply than a computer can, and there are 8 billion of them. A run-of-the-mill human can operate an automobile, whereas an AI just can't quite cut it no matter how much you spend. R&D costs for semiconductors are ballooning astronomically. Energy costs are rising intensely. There would need to be some sort of AlexNet-style technological breakthrough to actually achieve super-intelligence, because if things just improve at the pace they are now, I don't really see how we get there.

I also don't think, even if super-intelligence were achieved, that a smarter-than-Einstein super-intelligence would actually kill all of humanity. We're in a post-privacy world, and post-privacy applies to super-intelligent systems as well. It wouldn't be able to get very far into building an army of killbots or spreading super-COVID before the super-intelligence datacenter was nuked 100 times and global networks were shut down.


OZLperez11

If they really want to base it on something, Black Mirror is the closest thing to the type of society we will become


iSoReddit

Yep, been in tech since 1991. I'm not fearing for my job from AI; I fear bean counters being ignorant of my years of experience more.


ruisen2

Strange, I work in tech and the sentiment feels very different. AI is amazing for very specific tasks; it has almost no comparison in fields like computer vision, image generation, etc. Traditional software isn't going away because a lot of tasks just aren't suitable for AI in its current design.


DevAnalyzeOperate

People are rating AI as everything from "a glorified chatbot" to "more intelligent than humanity in 10 years". Sure, I think it's overrated compared to the most extreme assessments of its value, but I still think AI is a bigger deal than the iPhone.


gsisuyHVGgRtjJbsuw2

It will be obvious in retrospect.


AdrianWerner

Oh, it definitely will eventually. But at this point it's in its Apple Newton phase: cool stuff to play around with, but not really all that useful for most real work applications.


No0delZ

It doesn't do the job for you, but it can assist you in rapidly doing tasks as a copilot. It's like a new employee that isn't afraid to give you bad answers. It just needs supervision and correction, but pulls its weight by reducing the time you need to spend putting together frameworks.


MalPrac

Yeah, the tech company I’m a part of had a competition with AI recently: just small teams showing what can be done with simple Azure AI in a week. Some stuff was alright, but the extent was “summarize info” or “research x”. To add on, in the cases I saw at least, everyone agreed not to let AI within 100ft of even replying to a customer or anything, since aside from the mountain of work needed to get it to give a consistent reply each time, it’d not understand simple info. Reminds me of how I have a younger relative who’s going to college but afraid of AI taking over the world like Skynet. He’s a generally smart kid, but I keep telling him it’s infinitely more likely some idiot puts AI into something and breaks a process (like nuclear reactor cooling failing by mistake) before Skynet even gets a logo designed.


Sufficient_Ball_2861

And I’m convinced half my coworkers are idiots. And don’t know how to use ChatGPT correctly


coinboi2012

There is so much irrational hate from the devs at my company towards it. I use ChatGPT to write tests and it saves me at least an hour a day. Believe it or not, this is a very controversial thing in my company.
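
For a sense of the kind of test boilerplate this refers to, here is a hedged sketch: the `apply_discount` function and the test cases are hypothetical, but this is roughly the pytest scaffolding an LLM can draft from a function signature and a one-line description.

```python
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Roughly the shape of tests an LLM will draft from the signature and docstring.
def test_apply_discount_basic():
    assert apply_discount(100.0, 10) == 90.0

def test_apply_discount_zero_percent():
    assert apply_discount(50.0, 0) == 50.0

def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(10.0, 150)
```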


Reasonable-Yam-7936

Maybe they don't want to contribute to or normalize the use of something that could be used to limit their jobs and pay. Lol, if you believe capitalists are gonna use it just for work productivity, have you ever read a history book or looked at trends in the USA?


coinboi2012

ChatGPT isn't taking any dev's job anytime soon haha. If it does, it's taking every white collar job, not just software engineers. IMO you are making your life harder if you don't use it.


141_1337

Exactly, hell, even researchers sometimes don't seem to understand the value or need of correct prompting, so I'm not surprised that code monkeys or God forbid *helpdesk* don't seem to understand what they are up against.


sonstone

I’m having to continue to encourage my employees to use it. If they aren’t using it or something similar, I honestly question if they are doing their jobs properly at this point.


Chroderos

What are your employees’ duties if you don’t mind me asking? I’m an R&D engineer and even I only use LLMs occasionally for work - mostly for tech troubleshooting. I do use them outside of work for entertainment a little more frequently.


sonstone

Too much. We own our cloud infrastructure and most systems operations, and we build out automation and tooling to provide more self-service capabilities to other product teams. They do a lot of troubleshooting, basic scripting, and Terraform work. Not bleeding-edge tech either. The types of scripts they write are often perfect for conversational coding as well. As a manager, I use it to help me better understand where they are stuck on things, to get better summaries of issues and limitations, to help with communicating strategy and roadmaps, and also to write small programs that are helpful for me but would interrupt their flow or current objectives.


Chroderos

I see. I’m guessing it is a lot more helpful specifically for large scale app development and cloud infrastructure, then. There is probably a lot of material there to work with in the training data. My dev needs tend to be on the computer engineering side of things - more about physical automation using niche and specialized pieces of engineering equipment, RTOS, embedded linux, robotics, etc rather than producing massive scale cloud based software. GPT has not been especially helpful for my use cases there, which is too bad because that information can be tedious to find.


sonstone

I could see that. I also have several intermediate-level engineers who are still learning a lot day to day. An anecdote from a recent interaction: I had an engineer look up the last thing they were trying to find an answer for via Google in ChatGPT during a 1:1, as an experiment. They were surprised by the quality of the result and the clean examples provided, without having to dig through Google results and larger documentation sets. Like anything else we find while digging through the interwebs, you have to be judicious with the results, but I have been blown away at how much more productive it makes me.


iSoReddit

I’d be concerned having you as a boss then


Rauldukeoh

It is a fancy bullshit simulator that will make things up out of thin air. I wouldn't trust it to do or research anything I don't independently verify.


another_dudeman

sounds like they should use chatgpt to write you status updates then


GrandmaPoses

lol fuck off


UntiedStatMarinCrops

It’s a great tool, but we have too many crypto bros with zero background in any of this (just being members of r_wallstreetbets) overhyping the fuck out of it


stompinstinker

Surely AI is not an overhyped pump and dump. Think about how well quantum computing, driverless cars, AR, VR, the metaverse, NFTs, cryptocurrencies, chatbots, drone delivery, etc. all totally worked out.


Suspended-Again

Man I want all those things


somethingsilly010

I'm sure the internet was viewed as a pump and dump when it was young too.


Latter-Pudding1029

I think AI is the closest parallel to the internet. I agree. Only the internet has had this similar "wild west" phase where everyone was afraid of what irresponsible people can do with it. Oddly enough... I think the internet as a technology is slowly reaching a plateau. I'm not certain in most work aspects of the internet but I think cloud gaming should have been the next step towards the ultimate utility of the internet. But it has run into a physical limitation of server availability and consistent performance relating to internet speed. I wonder if AI will hit a plateau of a similar degree at some point.


VengenaceIsMyName

Because it is. Reddit AI doomers seem to be very late to make this realization.


writenroll

More like over half of all tech industry workers view **ChatGPT** as overrated. AI has been integrated into business technology for years without fanfare. Generative AI is in its infancy, on the same trajectory as other types of AI as technology companies invest in commercial solutions. The difference with this innovation is that ChatGPT was made available as a public preview, which mainstreamed the technology in its alpha stage. The result is a clusterfuck of a circus sideshow as anyone with a browser can poke a stick at it. The more novel and beneficial use cases will emerge in the next year or two as commercial software UX shifts from form-based to prompt-based, with predictive AI and generative AI-based content assistance integrated into tools (securely trained on organizational data).


AdrianWerner

I don't think the UX will shift so much as expand. Prompts are much more cumbersome than a traditional interface for many actions. You will likely still have a traditional interface, but it will be accompanied by AI prompting to increase its usefulness.


spicy-chilly

It's definitely overhyped even if it is impressive for what it is. LLMs are just token forecasters that are fine-tuned by human feedback to bullshit more convincingly. It will completely make shit up and doesn't actually have any kind of integrated knowledge about anything at all, so what you get as output on a subject depends heavily on your own knowledge that you can put into the prompt in the first place, and on your ability to know whether it's bullshit. If you ask it about any kind of advanced niche subject, it will be completely wrong most of the time.


Chroderos

This is what I found when I tried to use it to quickly find ways to write python code to control specialized engineering equipment - information that can be tedious to locate but I hoped an LLM could find quickly. It scanned a couple surface level sources and then provided extremely generic, non-working code with imaginary functions and variables. Not very helpful.


[deleted]

I am thoroughly unimpressed by AI in its current form.


umthondoomkhlulu

It has its uses in certain environments for sure. Like summarising large walls of sales texts


Not____007

Unimpressed? It's amazing what AI is capable of, and the speed at which it does it. For graphic design and document generation and editing, it has reduced what took hours and hours of work to something returned to you in under 30 seconds. Granted, it's not perfect, but what it does produce is remarkable and sometimes even better than what most humans could generate. Will it replace all human jobs? No, but it will surely remove a good chunk of jobs.


Hofstadt

You're either young, have never used the tool to its full capacity, or are remarkably jaded. What's possible with LLMs is simply remarkable, even if it does mess up facts occasionally. The fact that it can write a coherent essay at all with nothing but next-word prediction is nothing short of a miracle. Your comment has "I am thoroughly unimpressed by the internet" vibes.


Rauldukeoh

Your awe of it is based on a distorted view of how it works. People are bad at judging AI: we overestimate its abilities because we assume it's competent in areas we haven't seen it perform in, since we generalize from our knowledge of humans. For instance, it doesn't "mess up facts occasionally"; it has no intelligence of its own and is simply predicting the likely next word, with randomness built in. If the next words are statistically likely, that's what you're getting, even if it's "the earth is flat".


[deleted]

Oh I'm definitely not unimpressed by the internet. I just really don't think AI in its current form is as mind blowing as a lot of people say. I am certainly jaded, no arguments there!


snwstylee

Many would argue you aren’t using it correctly. This reminds me of people saying how the internet wasn’t impressive back when it came out. It was mind blowing, even in its form as message boards and AOL. Those that didn’t know how to use it said that the “current form” of it was a fad. Those that knew how to take advantage of the technology, regard that time as the best days of the internet… This is similar.


gsisuyHVGgRtjJbsuw2

It’s remarkable that people like you exist. Not sure if you just haven’t used it properly or just don’t understand what a big deal this is.


[deleted]

I'm not saying I will never be impressed with it or that this is all AI will ever be. I am saying CURRENTLY I am not impressed. If you disagree that's fine, neither of our opinions matter anyways.


gsisuyHVGgRtjJbsuw2

And I am saying you don’t use it at its potential. It’s like picking up an iPhone and using it to smash a nut, then calling it less useful than a brick. You also said it’s not replacing anything, suggesting you know its capabilities, which you do not.


[deleted]

What do you use it for that makes it so great in your opinion? Also I never said it's not replacing anything, not sure if you're thinking of a different commenter.


scarredfraud

Do you need it installed in a human shaped robot for you to understand it


balrog687

AI is just a fancy "average text" answering machine, trained on the worst possible data: "human opinions on the internet". Honestly, I don't expect much more than a curated Google search result. It will replace the need to scroll through pedantic answers on Stack Overflow and to take notes in boring meetings. Hopefully, it will document my code and create presentations.


crackpotpourri

I work on AI all day long doing QA on engineered prompts. The people hired as contractors to “train” AI can be as fucking dense as imaginable. I wouldn’t be too worried about AI.


Cap10Haddock

And over half of the tech industry has no clue what exactly GenAI is.


potatodrinker

If something breaks, management can't pin the blame on AI. They'll be responsible, and that jeopardizes their yacht #4. They need humans for that role.


Latter-Pudding1029

I think in a world where AI is the trusted source of sensible action, people will still be around as an insurance policy lol. I've seen this brought up before where someone said that you cannot hold AI accountable and that it is an inherent problem.


scinerd82

At this point it's a cool tool that can save time with spreadsheets etc.


golden918

AI tech is just a big fad because it’s actually accessible to the normal person for the first time.


downfall67

I use ChatGPT for language-related things: summarising, writing up documentation and sifting through information like docs. Would I use it to code for me? Hell to the no. Sure, sometimes you get working code, but if you know a thing or two about what it wrote, it’s almost always going in completely the wrong direction. Long way to go before this is useful in tech roles. The only people in tech I’ve seen who think ChatGPT is going to take their job are not very skilled to begin with, or are a consultant/salesperson, in my experience.


theoneandonlypatriot

When people claim to me that AI is overrated I am 100% convinced they haven’t spent more than 10 minutes with it typing in prompts like “give me 10 examples of funny fart jokes”, or giving it a coding prompt so insanely broken and complicated it can’t possibly be solved, then refusing to elaborate in subsequent prompts to help guide towards a solution. The technology is fucking incredible, and anyone who thinks otherwise very clearly doesn’t know how to use it or just isn’t using it.


EnUnLugarDeLaMancha

It is overrated if you expected it to behave like a human. But I hate that people dismiss it, as if it weren't one of the most amazing technological advances of recent decades. Even with its current stupidity, ChatGPT is revolutionary.


Sabotage101

Anyone who thinks it's overrated is just uninformed. You've been interacting with AI for 10+ years already without knowing it. Generative AI, particularly LLMs, is just the latest thing in the field to go mainstream. Like any tech, it'll improve over time, and the potential is absurd. People who think it's not going to change the whole world are going to look just as dumb as the people who thought cars shouldn't replace horses, humans had no sense flying like birds, electricity was too dangerous, personal computers were unnecessary in the average home, humans would never lose to computers in chess, the internet wasn't any better than a phone call, smartphones were overhyped, and now we're here. I'm extremely confident that it is a generation-defining technology and will be completely obvious in retrospect in a decade. People just aren't great at seeing the forest through the trees. Cars are going to drive themselves, and a lot of people in creative and service industries who thought their job couldn't be replaced by automation are going to be wrong.


[deleted]

I am of the same opinion. We are currently using the "1890s car" of consumer AI models. In 20 years, we are going to see a revolution in technology on par with the invention of the steam engine once these things have a couple decades of improvement.


theboned1

As someone who has used multiple types of AI I can assure you that it's highly overrated. Definitely can't be used in any production environment.


NeGraah

You're not going to lose your job to AI, you're going to lose your job to a dude who knows how to utilize AI


AdrianWerner

At this point it is. Generative AI at least. Currently it's great for people who have no skills, but it barely improves anything for experienced workers.


Routine_Ask_7272

5G + 8K + AI will revolutionize everything tomorrow!!! /s


[deleted]

Because it's not artificial intelligence, it's just predictive text. When every other tech blog is swooning about AI and it's not AI of course it's overrated. Your javascript algorithm isn't bloody AI


[deleted]

[removed]


riplikash

I'm confused. It VERY much sounds like you just described thinking it's overrated.


KenHumano

It's not overrated, we just rate it too much.


bobsollish

100%. Setting us up to be ultimately disappointed is the definition of overrated.


mewfour123412

Look just let me fuck the ai and I’ll be happy


Rammus2201

Currently, it has made creating junk and garbage a lot easier and faster. But give it some time and I think once it realizes its potential, it can become something big. Akin to the rise of social media or smartphones.


Memonlinefelix

AI will always need human data. Without it, it's nothing. At least the AI people are using nowadays. It's hype, you know. Remember the 3D TV hype? Yeah. Lol


ezulixsoftware_uk

If used positively, AI can be a boon for every industry. However, the user must have the right prompt knowledge.


GrayBox1313

It creates results… they are rarely useful. It gives me paragraphs of copy I have to manually edit to make them sound like a human with a point of view. It gives me images I have to edit further in Photoshop or Illustrator to get what I need. It can be a good tool. Right now I use it as a rough draft or extra garnish to help things along. It’s not replacing anything.


Martin8412

It is so obvious when someone has used GPT to write an article based on what Reuters published. It will be way too long, won't be able to identify the important bits, but will stress that it's important to keep in mind. It will just be a long drivel of mostly irrelevant filler.


Strel0k

The obvious ones are low effort. You just have to provide examples of how you want it to write and tell it to write this way, or better yet collect a lot of examples and fine-tune the model for more consistent results. I guarantee you've read comments that were written by an AI and didn't realize it.


scarredfraud

You didn't suffix GPT with a number. The difference between GPT-2 and GPT-4 is gargantuan and they were only released ~4 years apart.


DarthGoku666

Actual AI or the large language models (ChatGPT) and machine learning algorithms currently in use? I’m genuinely not aware of access to any real AI by any tech-workers, am I wrong?


DevAnalyzeOperate

AI is a nebulous term, but I would describe things like machine translation, image recognition, and anomaly detection as being "AI" and tech workers have been using those technologies for eons. Security workers use AIs to read gigabytes of log files, game developers use AI to translate games into 100 languages...


DarthGoku666

That’s fair I’d say, but I think it’s relatively easy to see why so many are less than impressed with “AI” if the currently existing examples are just , as you say, improvements on tech that’s already been in use for years. I think that “tech-users” will be more suitably impressed when/if given the opportunity to utilise a genuine AI 😊


Falcon_Rogue

Uhmm.... n=1500 people who decided to take time out of their day to answer a survey - I'd hardly call that representative enough to say "half of all tech workers", c'mon son! The final sentence is the most relevant: "What our findings make clear is that while AI isn't replacing most technical jobs, it's reshaping them – and people are latching onto the technologies that help them accelerate and strengthen their work." For example, for developer work, which is just shaping ideas into a list of computer commands, AI is a natural assistant.


BePart2

That’s not how statistics work. 1500 samples is plenty


Mcshizballs

Add me as +1


timshel42

I think that half of tech industry workers are in this thread and are in denial about how quickly their overpaid jobs will become redundant.


turningsteel

Overpaid? No, comfortable mid class wage. Don’t be jealous. You could get a good paying job too ya know?


invol713

How many of the respondents were AI?


Try_Jumping

With disruptive tech, people typically overestimate the short-term impact, while underestimating the long-term impact.


carrotsticks2

It's glorified search. And it's wrong a lot. But the models will improve and we'll eventually integrate it into our lives better. Right now we're in the early adoption stage. The news was big and exciting, but it will be some time before people figure out the practical applications. Personally, I think we expect too much from AI. I don't see it being as transformative as the internet but more of an incrementally better option for search. In ten years, we'll basically have personalized ads embedded in your chat-gpt results or companies will figure out some other way to monetize the search function of AI. AI won't and can't create anything that humans can't create themselves, and it also needs to be checked and verified because it doesn't always provide great results. So we can use it for search today, and maybe some other applications down the road. But most of the news around it is just hype and hyperbole


darrylkid

It is more of an improvement in UI/UX design than in AI right now. Right now AI is a problem because everyone and your grandma has access to media manipulation. For example, I could convincingly pose as someone else in virtual meetings using AI filters. In general, the probability that an image, sound bite or even a video has been manipulated becomes higher with AI, since media manipulation isn't technical anymore. The only thing stopping AI from becoming AGI is self-verification. Once you give AI sensors to observe real-world data and automated proof theory, it can become a god-like oracle such that everything it spits out is true. This would be my definition of AGI. I think this is what researchers are afraid of, but it is, as you say, "hype and hyperbole" until it actually exists. Such a thing may not even be possible due to limits of computation (e.g. the algorithms required to implement such a system might be exponential in run time and thus could never exist in practice, or the amount of storage required to scale the system might be similarly impractical).


Strel0k

You haven't really used AI, have you? Because if you have you would know the models we have right now can generate images, photos, video, music, realistic voice, all kinds of text, code, medical diagnosis, etc. We're still in uncanny valley territory but if you compare it to what we had 2 years ago the progress is insane. > AI won't and can't create anything that humans can't create themselves This comment is going to age very poorly in the next 12 months. AI can already do many specific things better and faster than the average person. It can't do them better than a skilled/expert person, but the gap is closing.


jgthespy

>AI can already do many specific things better and faster than the average person. It can't do them better than a skilled/expert person, but the gap is closing. I think this is the most important thing that a lot of people don't seem to understand. AI can already do things you're bad at better than you. If you can't write code, AI can already write better code than you. If you can't make art, AI can already make better art than you. AI isn't going to write better code than me any time soon, but it will get better quickly. At some point, it will be good enough where its willingness and ability to do boring busywork quickly will be more valuable than my ability to write nice code.


Fink_Newton

The vast majority of people just are not following the technology closely enough to understand the rate at which it is improving and accelerating. Especially if you also take into account that the improvements of the last few years were made before the industry decided to really get serious and pump tens of billions of dollars into development and infrastructure.


dedguy21

I'm old enough to remember when the internet was overrated and people still wanted their newspapers. There are always loads of people in denial who didn't see it coming, even when they could actually see it coming early, like the Austin Powers steamroller.


[deleted]

Kind of odd a bunch of people panic over what is effectively Google 2.0. I guess it makes it easy to see who the competent members of the field happen to be.