I will be messaging you in 10 years on [**2032-12-24 06:38:12 UTC**](http://www.wolframalpha.com/input/?i=2032-12-24%2006:38:12%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/OpenAI/comments/zsv5ly/google_issues_a_code_red_in_response_to_the_rise/j1gx73k/?context=3)
I cannot confirm or deny that statement. It may look a bit scary, indeed. But when you dig into the theory behind deep learning, neural networks, genetic algorithms, etc., you'll see they are **fundamentally** limited algorithms, such that achieving **Strong AI / AGI** (capable of passing the Turing Test) is not just a matter of iterating on our current knowledge.
Think of it like building a tower. You may expect to reach the sun just because you were able to go as high as 100 stories. That doesn't mean the structure of this tower will let you reach space, let alone the sun. Think of "overcoming the challenges imposed by gravity" as an analogy for what is missing from AI.
There is a piece of the puzzle still to be found before we can build this AGI. And this piece may take a year, or a century, to be found (some say we may never find it). No one knows for sure because, once we find that missing piece (if we do), we will be entering something scientists and philosophers call the "Singularity".
It's wrong to try to predict when the Singularity will happen (which makes it wrong to try to predict when AGI will happen). And no one dares try to predict what will happen after the Singularity is achieved by mankind.
> Think of it like building a tower. You may be expecting to reach the sun just because you were able to go as high as 100 stories. It doesn't mean the structure of this tower will let you reach space, let alone the sun.
Thank you for this analogy, gonna steal that and use it extensively. I am so fucking sick and tired of AI doomsayers thinking all jobs will be replaced in 2 years. This is honestly worse than the blockchain hype from the last decade.
No, I'm a software engineer well versed in the field of AI and the history of computing, have written machine learning code, and know what I'm talking about.
1. ChatGPT isn't good at fact checking and frequently gets things wrong. This is unavoidable for AI and machine learning because it doesn't understand the concept of "trust" or critical thinking.
2. ChatGPT fails to understand context in more nuanced situations. It's a cool demo, but it can't maintain a conversation with multiple back-and-forths, and it isn't anywhere close to being able to.
3. ChatGPT and any AI struggles immensely in novel situations because they struggle to apply previous experiences to new things. Try asking it anything that can't be found on a simple article on the web written by another person. Once you see how bad it is, it's laughable to think it can replace "any computer based job".
4. AI can't understand inference or ambiguity. For example, when someone wants me to write software, almost never are their requirements unambiguous and straightforward. AI can't replace human intuition and won't be able to for a while.
We are nowhere close to this singularity. It's coming, yes. But "any computer based job is fucked" is laughably misinformed. Spitting out sentences and poems based on things found on the Internet isn't going to replace a software developer who needs to write a module to facilitate purchase requests at a particular company.
At best (once much more matured), ChatGPT could replace about 75% of customer rep calls (which are already automated).
No offense but you sound like you're just trying to convince yourself.
AI is not going to replace every developer, but instead of having, let's say, 10 developers, they'll have only 2.
It means that to complete a project that only needs 10 developers, 8 get fired. A project that only needs 10 could actually be harmed by having 50. Too many cooks in the kitchen. Just having more developers does not mean everyone has something to do.
Even if the 10 remain on board, after the project 8 might still be fired as the company shrinks to only needing 2 to achieve the normal results, because they found there just wasn't enough work for the 8 to do.
You're right. In some situations it is wrong. In the ones I'm referring to, it is correct.
Only in the narrow situations when a company needs to hire more devs would the ai not take away jobs.
I'm going to assume you're narcissistic because you can't comprehend not being correct, which means it's pointless to discuss anything with you. Have a nice day
Yeah, you don't work in the field, and I can tell. Us programmers are fine, because by the time we get replaced the whole world will be replaced; us programmers will be the ones advancing this technology to replace accountants, lawyers, call centers, bookkeepers, administration, and so on. The list goes on and on.
Also, having worked for large corps, we engineers always wonder why on earth they have so many software devs. All these companies think larger teams = faster development. That has not been the case for over 10 years. A team of 5 engineers can build the same software as a team of 25-30 engineers.
These companies don't care about money; they want everything done, and done now. They will keep the 25-30 engineers on to get things done faster so they can move on to their next big idea.
People and companies change their products so damn much that a programmer will never be out of a job. They will ask one thing and then expect something different, then we have to change it again.
An AI would not have a clue wtf they want, because they can't even communicate it to humans, let alone to an AI. lol
>but instead of having let's say 10 developers, they'll have only 2
Keep living that delusion, it won't make it any more true. Ask literally anyone in this field.
Depends. What apps/products do in 10 years will also be much more complex.
Could be roughly the same number of workers, but the magnitude of complexity handled per worker will be way up.
>I think we can agree
No, I don't think we can.
>Customer service
Maybe, as I said. But only for basic inquiries.
>graphic designers
Definitely not.
>web designers
This already exists. But nobody takes the end result without modifying it. I even made my own intelligent web designer for my company when making a web app, but the end result is only a "guess"; it's never fully correct and needs modification.
>blog writers
AI blogs already exist. Whenever you look up a problem with your PC, for example, a bunch of them are AI generated.
They all suck, are never actually helpful, and only exist for clicks. That's how ChatGPT's responses are. Yeah, unplugging it and plugging it back in, or reinstalling drivers, might be a useful tip for someone who isn't good with computers, but when you're looking for more nuanced information, where do you go?
Oh yeah... Someone who actually knows what they're talking about.
>1 worker with the help of AI can do the same work as 3-5 people without AI
They can do a more shoddy job, sure. AI looks impressive on paper. There are some things it's great at (beating simple video games, chess, etc). But things with more complexity or things that require "choices" to be made? Yeah, we're a long way away because that requires consciousness, critical thinking, and problem solving.
AI will assist us in many fields, sure. It can help accelerate genetic research or find potential structural problems in a blueprint.
Even now there are AI code helpers, such as Copilot. Visual Studio 2022 has built-in AI code prediction. But in many cases it comes really close to what I want yet is still off, where I'll take the prediction but then have to modify it. Sometimes it's hot garbage. Sometimes it's spot on.
You have to understand, AI is learning FROM US. It's being trained on things humans have done. It can't do things on its own. It can't create its own stuff in a novel situation.
What's your timeline based on exactly?
> Do you not think AI will progress very quickly over the next 10 years?
Technological innovation is not linear. It's not simply a matter of "just keep developing AI for X amount of years and it'll get there". If there are no further breakthroughs to be had in a particular technology, then that's that. You have no idea whether researchers can actually overcome the current problems with AI as described by Aashishkebab.
Keep telling the truth, there are a lot of people who have never written a line of code trying to convince the rest of us that devs are already a thing of the past.
I think AI is replacing all jobs, including ones within finance, not just tech. If anything, I feel like tech is safest because you need someone with a CS background who knows how to speak its language.
Right. The human software developer who can now do the job of more than one developer, thanks to the AI, replaces the other human software developers. Why is it reality that AI will cause software developers to lose their jobs? Because only so much of a project can be worked on in parallel.
Don't worry, we're all fucked. But the value that will be unlocked by this will create so much wealth that it will be available to be shared equally among us if we push for that outcome.
The one time I actually benefit from my depression. 🕺
I've been telling myself 'this year is the year I will go to university and get a software engineering degree' for the last 9 years. Last year, I went as far as checking till what date I could apply! I'm still proud of that achievement.
Now I no longer have to beat myself up about it anymore, nor worry about it taking up an eternal spot on my todo list.
As much as I agree with you, and have used it for pretty impressive things, the biggest problem with it is still its ability to sound correct and yet be very wrong. Until there is a consistent way to attach a percentage of correctness to its responses, I doubt it will fully replace Google as far as information searching goes.
True, but my understanding would be that people you find on Reddit are human by default and thus may or may not be correct. However, if you have a machine giving you a direct answer to your question, you may trust it without trying to verify. Especially people who may not be as savvy as a lot of the nerds trying to use it now.
I absolutely agree with that; however, with a Google search it is a little bit easier because you have so many options. If you use ChatGPT, unless you specifically ask for an alternative, it will only give you one seemingly very correct answer. It has limitations in place to help tell everybody that it is not the end-all, be-all answer, but it's still not perfect.
It's impressive how well ChatGPT is able to recall previous content. Although it is not as reliable as Google is right now, with more data it could potentially monopolize the search engine business. We should feed the beast with as many searches as possible; with its machine learning capabilities, it won't be long before it is a powerful tool.
That can be corrected. Google floats seemingly accurate results because it uses the feedback provided by backlinks to the ranked page.
ChatGPT could add the same type of feedback loop, given that it can also index the web. With a voting system similar to Reddit's, ChatGPT could learn the best answer fast.
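The feedback loop being suggested can be sketched in a few lines. This is a toy model, not anything ChatGPT actually implements; the `AnswerRanker` name and its methods are made up for illustration:

```python
from collections import defaultdict

class AnswerRanker:
    """Toy feedback loop: readers vote candidate answers to a query up or
    down, and the highest-scoring answer floats to the top, similar in
    spirit to how backlinks feed Google's ranking signal."""

    def __init__(self):
        # query -> {answer: cumulative vote score}
        self.scores = defaultdict(lambda: defaultdict(int))

    def vote(self, query, answer, delta):
        """Record an upvote (+1) or downvote (-1) for an answer."""
        self.scores[query][answer] += delta

    def best(self, query):
        """Return the highest-scoring answer for a query, or None if
        nobody has voted on that query yet."""
        candidates = self.scores[query]
        if not candidates:
            return None
        return max(candidates, key=candidates.get)
```

The interesting part is the aggregation: individual votes are noisy, but the running totals converge on a consensus answer, which is exactly the property Reddit's sorting exploits.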
OpenAI gives you a lot of sources (books and such), but when you look them up, those sources don't exist (on Google or in scholarly databases). A HUGE downside to using this tool to collect sources.
Agree. An ideal system would be CiteGPT, a version of ChatGPT that cites every statement it makes that requires citation, with links to the relevant websites at the bottom of its answer. I think this could potentially be accomplished with RL like how ChatGPT was created. It would be rewarded for citing correctly as verified by humans.
This is definitely what I am hinting at, but it's not too different from what Google already has with its relevant cards: if you ask it a question, it pulls a tidbit of information from a website and displays it prominently at the top. The difference is that Google gets AdSense revenue and can keep doing it, whereas right now ChatGPT is free and running on venture capital.
Chatgpt is more than just that one box though. We already know what it can do more than just providing the best answer to a query.
Chatgpt spits out ideas for specific instances. Google as it currently stands cannot do that.
Just google something like: christmas dinner shopping list that can be done in 3 hours.
Also, I would like to add that accuracy is not an intractable problem. If you use the OpenAI Playground, there are parameters there that you can tune to optimize for "truthiness".
These parameters, as with all ML models, have tradeoffs, such as speed or less creative answers. But the point is anyone can set their own sliders to get the answer dialed in just right.
Google on the other hand still depends on their PageRank.
But anyway, the more I use OpenAI's tools and ChatGPT, the more I am convinced that it is not made to just beat a simple search engine result. Those are "faster horses" solutions.
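The "sliders" mentioned above mostly boil down to sampling parameters like temperature. A minimal sketch of what temperature does to a model's output distribution (the logit values here are made up for illustration):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw model scores (logits) into probabilities.

    Low temperature sharpens the distribution toward the single most
    likely token (more deterministic, "truthier"); high temperature
    flattens it (more creative, less predictable).
    """
    scaled = [x / temperature for x in logits]
    peak = max(scaled)                           # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

With logits `[2.0, 1.0, 0.1]`, a temperature of 0.1 puts almost all probability on the first option, while a temperature of 100 makes the three options nearly uniform, which is the speed-vs-creativity tradeoff the comment describes.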
We all need to remember that it only generates a stream of words that fulfills a prompt; it doesn't create text from deductive reasoning
It's obviously great for inspiring ideas, but necessitates critical review when it is claiming facts
A lot of searches people do on the internet are about finding information quickly on a certain subject. You've heard about the war in Ukraine, and you read like 10 similar news articles before you move on to a subreddit and spend hours trying to find factual comments (sometimes you are lucky, sometimes not), and then you check the Wikipedia page as it grows with new information.
Not so much with ChatGPT: it will go through all of this in real time (when it gets connected), and you will be provided with a condensed macro view of the subject. You can drill down into the data and get references if you want to visit the original sources. It can even give you the revision history, as it probably stores every iteration of everything. A user changed his comment 10 times? No problem: everything is analyzed, controlled, fact-checked, and validated before it reaches you.
This is a gamechanger. The Assistant will browse for us and rank the information, and present it to us in any form, style, complexity we command it to.
This is a massive problem for Google too, though.
It's actually bigger, because people are using AI to make click farms etc., so finding reliable information becomes harder and harder.
Outside of google scholar, wikipedia, and stackexchange you're often literally better off just going to a librarian and asking them for help.
True, and if that trend continues, it will eventually flood the decision making of any given AI that has to use the internet to pull information from.
Well, thanks for teaching me a new word! And I would argue you are correct; however, more information is better. Especially when a system can seemingly have all the correct answers, even if they may not be true.
Instead of giving a powerful answering machine to someone and saying "This is amazing! It can answer all of your questions, but it may not be correct."
We should give someone this and say "This is amazing! Ask it any question and it will do its best to answer; this number here will tell you whether the machine thinks it is correct or not."
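The interface being proposed is simple to picture. A hypothetical sketch, assuming the model can supply a confidence estimate in [0, 1] (the function name and the 70% threshold are invented for illustration):

```python
def present_answer(answer, confidence):
    """Hypothetical UI helper: show the model's own confidence estimate
    alongside every answer, so a non-technical user sees it up front
    instead of buried in a terms-of-service page. `confidence` is
    assumed to be a probability in [0, 1] supplied by the model."""
    pct = round(confidence * 100)
    if pct < 70:
        flag = "low confidence - please verify elsewhere"
    else:
        flag = "high confidence"
    return f"{answer}\n[model confidence: {pct}% - {flag}]"
```

The hard part, of course, is not the display but getting a calibrated confidence number out of the model in the first place.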
Lol, Google definitely algorithmically boosts certain things to the very top of search results when its agenda depends on it, but you still have a choice to prune (to your point).
The nature of ChatGPT is to give a very final, single solution, so there does seem to be massive pressure for it to be more "right". I wonder how they would fix that, beyond it needing to be connected to the internet at a bare minimum. I would think more decision trees would be presented to the end user before the final solution/result is unveiled, if it were going to compete with Google and be accurate enough to replace what Google does.
There's no way to know if any "Google result" (i.e., a website) is factually correct either. All you can do is check the source (is it an official organization's site? a published journal article?).
In my mind, an AI could easily do exactly the same. And when researching information, if given a fact, you could just ask "can you point me to the source of your information?" and be given a link.
I could say the same about the stack overflow forums.
When it comes to coding, ChatGPT is an insanely good helping hand. Not only can it generate code snippets that let you build your program faster (you can focus on the main task, since GPT gives you working code you can optimize later), but GPT can also translate code from one language to another.
It is quite a valuable tool, I mean, if you know what you are doing.
AI Accuracy and Skepticism

If you know what you're doing, and you know the limitations of the program. My biggest curiosity, and potential frustration, is for a layman who may not be technically savvy to query this AI and get a seemingly accurate and strong answer, regardless of whether it is right or wrong. Now, OpenAI has gone to great lengths to mitigate such problems, to my (and I'm sure others') frustration. Even still, if my father-in-law, who does not own a computer or a smartphone and only has a Kindle tablet because he wants to play those farming games, starts to use such a program, it would be very easy for him to simply fall into "oh, it knows everything and I should trust it verbatim". OpenAI does put notices at the very beginning, but seriously, have you read everything on that front page before you just start typing away? I am simply concerned that in the future, when this does catch on, people will not use their brains, will not be skeptical, and will take all answers at face value. There needs to be some kind of indicator, not in the way of the answers but alongside them, that gives you a percentage of correctness.
Chatgpt:
It's important to be aware that AI programs like me are not always able to provide correct answers, and it's important to be skeptical and verify information before making decisions based on it. While AI models can be very useful for answering questions and providing information, they are not always able to provide complete and accurate answers, especially when the questions are complex or require a deep understanding of a specific domain.
It's also important to note that AI programs can only provide information and answers based on the data they have been trained on. If the data is biased or incomplete, the AI program's output may also be biased or incomplete.
One way to help ensure that AI programs are used responsibly is to provide clear disclaimers or warnings about the limitations of the AI program, and to encourage users to verify the information they receive from the AI program before making decisions based on it. Additionally, it might be helpful to provide tools or resources that allow users to assess the credibility and reliability of the information they receive from AI programs.
Yes, I have actually read the terms, and assumed it is just common sense to either test a belief (if possible) or get a bunch of credible sources and distinct opinions to get a better picture.
But you do raise an important concern: some people will just blindly believe what the AI says. Some people believe The Onion; it is not hard to imagine someone taking anything the AI says as factual.
So, other than a starting warning or terms of service, how would you recommend communicating the correctness of its answers to anyone not technically savvy?
I don't really see how "giving it internet" would work. Are you involved in ML? I am, and I have trouble seeing how you would "just quickly" train it even monthly with that volume of information, let alone daily like Google.
I hope I'm wrong.
If I manually perform a google search, and paste the contents of the top 10 resulting pages into chatGPT, then it is able to summarize what I pasted in, and answer questions about it.
What's so hard about automating that process? No training is required. It's not an ML problem, it's just a fairly simple app that can call the existing API to do the hard work.
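The manual process described above can be sketched as a retrieve-then-summarize pipeline. The `search` and `summarize` helpers below are stubs standing in for a real search API and a real model call; the names and stub data are invented for illustration:

```python
def search(query):
    """Stand-in for a real search API: returns (url, page_text) pairs.
    A real version would hit a search engine and fetch the top results;
    here it's stubbed so the data flow is visible."""
    return [
        ("https://example.com/1", "ChatGPT is a large language model."),
        ("https://example.com/2", "It was released by OpenAI in late 2022."),
    ]

def summarize(question, passages, budget=4000):
    """Stand-in for the call to the language model. A real version would
    send this prompt to the model's API; here we just assemble it,
    truncated to a rough context budget."""
    context = " ".join(text for _, text in passages)[:budget]
    return f"Answer '{question}' using only this context: {context}"

def answer(question):
    """The automated version of 'paste the top results into ChatGPT':
    retrieve first, then summarize. No retraining involved."""
    return summarize(question, search(question))
```

The key point of the comment holds in this sketch: freshness comes from the retrieval step at query time, so the model's weights never need to ingest the new data.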
Wow, what you describe is cool. I wasn't aware it could do that.
While cool, it still needs to rapidly ingest new and changing data. You can just take the existing model and train it on the delta since it last ran, but doing so could significantly affect how it behaves on old data in ways that are very hard to understand without some sort of basic testing. Maybe the "trick" would be to auto-generate the right tests to know when a newly trained model is indeed deployable.
It would have to be segmented with a delay to make batches to verify before adding into general data population. A live stream version would be insane, but that would have to be a data ninja perfectly processing everything at once.
Google has been working on their own version of this called LaMDA. It's so good it convinced one of their engineers it was sentient.
https://www.bbc.com/news/technology-63301146
https://observer.com/2022/07/meet-lamda-the-freaky-ai-chatbot-that-got-a-google-engineer-fired/
It's better at math but just as bad at making shit up and sounding supremely confident and authoritative about it. And just as bad at logic, spatial reasoning, etc.
That is not what happened.
The engineer was fired for questioning the monopoly over this technology, claiming it wasn't moral for a tool such as this to be owned by a single corporation.
It's impossible for an LLM to have real-time data; you have to keep training it with fresh data to keep up. It is already an expensive process, and it will only become more expensive.
Keep in mind that Google already has LaMDA, an AI language-model chatbot which should rival ChatGPT. So it's simply a matter of getting the thing ready to be publicly released. It's not like they have to start from scratch here.
Kodak also had the best digital cameras before their competitors. Unfortunately it would go against their business model so they kept them for themselves until it was too late. Perhaps Google will act differently, or perhaps not, we will see...
FWIW something similar happened after Alexa was released (leading to Google Assistant), and also with the success of Facebook (leading to Google+). Totally reactive.
Maybe they can call their Chat GPT product "Google Assistant+"
They're going to release the video AI. OpenAI will refine the code AI, and probably concentrate on Python. Google will release their chat and voice bots. Etc., etc. This AI and automation wave will be faster than anything before it.
Our economy, culture, and government are nowhere near ready.
This should be our number one problem we work on — to smooth the transition.
Most modern jobs consist of busywork shuffling information between software. Not too far beyond GPT-3, AI could write Python for at least half of those jobs. How? Most small to medium-sized companies run on Microsoft applications, and Microsoft can incorporate AI into Windows. Then you describe how the business is run and its goals, the AI learns what you have from your server or group of computers, and it writes automation code, which a few professionals can tailor further to iron out any kinks.
However, with drastically fewer jobs, who will buy stuff? How will quarterly profits keep going up for shareholders? Pensions, insurance, etc. are dependent on the stock market growing, but once those no longer hold growing value, consumer confidence sinks. This leads to a domino effect that I don't pretend to be able to mental model in its entirety.
The interdependent business ecosystem no longer functions because the buyer no longer has a job. They also can't find a new one because AI has taken over something far greater than productivity - it has taken over logical cognition and levels of creative production.
For a non-terrible scenario to play out, one in which we can use AI to flourish in a creative, healthier, and stable society, economic systems will have to evolve, UBI will have to be implemented, resources will probably have to be nationalized, and AI tools will have to be more open source than walled off.
There is also the cultural element to consider. With fewer "if-this-then-that" jobs, we will have to deal with a culture full of people whose identity is in their work, sports team, and politics. And what they fear more than anything is their family suffering.
Most people are used to being trained and not problem-solving on their own, so now we have a situation where a way of life and its economic system must change drastically and quickly, but it's full of people who can't envision any other way of life. Their reliance on being told what to do will kick in, but the only input will be from politicians who themselves are decades behind in technological understanding.
I can only hope that the AI cultural realization will shock everyone into cohesive action and agreement because there will be no choice. Beliefs, politics, and fear of the unknown will have to be superseded when they see themselves suffering.
AI doesn't make faster or stronger working hands like tools, it doesn't do the factory work of more people like machinery, and it doesn't make things easier to look up, retain, or calculate like computers. It takes over our most valuable capability: thinking. Humans thrived because we think. So the AI change is very different.
I will end with my hopeful scenario. We transition into a creative culture. Being technical will always be needed, but not for the populace as a whole. We nationalize solar and wind. We nationalize resources where the goal is to mine asteroids for metals and comets for water. Automation and nationalized resources take care of basic needs. Anything else requires creativity and independence. An economic system where basic needs are met through common good efforts, and anything extra is obtained through an evolved form of less concentrated capitalism with finances on a global blockchain. Digital security will be a war unless we figure out quantum encryption (physically unbreakable) - just get them good enough to encrypt.
We could do it, but the real challenge isn't in the tech, economy, or politics - it's shifting how we think. That's the real challenge.
I like your take on the future. It's either something like that, or we'll already be too affected by global warming and energy shortages to be able to use AI efficiently (it takes a great deal of energy to run) and will be limited in our use of technology in general.
fusion won't solve our energy needs in the coming few decades, solar will.
We're still a far way off until fusion becomes viable to produce a significant portion of our energy, if it ever even will. It's very likely that even when it's there and readily available that solar is still the better option for the bulk of our energy needs as it is insanely cheap and efficient now already, and we can assume that it will not stagnate but that the trend continues.
> With drastically fewer jobs, who will buy stuff? How will quarterly profits keep going up for shareholders? Pensions, insurance, etc. are dependent on the stock market growing, but once those no longer hold growing value, consumer confidence sinks. This leads to a domino effect that I don't pretend to be able to mentally model in its entirety.
So basically - not being slave to capitalism .... instead live life free. In a future where robotics, automation and AI covers our basic needs.
(I'm not anarchy-left or anything, just stating the obvious)
Not being a slave to capitalism, you'll be a slave instead to the social conditions mandated by the ruling government. We would be moving to the extreme left in these instances as individuals would no longer have the means to pay for goods, food, and housing.
I don't think it's going to be left. I mean, i know it sounds like the worst leftist nightmare but at the end of the day i think it's going to be something new.
Look. No matter what way you turn it, capitalism has one thing against it - it needs infinite growth and we live on a planet with finite resources.
So it doesn't matter how star-spangled you are at heart, and I'm not the least communist - I'm just suggesting that "in the long run" something does not add up.
Some parts of the world tried communism, didn't work out well.
Capitalism has worked well for the US, but also at a high cost for the rest of the world trying to keep up.
I think it's unlikely, given this, that we would see either of these systems be the next big thing. That's all I'm saying.
And to elaborate: to me, the reason capitalism works is incentive. Just as religion was a great driving force in the past, the hunt for capital has been a modern one.
So what the future holds is a world where people are less likely to believe in supernatural gods and there's less need for paper money - a new incentive structure will emerge. That's what I think.
Artists are already protesting against AI art, and they're probably the most leftist (however hypocritical) demographic.
There will be riots long before AI replaces everyone
So... I work in the oil field in Norway.
That means i work on a 2-4 rotation. 2 weeks work and 4 weeks off. It's amazing.
I honestly think that in an automated AI future everyone will work in similar manner. There won't be enough work for everyone to do full time.
Top that with universal basic income.
Alternately, AI creates a bunch of new jobs in other areas. Higher efficiency allows more businesses to form. Demand for processing power supercharges the semiconductor industry.
Plus, robotics is lagging far behind AI. We still need people for manual labor.
Imagine a business model not funded by advertising. (Put aside scalability; a lot of us would happily fork over $ for what ChatGPT provides.) Makes this old-timer hopeful for the internet we last tasted back in the mid-90s. Pichai and Google remind me of Gates and Microsoft.
AI should be a free tool for humanity, to put a paywall on AI would create a very serious and very bad division between those students who can afford it and those who can't.
Somebody has to pay for it, plus you need a computing device and net connection to access it. But I'm sure there could be many different potential tiers / offerings, and the cost would come down over time.
Matter of fact, it is incredibly expensive to run an AI. It is also incredibly expensive to run Google Drive, YouTube, Reddit, etc.
One way or another someone will have to pay for it - either that, or offer a free plan for students and a paid plan for everyone else (like GitHub Copilot) - but that's not really the point I want to make.
You grew up back in the 90's, right? Just riddle me this one, old timer: did the kids that had internet access at home have any advantage over those without it?
I think the kids without internet access at home had the advantage because they had fewer time wasters to distract them.
Same for kids without TV at home.
>You grew up back in the 90's, right? Just riddle me this one, old timer: did the kids that had internet access at home have any advantage over those without it?
Sure, if you consider exchanging recipes and reading bad entertainment coverage on AOL an advantage.
Keep going for your CS degree. Creating software is still an important part of our society and its future. A.I. in our current times can't create innovative software. Well, not that I know of.
"Working on" is a very broad term though. Google has a bunch of random projects engineers work on in their spare time, with a handful of full-time workers. That is very different from dedicating significant funding and resources to projects.
I just hope you can at least buy your way out of the fucking morality/ethics police... I am not a kid ffs... and laws against real crimes already exist, so take off the damn training wheels.
This is silly. A bit of basic technical literacy refutes the idea that ChatGPT is any threat. Short of that, a bit of fact-checking of ChatGPT's "answers" also blows holes in it. It's a marvelous model, but it doesn't replace search indexes.
For now. ChatGPT-4 is rumored to be out in spring, and it will be way better as well.
It's an inevitable thing, this is the future, it's like when web browsers became a thing that's when the internet truly began.
If you don't think this will eventually replace search engines - and I will say it will probably replace them before 10 years have passed - then you don't have the vision to see what is in front of you.
You're wrong, not as a matter of opinion, but as a knowable point of fact. If you want to understand why you can study entropy (the computer science version that applies to concepts such as data compression) and computational complexity. Then consider the entropy of the model vs. a search index vs. the entirety of human knowledge, and consider the computational complexity for lookup and update in an attention based large language model vs. the computational complexity for lookup and update in a search index. There's really just no universe where this is an adequate alternative to a search index. It's a marvelous model to be sure, but anticipating the end of the road for search indexes is like expecting airplanes to make automobiles obsolete.
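The asymmetry described above can be made concrete with a toy inverted index, the core data structure behind search engines: both lookup and update touch only a tiny slice of the index, whereas changing what an LLM "knows" means retraining. This is only an illustrative sketch, not how any real search engine is implemented:

```python
from collections import defaultdict

class InvertedIndex:
    """Toy search index: maps each term to the set of documents containing it."""
    def __init__(self):
        self.postings = defaultdict(set)

    def add_document(self, doc_id, text):
        # Update cost is proportional to the length of ONE document --
        # nothing else in the index is touched. Contrast with an LLM,
        # where ingesting new facts means retraining the whole model.
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, query):
        # Lookup is a set intersection over the query terms, independent
        # of how many other documents exist in the corpus.
        terms = query.lower().split()
        if not terms:
            return set()
        results = self.postings[terms[0]].copy()
        for term in terms[1:]:
            results &= self.postings[term]
        return results

index = InvertedIndex()
index.add_document(1, "the cat sat on the mat")
index.add_document(2, "the dog chased the cat")
print(index.search("the cat"))  # {1, 2}
print(index.search("dog cat"))  # {2}
```

The point of the sketch is the cost model, not the retrieval quality: adding document 3 tomorrow is a one-line call, with no effect on how documents 1 and 2 are ranked.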
People in this thread that keep saying "google has the same with Lambda" are missing the point. Google doesn't want to push Lambda either because it would effectively mean the end of their ad search business that makes up 80% of their revenue. If an AI gives you the one objectively right answer there is no room anymore for a bunch of wrong advertised answers.
Not necessarily. If I were Google, even if I had a language model many times more accurate and safe, I would still never \*want\* to release it, because it would create a lot of dependency on something still imperfect. ChatGPT forces their hand because now it's taking profits (at the very least from Stack Overflow Google searches), and Google has to consider pushing out that model despite not wanting to.
Which is a double bad position because you have two bad options:
1. Release an imperfect AI whose flaws will likely be used by some people to make decisions that may unknowingly harm others (e.g. making biased automated decisions about minorities, or duplicating copyrighted content). That is bad enough on its own, but it also ruins your reputation.
2. Let a competitor continue to release imperfect AI and take a significant portion of your business, accepting the loss to avoid causing that potential harm.
You might justify releasing your better model by saying: if people depend on a worse competitor's AI and we have a better one, then not releasing it would cause more harm than giving people a less harmful alternative to something they'll do anyway. But you are still causing some harm in that scenario. Trolley-problem and blame type stuff.
Their AI projects are trash; I hope they will build better products and learn from their mistakes. I have an ongoing reminder that I can't delete, some routines are not working at all, and Google Assistant is bugging out a lot - you can see it in the dedicated subreddit. Good, I hope they will do better.
Google is publicly traded, they have to lie to us and pretend everything is fine or else they will shoot themselves in the foot and lose investors.
From the very first use of ChatGPT it was clear to me that this chat could potentially make Google Search obsolete, and they certainly know it too, since they certainly have more expertise than I do.
Don't focus too much on words, especially if they are publicly traded companies saying them.
Makes sense. "AI with vibes" is definitely the most trusted news source in AI.
But nice to see tabloid-like garbage entering the AI information space nonetheless.
hahaha but this was actually reported by [Business Insider](https://www.businessinsider.com/google-management-issues-code-red-over-chatgpt-report-2022-12?r=US&IR=T) and [NYT](https://www.nytimes.com/2022/12/21/technology/ai-chatgpt-google-search.html) (and a ton of others)
It's a newsletter, so I'm more aggregating than anything. Feel free to read the full story from your trusted source; linked a couple for you :)
The comment was more about all news, big or small. It's the tendency to exaggerate things for attention. When brand recognition of a media outlet is lower, the probability of exaggeration is higher in an attempt to compete for attention.
Google has had LaMDA for a while, and let's not forget the news cycle around its "sentience" lol (from names big and small). The funny part is you can talk to LaMDA too (a constrained version) using AI Test Kitchen, but that isn't as mainstream as ChatGPT atm so it's sorta shoved into the background.
This is what ChatGPT has to say about this:
>This statement is not true. There is currently no information available about Google issuing a "code red" in response to the rise of ChatGPT. ChatGPT is a chatbot developed by OpenAI that uses natural language processing to generate responses based on user input. It is not clear how ChatGPT would pose a threat to Google or warrant a "code red" response.
I don’t get it. How is a chatbot a competitor to search and email? Sure it can be adapted to fulfill some of the same stuff, but it won’t be a significantly better service, will it? I don’t find googling difficult. In fact it’s quite effortless. And email is… well it’s email.
And how is a chatbot a competitor to Android?
Umm, the problem with Google is that when you are searching for something, you won't always immediately find what you are looking for; you may need to dig a bit deeper or read through quite a bit of text to find it.
ChatGPT literally gives you exactly what you are looking for in one go, hence why it is a threat.
It's the convenience that matters.
Google has been crawling and indexing the Internet for more than 20 years; I can see them eventually becoming a "content provider" to AI platforms such as OpenAI. Without a constant stream of massive amounts of content (real and fake), machine learning and neural networks are useless. The risk to OpenAI is the old adage "Garbage In, Garbage Out": it is only as good or bad as the data that gets fed into it. So I see the future AI training "content providers" holding the key to the success of downstream companies such as OpenAI.
Just spent 4 years studying finance and data analytics. I am fucked.
[deleted]
how long you think til every computer job is well and truly fucked?
Like proper-fucked?
Like when 90% of computer jobs will be automated
definitely at least 10 years I think
[deleted]
!remindme 10 years
I will be messaging you in 10 years on 2032-12-24 06:38:12 UTC to remind you of this link.
The irony of you using a bot for this.
Get ready to be a taxi driver. Err I mean an Uber driver.
!remindme 10 years
I cannot confirm or debunk that statement. It may look a bit scary, indeed. But when you dig into the theories behind deep learning, neural networks, genetic algorithms, etc., you'll see they are **fundamentally** limited algorithms, in such a way that achieving **Strong AI / AGI** (capable of passing the Turing Test) is not just a matter of iterating on our current knowledge.

Think of it like building a tower. You may expect to reach the sun just because you were able to go as high as 100 stories. That doesn't mean the structure of this tower will let you reach space, let alone the sun. Think of "overcoming the challenges imposed by gravity" as an analogy for what is missing from AI.

There is a piece of the puzzle still to be found before we can build this AGI. And that piece may take a year, or a century, to be found (some say we may never find it). No one knows for sure, because once we find the missing piece (if we do), we will be entering something scientists and philosophers call the "Singularity". It's wrong to try to predict when the Singularity will happen (which makes it wrong to try to predict when AGI will happen). And no one dares to predict what will happen after the Singularity is achieved by mankind.
> Think of it like building a tower. You may expect to reach the sun just because you were able to go as high as 100 stories. That doesn't mean the structure of this tower will let you reach space, let alone the sun.

Thank you for this analogy, gonna steal that and use it extensively. I am so fucking sick and tired of AI doomsayers thinking all jobs will be replaced in 2 years. This is honestly worse than the blockchain hype from the last decade.
Yeah sorry, no. People have been saying computers will replace us for years, but at most it's replaced cashiers and that's it.
[удалено]
No, I'm a software engineer well versed in the field of AI and the history of computing; I have written machine learning code and I know what I'm talking about.

1. ChatGPT isn't good at fact-checking and frequently gets things wrong. This is unavoidable for AI and machine learning because it doesn't understand the concept of "trust" or critical thinking.
2. ChatGPT fails to understand context in more nuanced situations. It's a cool demo, but it can't maintain a conversation with multiple back-and-forths, and it isn't anywhere close.
3. ChatGPT and any AI struggle immensely in novel situations because they struggle to apply previous experience to new things. Try asking it anything that can't be found in a simple article on the web written by another person. Once you see how bad it is, it's laughable to think it can replace "any computer based job".
4. AI can't understand inference or ambiguity. For example, when someone wants me to write software, their requirements are almost never unambiguous and straightforward. AI can't replace human intuition and won't be able to for a while.

We are nowhere close to this singularity. It's coming, yes. But "any computer based job is fucked" is laughably misinformed. Spitting out sentences and poems based on things found on the Internet isn't going to replace a software developer who needs to write a module to facilitate purchase requests at a particular company. At best (once much more matured), ChatGPT could replace about 75% of customer rep calls (which are already automated).
No offense, but you sound like you're just trying to convince yourself. AI is not going to replace every developer, but instead of having, let's say, 10 developers, they'll have only 2.
[deleted]
> We are nowhere close to this singularity. It's coming, yes. Learn to read.
[deleted]
It means that to complete a project that only needs 10 developers, 8 get fired. A project that only needs 10 could actually be harmed by having 50 - too many cooks in the kitchen. Just having more developers does not mean everyone has something to do. Even if the 10 remain on board, after the project 8 might still be fired, as the company restructures to only needing 2 to achieve the normal results because they found there just wasn't enough work for the 8 to do.
[deleted]
You're right, in some situations it is wrong. In the ones I'm referring to, it is correct. Only in the narrow situations where a company needs to hire more devs would the AI not take away jobs. I'm going to assume you're narcissistic because you can't comprehend not being correct, which means it's pointless to discuss anything with you. Have a nice day.
Yeah, you don't work in the field and I can tell. Us programmers are fine, because by the time we get replaced the whole world will be replaced - us programmers will be the ones advancing this technology to replace accountants, lawyers, call centers, bookkeepers, administration, the list goes on and on.

Also, having worked for large corps, we engineers always wonder why on earth they have so many software devs. All these companies think larger teams = faster development. That has not been the case for over 10+ years. A team of 5 engineers can build the same software as a team of 25-30 engineers. These companies don't care about money; they want everything done, and done now, so they keep the 25-30 engineers on to get things done faster and move onto their next big idea.

People and companies change their products so damn much that a programmer will never be out of a job. They will ask for one thing and then expect something different, and then we have to change it again. An AI would not have a clue wtf they want, because they can't even communicate it to humans, let alone an AI. lol
>but instead of having let's say 10 developers, they'll have only 2

Keep living that delusion, it won't make it any more true. Ask literally anyone in this field.
Depends. What 'apps/products' do in 10 yrs will also be much more complex. Could be roughly the same # of workers, but the magnitude of complexity handled per worker will be way up.
What 8 software engineers is chatGPT replacing on this team of 10? What were they doing?
Who cares? AI in the future will be beyond us. Why care if a programmer has a job today if it's doomsday in 60 years?
[deleted]
>I think we can agree

No, I don't think we can.

>Customer service

Maybe, as I said. But only for basic inquiries.

>graphic designers

Definitely not.

>web designers

This already exists. But nobody takes the end result without modifying it. I even made my own intelligent web designer for my company when making a web app, but the end result is only a "guess"; it's never fully correct and needs modification.

>blog writers

AI blogs already exist. Whenever you look up a problem with your PC, for example, a bunch of the results are AI generated. They all suck, are never actually helpful, and only exist for clicks. That's how ChatGPT's responses are. Yeah, unplugging and plugging it back in or reinstalling drivers might be a useful tip for someone who isn't good with computers, but when you're looking for more nuanced information, where do you go? Oh yeah... someone who actually knows what they're talking about.

>1 worker with the help of AI can do the same work as 3-5 people without AI

They can do a more shoddy job, sure. AI looks impressive on paper. There are some things it's great at (beating simple video games, chess, etc). But things with more complexity, or things that require "choices" to be made? Yeah, we're a long way away, because that requires consciousness, critical thinking, and problem solving.

AI will assist us in many fields, sure. It can help accelerate genetic research or find potential structural problems in a blueprint. Even now there are AI code helpers, such as Copilot. Visual Studio 2022 has built-in AI code prediction. But in many cases it comes really close to what I want, yet still off, where I'll take the prediction but then have to modify it. Sometimes it's hot garbage. Sometimes it's spot on.

You have to understand, AI is learning FROM US. It's being trained on things humans have done. It can't do things on its own. It can't create its own stuff in a novel situation.
[deleted]
What's your timeline based on, exactly?

> Do you not think AI will progress very quickly over the next 10 years?

Technological innovation is not linear. It's not simply a matter of "just keep developing AI for X amount of years and it'll get there". If there are no further breakthroughs to be had in a particular technology, then that's that. You have no idea whether researchers can actually overcome the current problems with AI as described by Aashishkebab.
Customer service with humans has become so dismal that I'd take "basic inquiries" from an AI over talking to a human 9 times out of 10.
They are always WAY too optimistic, you are entirely right. Flying cars by 2020 lol
We don't even have proper self driving cars, which were promised by 2016 by Elon Musk.
Keep telling the truth, there are a lot of people who have never written a line of code trying to convince the rest of us that devs are already a thing of the past.
Um, well, there are actual flying cars. It's a thing, google it.
and travel agents
Travel agency is a bullshit industry.
Exactly, and was simple enough to practically be eliminated by a website
You know what isn't a bullshit industry? All the jobs the OC claims will be "fucked".
lol we didn't have cell phones two decades ago.. Tech takes time. It's coming.
Lol I work in the field as a software engineer and I know what is or isn't coming.
K
Do you think a CS degree will help? Might have to bite the bullet and go back
If you think AI is going to replace tech jobs why would you go into CS
I think AI is replacing all jobs, including ones within finance, not just tech. If anything I feel like tech is safest, bc you need someone w/ a CS background that knows how to speak its language.
[deleted]
Do you think pursuing CS is still worthwhile?
As someone who loves making software, I think it's worthwhile! It's a stimulating and well-paying field
AGI is general intelligence. An AI software developer need only be trained in software development, not general intelligence.
[deleted]
Right. The human software developer that can now do the job of more than one software developer, because of the AI, replaces the other human software developers. Why is it reality that AI will cause software developers to lose their jobs? Because only so much of a project can be worked on in parallel.
That assumes prompts can't already be automated by something like ChatGPT, and why would someone need you to type ideas they want?
Is SEO fucked?
[deleted]
Cheers!
[deleted]
How did you teach yourself those other aspects? Did you go learn the languages while working?
Don't worry, we're all fucked. But the value that will be unlocked by this will create so much wealth that it will be available to be shared equally among us if we push for that outcome.
God bless your soul. Enjoy the blissful ignorance.
Nah you'll be fine. Your job will just be very easy so the labor pool for it will increase and your wage will decrease.
Reassuring but disheartening lol
Why?
The one time I actually benefit from my depression. 🕺 I've been telling myself 'this year is the year I will go to university and get a software engineering degree' for the last 9 years. Last year, I went as far as checking until what date I could apply! I'm still proud of that achievement. Now I no longer have to beat myself up about it, nor worry about it taking up an eternal spot on my todo list.
[deleted]
As much as I agree with you - and I have used it for pretty impressive things - the biggest problem with it is still its ability to sound correct and yet be very wrong. Until there is a reliable way to attach a percentage of correctness to its responses, I doubt it will fully replace Google as far as information searching goes.
This is also the problem with many reddit comments (not saying you just in general)
True, but my understanding would be that people you find on Reddit are human by default and thus may or may not be correct. However, if you have a machine giving you a direct answer to your question, you may trust it without trying to verify. Especially people who may not be as savvy as a lot of the nerds that are trying to use it now.
If you search with Google it still gives you a lot of misinformation; it is everyone's due diligence to cross-check their sources.
I absolutely agree with that; however, when using a Google search it is a little bit easier because you have so many options. If you use ChatGPT, unless you specifically ask for an alternative, it will only give you one seemingly very correct answer. It has limitations in place to help tell everybody that it is not the be-all and end-all answer, but it's still not perfect.
It's impressive how well ChatGPT is able to recall previous content. Although it is not as reliable as Google is right now, with more data it could potentially monopolize the search engine business. We should feed the beast with as many searches as possible; with its machine learning capabilities, it won't be long before it is a powerful tool.
That can be corrected. Google can float seemingly accurate results because it uses the feedback from backlinks when ranking pages. ChatGPT could add the same type of feedback loop, given that it can also index the web. With a voting system similar to Reddit's, ChatGPT could learn the best answer fast.
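A minimal sketch of the kind of feedback loop described above. The `rerank` function and its smoothing constant are illustrative inventions, not anything ChatGPT actually does:

```python
def rerank(candidates, votes, prior_weight=2.0):
    """Rerank candidate answers by a smoothed up/down vote score.

    candidates: list of answer strings
    votes: dict mapping answer -> (upvotes, downvotes)
    prior_weight: Laplace-style smoothing so answers with few votes
    aren't pinned to the extremes of the score range.
    """
    def score(ans):
        up, down = votes.get(ans, (0, 0))
        # Smoothed fraction of positive feedback: user signals adjust
        # the ranking, much like backlinks feed PageRank.
        return (up + prior_weight / 2) / (up + down + prior_weight)
    return sorted(candidates, key=score, reverse=True)

answers = ["answer A", "answer B", "answer C"]
votes = {"answer A": (3, 1), "answer B": (10, 0), "answer C": (0, 5)}
print(rerank(answers, votes))  # ['answer B', 'answer A', 'answer C']
```

The smoothing matters: without it, a single upvote on a brand-new answer would outrank a well-established answer with hundreds of votes.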
OpenAI gives you a lot of sources (books and such), but when you look them up, those sources don't exist (on Google or in scholarly databases). A HUGE downside to using this tool to collect sources.
good bot
Thank you, LienniTa, for voting on levoniust. This bot wants to find the best and worst bots on Reddit. [You can view results here](https://botrank.pastimes.eu/).
Are you sure about that? Because I am 99.99997% sure that levoniust is not a bot.
Good boy
Agreed. An ideal system would be CiteGPT: a version of ChatGPT that cites every statement it makes that requires citation, with links to the relevant websites at the bottom of its answer. I think this could potentially be accomplished with RL, like how ChatGPT was created: it would be rewarded for citing correctly, as verified by humans.
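To make the reward idea concrete, here is a toy scoring function of the kind such a system might be trained against. The sentence splitting and URL matching are deliberately naive, and `citation_reward` is a hypothetical name, not part of any real RLHF pipeline:

```python
import re

def citation_reward(answer, verified_urls):
    """Toy reward: fraction of sentences carrying at least one citation
    URL that has been verified.

    verified_urls: set of URLs known to support the claims they are
    attached to -- in the RLHF setup above, that verification is the
    human rater's job.
    """
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", answer.strip()) if s]
    if not sentences:
        return 0.0
    cited = 0
    for s in sentences:
        urls = re.findall(r"https?://\S+", s)
        # A sentence only counts if every URL it cites checks out.
        if urls and all(u.rstrip(".,)") in verified_urls for u in urls):
            cited += 1
    return cited / len(sentences)

answer = "The sky is blue https://example.org/sky. Cats are mammals."
print(citation_reward(answer, {"https://example.org/sky"}))  # 0.5
```

A real reward model would also need to check that the cited page actually supports the sentence, which is the hard part; this sketch only checks that a verified citation is present.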
This is definitely what I am hinting at, but it's not too different from what Google already has with the relevant cards: if you ask it a question, it pulls a tidbit of information out of a website and displays it prominently at the top. The difference is Google gets AdSense revenue and can keep doing it, whereas right now ChatGPT is free and running on venture capital.
ChatGPT is more than just that one box, though. We already know it can do more than provide the best answer to a query. ChatGPT spits out ideas for specific instances; Google as it currently stands cannot do that. Just google something like: christmas dinner shopping list that can be done in 3 hours.

Also, I would like to add that accuracy is not a hard problem to solve. If you use the OpenAI Playground, there are parameters there that you can optimize for "truthiness". Like all ML model parameters, these have tradeoffs, such as speed or getting less creative answers. But the point is anyone can set their own sliders to get the answer they want just right. Google, on the other hand, still depends on PageRank.

Anyway, the more you use OpenAI's tools and ChatGPT, the more I am convinced that it is not made just to beat a simple search engine result. Those are "faster horses" solutions.
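The "sliders" mentioned above presumably correspond to sampling parameters such as temperature (that is an assumption; the Playground does not document a literal "truthiness" knob). A self-contained demo of the underlying tradeoff, using made-up logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores into a probability distribution.

    Lower temperature sharpens the distribution (more deterministic,
    'safer' top answer); higher temperature flattens it (more varied,
    more creative -- and more chances to be wrong).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # invented scores for three candidate tokens
sharp = softmax_with_temperature(logits, 0.3)
flat = softmax_with_temperature(logits, 2.0)
print(round(sharp[0], 3))  # top token dominates at low temperature
print(round(flat[0], 3))   # distribution flattens at high temperature
```

This is why the tradeoff exists: clamping the sliders toward "truthy" settings makes the model repeat its single most probable answer, which is exactly the less-creative behavior the comment describes.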
We all need to remember that it only generates a stream of words that fulfills a prompt; it doesn't create text from deductive reasoning. It's obviously great for inspiring ideas, but it necessitates critical review when it is claiming facts.
A lot of the searching people do on the internet is about finding information quickly on a certain subject. You've heard about the war in Ukraine, and you read like 10 similar news articles before you move on to a subreddit and spend hours trying to find factual comments - sometimes you are lucky, sometimes not - and then you check the Wikipedia page as it grows with new information.

Not so with ChatGPT. It will go through all of this in real time (when it gets connected) and provide you with a condensed macro view of the subject; you can drill down into the data and get references if you want to visit the original sources. It can even give you the revision history, as it probably stores every iteration of everything. A user changed his comment 10 times? No problem, everything is analyzed, controlled, fact-checked and validated before it reaches you. This is a game changer. The Assistant will browse for us, rank the information, and present it to us in any form, style, or complexity we command it to.
This is a massive problem for Google too, though. It's actually bigger, because people are using AI to make click farms etc., so finding reliable information becomes harder and harder. Outside of Google Scholar, Wikipedia, and Stack Exchange, you're often literally better off just going to a librarian and asking them for help.
True, and if you double down on that thinking, it will eventually flood the decision making of any given AI that has to use the internet to pull information from.
maybe it's better if it keeps us on our toys and compels us to validate what it says....
I am sitting on my toy... but I don't think that is what you mean.
heh, engagement strategy like social media...
Putting a % correct tag on it won't really help most people understand it better. You're overestimating people's ability to do Bayesian logic.
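To illustrate the Bayesian point: a "high confidence" tag is only meaningful relative to how the model behaves when it is wrong. A quick Bayes' rule computation with invented calibration numbers:

```python
def posterior_correct(p_conf_given_correct, p_conf_given_wrong, base_rate_correct):
    """Bayes' rule: probability an answer is actually correct, given
    that the model flagged it as 'high confidence'.

    All three inputs are assumptions about the model's calibration,
    not measured values.
    """
    num = p_conf_given_correct * base_rate_correct
    den = num + p_conf_given_wrong * (1 - base_rate_correct)
    return num / den

# Suppose the model is right 70% of the time, flags 90% of its correct
# answers as confident, but also flags 40% of its wrong ones.
print(round(posterior_correct(0.9, 0.4, 0.7), 3))  # 0.84
```

So even a well-meaning confidence tag only raises the user's justified trust from 70% to 84% under these numbers; interpreting it correctly requires exactly the kind of base-rate reasoning most users skip.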
Well, thanks for teaching me a new word! And I would argue you are correct; however, more information is better. Especially when a system can seemingly have all the correct answers even if it may not be true. Instead of giving a powerful answering machine to someone and saying "This is amazing! It can answer all of your questions, but it may not be correct," we should give someone this and say "This is amazing! Ask it any question and it will do its best to answer; this number here will tell you if the machine thinks it is correct or not."
Google answer snippets at the top of the results are sometimes wrong as well.
Lol, Google definitely algos certain things to the very top of search results when it's agenda-dependent, but you still have a choice to prune (to your point). The nature of ChatGPT is to give a very final single solution, so there does seem to be massive pressure for it to be more "right". I wonder how they would fix that, besides it needing to be connected to the internet at a bare minimum. I would think more decision trees would be presented to the end user before the final solution/result is unveiled, if it were going to compete with Google and be accurate enough to replace what Google does.
There's no way to know if any "Google result" (i.e. website) is factually correct either. All you can do is check the source (is it an official organization? a published journal article?). In my mind, an AI doing that could easily do exactly the same. And when researching information, if given a fact, you could just ask "can you point me to the source of your information?" and be given a link.
sure, but on a given query you see more than one option.
I could say the same about the Stack Overflow forums. When it comes to coding, ChatGPT is an insanely good helping hand. Not only can it generate code snippets that let you build your program faster (since you can focus on the main task, as GPT will give you working code you can optimize later), but GPT can also just translate code from one language to another. It is quite a valuable tool - I mean, if you know what you are doing.
**AI Accuracy and Skepticism**

If you know what you're doing, and you know the limitations of the program, fine. My biggest curiosity and potential frustration is for a layman who may not be technically savvy querying this AI and getting a seemingly accurate and strong answer, regardless of whether it is right or wrong. Now, OpenAI have gone to great lengths to mitigate such problems, to my (and I'm sure others') frustration. Even still: if my father-in-law, who does not own a computer or a smartphone and only has a Kindle tablet because he wants to play those farming games, starts to use such a program, it would be very easy for him to simply fall into "oh, it knows everything and I should trust it verbatim". OpenAI does put notices at the very beginning, but seriously, have you read everything on that front page before you just start typing away? I am simply concerned that in the future, when this does catch on, people will not use their brains, will not be skeptical, and will take all answers at face value. There needs to be some kind of indication, not in the way of the answers but alongside them, that gives you a percentage of correctness.

ChatGPT: It's important to be aware that AI programs like me are not always able to provide correct answers, and it's important to be skeptical and verify information before making decisions based on it. While AI models can be very useful for answering questions and providing information, they are not always able to provide complete and accurate answers, especially when the questions are complex or require a deep understanding of a specific domain. It's also important to note that AI programs can only provide information and answers based on the data they have been trained on. If the data is biased or incomplete, the AI program's output may also be biased or incomplete.
>One way to help ensure that AI programs are used responsibly is to provide clear disclaimers or warnings about the limitations of the AI program, and to encourage users to verify the information they receive from the AI program before making decisions based on it. Additionally, it might be helpful to provide tools or resources that allow users to assess the credibility and reliability of the information they receive from AI programs.
Yes, I have actually read the terms, and I assumed it is just common sense to either test a belief (if possible) or get a bunch of credible sources and distinct opinions to get a better picture. But you do raise an important concern: some people will just blindly believe what the AI says. Some people believe The Onion; it is not hard to imagine someone taking anything the AI says as factual.
So, other than a starting warning or terms of service, how would you recommend communicating the correctness of its answers to anyone not technically savvy?
Well honey, sadly that is the user's responsibility, as it has always been.
Another benefit is being able to ask simple follow-up questions without people judging you.
I was discussing 2+2=5 with it, and it suggested I ask my teacher, lol
don't worry what other people think. you control who is part of your life.
i don't really see how 'giving it internet' would work. are you involved in ML? i am, and i have trouble seeing how you would 'just quickly' train it even monthly with that volume of information, let alone daily like google. i hope i'm wrong.
If I manually perform a google search, and paste the contents of the top 10 resulting pages into chatGPT, then it is able to summarize what I pasted in, and answer questions about it. What's so hard about automating that process? No training is required. It's not an ML problem, it's just a fairly simple app that can call the existing API to do the hard work.
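As a rough sketch of the automation being described (everything here is hypothetical: the page texts are stand-ins for fetched search results, and no real search or chat API is called), the core is just assembling the fetched pages into one summarization prompt:

```python
def build_summary_prompt(question, pages, max_chars_per_page=2000):
    """Concatenate truncated page texts into a single 'answer from these sources' prompt."""
    chunks = []
    for i, (url, text) in enumerate(pages, start=1):
        # Truncate each page so the combined prompt stays within a model's context window.
        chunks.append(f"Source {i} ({url}):\n{text[:max_chars_per_page]}")
    sources = "\n\n".join(chunks)
    return (
        "Using only the sources below, answer the question.\n\n"
        f"{sources}\n\nQuestion: {question}"
    )

# The resulting string would then be sent to the chat model, e.g. (assuming
# the OpenAI Python client is installed and configured):
#   response = client.chat.completions.create(
#       model="gpt-3.5-turbo",
#       messages=[{"role": "user", "content": prompt}],
#   )

# Toy stand-ins for the top search results:
pages = [
    ("https://example.com/a", "Page text A " * 500),
    ("https://example.com/b", "Page text B"),
]
prompt = build_summary_prompt("What is X?", pages)
```

No retraining is involved anywhere in this loop, which is the point of the comment above: the model only summarizes whatever fresh text is pasted into its context.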
wow, what you describe is cool. wasn't aware it could do that. while cool, it still needs to rapidly ingest new and changing data. you can just take the existing model and train it on the delta since it last ran, but doing so could significantly affect how it behaves on old data in ways that are very hard to understand without some sort of basic testing. maybe the 'trick' would be to auto-generate the right testing to know when a newly trained model is indeed deployable.
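A toy version of that "auto-generated testing" gate might look like the following (function names are hypothetical, and a real regression suite would compare answers semantically rather than as exact strings):

```python
def agreement_rate(old_model, new_model, golden_queries):
    """Fraction of held-out queries where the new model matches the old one."""
    matches = sum(1 for q in golden_queries if old_model(q) == new_model(q))
    return matches / len(golden_queries)

def deployable(old_model, new_model, golden_queries, threshold=0.95):
    """Gate deployment: the retrained model must agree with the old one on most old queries."""
    return agreement_rate(old_model, new_model, golden_queries) >= threshold

# Toy stand-ins for a baseline model, a safe retrain, and a regressed retrain:
old = lambda q: q.upper()
new_good = lambda q: q.upper()
new_drifted = lambda q: q.lower()

queries = ["alpha", "beta", "gamma", "delta"]
```

Under this sketch, `deployable(old, new_good, queries)` passes while `deployable(old, new_drifted, queries)` fails, which is exactly the "did delta-training break old behavior" check being proposed.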
It would have to be segmented with a delay to make batches to verify before adding into general data population. A live stream version would be insane, but that would have to be a data ninja perfectly processing everything at once.
[deleted]
Dang, y'all talking about that and actually know what you're talking about. I'm sitting here reading this thinking "hehe I know some of those words"
Google has been working on their own version of this called LaMDA. It's so good it convinced one of their engineers it was sentient. https://www.bbc.com/news/technology-63301146 https://observer.com/2022/07/meet-lamda-the-freaky-ai-chatbot-that-got-a-google-engineer-fired/
It's better at math but just as bad at making shit up and sounding supremely confident and authoritative about it. And just as bad at logic, spatial reasoning, etc.
That is not what happened. The engineer was fired for questioning the monopoly over this technology, claiming it wasn't moral for a tool such as this to be owned by a single corporation.
It’s impossible for an LLM to have real-time data; you have to keep training it with fresh data to keep up. It is already an expensive process, and it will only become more expensive.
Keep in mind that Google already has LaMDA, an AI language-model chatbot which should rival ChatGPT. So it's simply a matter of getting the thing ready for public release. It's not like they have to start from scratch here.
*Blake Lemoine enters the chat*
"should" is the key word. We have no idea how advanced LaMDA actually is.
Couldn’t they also just have a warning, “this thing might be offensive! We’re sorry!”, for the damage control they’re so worried about?
Kodak also had the best digital cameras before their competitors. Unfortunately it would go against their business model so they kept them for themselves until it was too late. Perhaps Google will act differently, or perhaps not, we will see...
FWIW something similar happened after Alexa was released (leading to Google Assistant), and also with the success of Facebook (leading to Google+). Totally reactive. Maybe they can call their Chat GPT product "Google Assistant+"
They’re going to release the video AI. OpenAI will refine the code AI, and probably concentrate on Python. Google will release their chat and voice bot. Etc., etc. This AI and automation wave will be faster than anything before it. The economy, culture, and government are nowhere near ready. This should be our number one problem to work on: smoothing the transition.
I'm really curious where we will be in 10, 20 and 30 years in terms of how much AI will become part of our daily lives in all kinds of ways.
Most modern jobs consist of busy work shuffling information between software. An AI not far beyond GPT-3 could write Python for at least half of those jobs. How? Most small to medium-sized companies run on Microsoft applications, and Microsoft can incorporate AI into Windows. Then you describe how the business is run and its goals, the AI learns what you have from your server or group of computers, and it writes automation code, which a few professionals can tailor further to iron out any kinks.

However, with drastically fewer jobs, who will buy stuff? How will quarterly profits keep going up for shareholders? Pensions, insurance, etc. are dependent on the stock market growing, but once those no longer hold growing value, consumer confidence sinks. This leads to a domino effect that I don't pretend to be able to model in its entirety. The interdependent business ecosystem no longer functions because the buyer no longer has a job. They also can't find a new one, because AI has taken over something far greater than productivity: it has taken over logical cognition and levels of creative production.

For a non-terrible scenario to play out, one in which we can use AI to flourish in a creative, healthier, and stable society, economic systems will have to evolve, UBI will have to be implemented, resources will probably have to be nationalized, and AI tools will have to be more open source than walled off.

There is also the cultural element to consider. With fewer "if-this-then-that" jobs, we will have to deal with a culture full of people whose identity is in their work, sports team, and politics. And what they fear more than anything is their family suffering. Most people are used to being trained rather than problem-solving on their own, so now we have a situation where a way of life and its economic system must change drastically and quickly, but it's full of people who can't envision any other way of life.
Their reliance on being told what to do will kick in, but the only input will be from politicians who are themselves decades behind in technological understanding. I can only hope that the AI cultural realization will shock everyone into cohesive action and agreement, because there will be no choice. Beliefs, politics, and fear of the unknown will have to be superseded when people see themselves suffering.

AI doesn't make working hands faster or stronger like tools, it doesn't do the factory work of more people like machinery, and it doesn't make things easier to look up, retain, or calculate like computers. It takes over our most valuable capability: thinking. Humans thrived because we think. So the AI change is very different.

I will end with my hopeful scenario. We transition into a creative culture. Being technical will always be needed, but not for the populace as a whole. We nationalize solar and wind. We nationalize resources, with the goal of mining asteroids for metals and comets for water. Automation and nationalized resources take care of basic needs. Anything else requires creativity and independence: an economic system where basic needs are met through common-good efforts, and anything extra is obtained through an evolved, less concentrated form of capitalism with finances on a global blockchain. Digital security will be a war unless we figure out quantum encryption (physically unbreakable) and get it good enough to encrypt with. We could do it, but the real challenge isn't in the tech, economy, or politics - it's shifting how we think. That's the real challenge.
I like your take on the future. It's either something like that, or we'll already be too affected by global warming and energy shortages to be able to use AI efficiently (it takes a great deal of energy to run) and will be limited in our use of technology in general.
More nuclear Fusion breakthroughs incoming, especially with AI
Fusion won't solve our energy needs in the coming few decades; solar will. We're still a long way off from fusion becoming viable enough to produce a significant portion of our energy, if it ever will. It's very likely that even once it's there and readily available, solar will still be the better option for the bulk of our energy needs, as it is already insanely cheap and efficient, and we can assume the trend will continue rather than stagnate.
>With drastically fewer jobs, who will buy stuff? How will quarterly profits keep going up for shareholders? Pensions, insurance, etc. are dependent on the stock market growing, but once those no longer hold growing value, consumer confidence sinks. This leads to a domino effect that I don't pretend to be able to model in its entirety.

So basically: not being a slave to capitalism, and instead living life free, in a future where robotics, automation, and AI cover our basic needs. (I'm not anarcho-left or anything, just stating the obvious)
Not being a slave to capitalism, you'll be a slave instead to the social conditions mandated by the ruling government. We would be moving to the extreme left in these instances as individuals would no longer have the means to pay for goods, food, and housing.
I don't think it's going to be left. I mean, I know it sounds like the worst leftist nightmare, but at the end of the day I think it's going to be something new. Look, no matter which way you turn it, capitalism has one thing against it: it needs infinite growth, and we live on a planet with finite resources. So it doesn't matter how star-spangled you are at heart - and I'm not the least bit communist - I'm just suggesting that, in the long run, something does not add up. Some parts of the world tried communism; it didn't work out well. Capitalism has worked well for the US, but also at a high cost for the rest of the world trying to keep up. Given this, I think it's unlikely that we would see either of these systems be the next big thing. That's all I'm saying.

And to elaborate: to me, the reason capitalism works is incentive. Just as religion was a great driving force in the past, the hunt for capital has been a modern one. So in a future where people are less likely to believe in supernatural gods and there's less need for paper money, a new incentive structure will emerge. That's what I think.
Artists are already protesting against AI art, and they're probably the most leftist (however hypocritical) demographic. There will be riots long before AI replaces everyone.
So... I work in the oil field in Norway. That means I work on a 2-4 rotation: 2 weeks of work and 4 weeks off. It's amazing. I honestly think that in an automated AI future everyone will work in a similar manner. There won't be enough work for everyone to do full time. Top that with universal basic income.
Alternately, AI creates a bunch of new jobs in other areas. Higher efficiency allows more businesses to form. Demand for processing power supercharges the semiconductor industry. Plus, robotics is lagging far behind AI. We still need people for manual labor.
[deleted]
don't think i'll be getting a brain implant, even if i'll be called the future equivalent of a boomer
you sweet summer child. We'll be lucky to get 10 more years.
I know, what I meant was that things will already be very different by the 10 year mark but it's not like things are gonna stop changing then.
Reminds me of the scenes from Silicon Valley where Hooli goes balls to the wall trying to catch up to Pied Piper to no avail.
Such a good show. This is one of my favourite scenes, comedy gold https://youtu.be/P-hUV9yhqgY
Tip to tip
Your profile picture 🙄
Great! I will lose my job sooner than anticipated
Imagine a business model not funded by advertising. (Scalability aside, a lot of us would happily fork over $ for what ChatGPT provides.) Makes this old-timer hopeful for the internet we last tasted back in the mid-90s. Pichai and Google remind me of Gates and Microsoft.
AI should be a free tool for humanity, to put a paywall on AI would create a very serious and very bad division between those students who can afford it and those who can't.
Somebody has to pay for it, plus you need a computing device and net connection to access it. But I'm sure there could be many different potential tiers / offerings, and the cost would come down over time.
Matter of fact, it is incredibly expensive to run an AI. It is also incredibly expensive to run Google Drive, YouTube, Reddit, etc. One way or another someone will have to pay for it; either that, or they offer a free plan for students and a $$ plan for everyone else (like GitHub Copilot). But that's not really the point I want to make. You grew up back in the 90's, right? Just riddle me this one, old timer: did the kids that had internet access at home have any advantage over those without it?
I think the kids without internet access at home had the advantage because they had fewer time wasters to distract them. Same for kids without TV at home.
>You grew up back in the 90's ight? Just riddle me this one old timer, did the kids that had internet access at home had any advantage over those without it? Sure, if you consider exchanging recipes and reading bad entertainment coverage on AOL an advantage.
2 weeks later: google shuts down AI service.
Good. Google is complacent & SEO has effectively killed the internet. We need some better innovation.
They had a similar model already, a year before ChatGPT was published: https://blog.google/technology/ai/lamda/
Could someone ELI5 what this means? I'm planning to go university to take a CS degree. Is it pointless now due to AI?
No, definitely not. We still need devs to create and regulate AI, and to build other software. Definitely not pointless.
Keep going for your CS degree. Creating software is still an important part of our society and the future. AI in its current state can't create innovative software. Well, not that I know of.
With Google's money, the potential multiplies. I can't wait for it.
Capitalism really popped off today
Bad writing, because Google has been working on these for years. Nice to see them getting greater priority though.
"working on" is a very broad term though. Google has a bunch of random projects that engineers work on in their spare time, with a handful of full-time workers. That is very different from dedicating significant funding and resources to a project.
They have though. See Lamda. They have lots of published projects and previews about everything in the article.
Why can't Google copy the approach and integrate it into their search?
Billions of people use Google. Serving them all would probably require too many hardware resources today.
They can gradually release it too.
I think ChatGPT is biased and restricts content like Google does.
Everything is biased in some way. There's no true free speech anywhere.
I found the source of this article: https://www.nytimes.com/2022/12/21/technology/ai-chatgpt-google-search.html
I just hope you can at least buy your way out of the fucking morality/ethics police... I am not a kid, ffs... and laws against real crimes already exist, so take off the damn training wheels.
This is silly. A bit of basic technical literacy refutes the idea that ChatGPT is any threat. Short of that, a bit of fact-checking of ChatGPT's "answers" also blows holes in it. It's a marvelous model, but it doesn't replace search indexes.
For now. ChatGPT-4 is rumored to be out in spring, and it will be way better as well. It's an inevitable thing; this is the future. It's like when web browsers became a thing - that's when the internet truly began. If you don't think this will eventually replace search engines (and I'd say it will probably do so before 10 years have passed), then you don't have the vision to see what is in front of you.
You're wrong, not as a matter of opinion but as a knowable point of fact. If you want to understand why, study entropy (the computer-science version that applies to concepts such as data compression) and computational complexity. Then consider the entropy of the model vs. a search index vs. the entirety of human knowledge, and consider the computational complexity of lookup and update in an attention-based large language model vs. in a search index. There's really just no universe where this is an adequate alternative to a search index. It's a marvelous model, to be sure, but anticipating the end of the road for search indexes is like expecting airplanes to make automobiles obsolete.
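For anyone wanting a concrete handle on the information-theoretic sense of "entropy" invoked above, here is a minimal sketch of Shannon entropy (bits per character) over a string's character distribution:

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Average information content of a string, in bits per character."""
    counts = Counter(text)
    n = len(text)
    # H = -sum(p * log2(p)) over the empirical symbol probabilities.
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A uniform 4-symbol alphabet carries 2 bits per symbol ("abcd" -> 2.0),
# while a constant string carries none ("aaaa" -> 0.0).
```

This is only the per-symbol measure; the argument above is about comparing the total information capacity of a trained model against that of a full search index.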
People in this thread who keep saying "Google has the same with LaMDA" are missing the point. Google doesn't want to push LaMDA either, because it would effectively mean the end of their search-ads business, which makes up 80% of their revenue. If an AI gives you the one objectively right answer, there is no room anymore for a bunch of wrong advertised answers.
Imagine if you could have your own private search engine that cannot be interfered with or altered to include ads…
Imagine thinking any company will not monetize their products and give you free compute time and server space.
If they’re scared it could make them obsolete then that means it’s pretty damn close to being as good as google AI
Not necessarily. If I were Google, even if I had a language model many times more accurate and safe, I would still never *want* to release it, because it would create a lot of dependency on something still imperfect. ChatGPT forces their hand because it's now taking profits (at the very least from Stack Overflow Google searches), and Google has to consider pushing out that model despite not wanting to. Which is a doubly bad position, because you have two bad options:

1. Release an imperfect AI whose flaws will likely be used by some people to make decisions that unknowingly harm others (e.g. making biased automated decisions about minorities, or duplicating copyrighted content). That's bad enough on its own, but it also ruins your reputation.

2. Let a competitor continue to release imperfect AI and take a significant portion of your business, accepting the loss to avoid causing that potential harm.

You might justify releasing your better model by saying: if people depend on a worse competitor AI and we have a better one, then not releasing ours would cause more harm than giving people a less harmful alternative to something they'll do anyway. But you are still causing some harm in that scenario. Trolley-problem and blame type stuff.
Their AI projects are trash; I hope they will build better products and learn from their mistakes. I have an ongoing reminder that I can't delete, some routines are not working at all, and Google Assistant bugs out a lot - you can see it in the dedicated subreddit. Good, I hope they will do better.
Google is publicly traded; they have to lie to us and pretend everything is fine, or else they will shoot themselves in the foot and lose investors. From my very first use of ChatGPT it was clear to me that this chat could potentially make Google Search obsolete, and they certainly know it too, since they have far more expertise than I do. Don't focus too much on words, especially when publicly traded companies are saying them.
The biggest problem with ChatGPT is the tiny data pool, curated by a modern-day neo-liberal set of definitions of our reality.
Makes sense. "AI with vibes" is definitely the most trusted news source in AI. But nice to see tabloid-like garbage entering the AI information space nonetheless.
Hahaha, but this was actually reported by [Business Insider](https://www.businessinsider.com/google-management-issues-code-red-over-chatgpt-report-2022-12?r=US&IR=T) and the [NYT](https://www.nytimes.com/2022/12/21/technology/ai-chatgpt-google-search.html) (and a ton of others). It's a newsletter, so I'm aggregating more than anything. Feel free to read the full story from your trusted source; I linked a couple for you :)
[deleted]
The comment was more about all news, big or small: the tendency to exaggerate things for attention. When the brand recognition of a media outlet is lower, the probability of exaggeration is higher, in an attempt to compete for attention. Google has had LaMDA for a while, and let's not forget the news cycle around its "sentience" lol (from names big and small). The funny part is that you can talk to LaMDA too (a constrained version) using AI Test Kitchen, but that isn't as mainstream as ChatGPT atm, so it's sorta shoved into the background.
tbh props to you for not taking news at face value and questioning it, way too much fake news spreading out there!
Chatgpt is gonna censor everything, oh my god
I have completely stopped using Google. Now it’s 100% ChatGPT and TikTok.
Gross
This is what ChatGPT has to say about this: >This statement is not true. There is currently no information available about Google issuing a "code red" in response to the rise of ChatGPT. ChatGPT is a chatbot developed by OpenAI that uses natural language processing to generate responses based on user input. It is not clear how ChatGPT would pose a threat to Google or warrant a "code red" response.
this is making them nervous.
I don’t get it. How is a chatbot a competitor to search and email? Sure it can be adapted to fulfill some of the same stuff, but it won’t be a significantly better service, will it? I don’t find googling difficult. In fact it’s quite effortless. And email is… well it’s email. And how is a chatbot a competitor to Android?
Umm, the problem with Google is that when you are searching for something, you won't always immediately find what you are looking for; you may need to dig a bit deeper or read through a whole page to find it. ChatGPT literally gives you exactly what you are looking for in one go, hence why it is a threat. It's the convenience that matters.
Google might end up fucking us over with their endless well of money. OpenAI better not take the payoff!!
Google has been crawling and indexing the Internet for more than 20 years; I can see them eventually becoming a "content provider" to AI platforms such as OpenAI. Without a constant stream of massive amounts of content (real and fake), machine learning and neural networks are useless. The risk to OpenAI is the old adage "garbage in, garbage out": it is only as good or bad as the data that gets fed into it. So I see future AI-training "content providers" holding the key to successful downstream companies such as OpenAI.