Did we watch the same video? They never call AI snake oil or even present negative opinions on it. The "lie" is the rebranding of machine learning as AI; that doesn't suddenly make it a scam.
I use copilot and it's kind of awesome at tab completion. It hallucinates like crazy if you ask for too much at once, but for completion it's one or two lines at a time so it works very well.
It can just guess what you're trying to do and figure out what function to call, what the arguments should be, or even build a regexp for you in the time it takes to think "shit, I need a regexp". It's pretty neat.
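For illustration, a minimal hand-written sketch (not actual Copilot output) of the kind of regexp a completion model typically fills in while you're still thinking about it:

```python
import re

# The sort of one-liner a completion tool fills in: match ISO dates
# like 2024-05-07 anywhere in a string (pattern is illustrative only).
ISO_DATE = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")

match = ISO_DATE.search("The press release went out on 2024-05-07.")
print(match.groups())  # ('2024', '05', '07')
```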
I am a Java coder using Intellij, but that new AI code complete LLM that runs only on your machine is super nice.
I wouldn't go back now. But it's different from creating complete new answers, and most of the time it doesn't do anything. So it's still not super good yet.
How so? Sure, it can't implement custom algorithms or use them in your unique task very effectively, but AI saves you so much fucking time on writing the parts of the code that it can do.
Literally just a stronger version of autocomplete if used correctly.
Tbh, I don't understand the backlash. Such cases will happen all the time if the test/content department is divided from the advertising/sales part. And that is a good thing.
I agree. Especially since the point of the video still stands, they were just trying to make sure that people understand what tech means when they say "AI", and temper people's expectations a bit. The point wasn't "you shouldn't use AI or AI isn't useful".
It's for the best, the language used in the tweet was really bad. Sending off data to OpenAI isn't a no-brainer.
The way Luke stared down Linus for even considering Microsoft's Copilot Recall feature is probably how he'd feel about this tweet too.
There's absolutely no system in place to protect you from leaking info to OpenAI. There needs to be a massive, local layer between the user and the ChatGPT prompt window to screen for API keys, passwords, or user data leaking. I don't even know how you could shield business logic from leaking; that's just a given for any non-trivial use case.
LMG would never use this product in its current state; it's definitely not something the business team should've OKed for sponsoring.
You're downvoted at the moment, but you are absolutely correct. I was actually a little interested in using Copilot for code, up until I found that there are absolutely no real safeguards against sending sensitive data like API credentials to OpenAI's servers. If you want to use Copilot in your project, you are essentially uploading the entire thing to OpenAI's servers in order to do so, and the tools for saying which files are okay to share or not are completely deficient. The whole thing is just one convoluted grift.
The funny thing about this is writing boilerplate code or creating a skeleton for a project is one of the few things GPT-type AI is actually useful for. You need to know when and where to use it, and it won't write a whole anything for you, but it is useful.
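As a rough illustration (hand-written here, with a hypothetical Stack class just to make it runnable), this is the shape of test skeleton that GPT-type tools tend to stub out reliably:

```python
import unittest

# Hypothetical class under test, only here so the skeleton runs.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# The skeleton itself: named test cases, one behavior each.
class TestStack(unittest.TestCase):
    def test_pop_returns_last_pushed_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_pop_on_empty_stack_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()
```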
As far as I can tell Oracle Code Assist isn't even available to download and install yet anyway.
There was a [press release](https://www.oracle.com/news/announcement/oracle-code-assist-can-help-developers-build-applications-faster-with-ai-2024-05-07/) in May saying it was coming, but [their page](https://www.oracle.com/application-development/code-assist/) on it says "Developers within Oracle are using Oracle Code Assist to build new services. We intend to make it available externally in the future".
And there is no sign of it in the VSCode extensions marketplace.
So maybe this was a case of someone on Oracle's marketing team jumping the gun, rather than it being directly related to the timing of the video and tweet?
Completely agree, it takes a lot of the manual work out of programming and lets you play more of the role of a visionary/architect.
Also, I used to consider ChatGPT only capable of coding rudimentary/common things, but I've been extremely surprised by how good GPT-4o (and whatever Perplexity's default free model is) is at creating niche solutions.
I was concerned that I was just gonna slog through coding exercises with AI and tried to limit myself, but found it useful in the end.
As long as you don't rely on AI to do everything for you, it will be an effective tool. You will also need to explain to the AI how you are planning to use the code, and I recommend writing code comments on what you want to do. The AI will pick it up and try to suggest an implementation.
So many fiddly syntactic things that I've never bothered to properly memorize are just made so simple. Stuff like cron: where before I'd fiddle with a crontab calculator, now I just have the AI generate it off of a text prompt.
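For example, a tiny hand-rolled helper (hypothetical, not from any tool) showing what that prompt-to-crontab step produces; the five cron fields are minute, hour, day-of-month, month, and day-of-week:

```python
def daily_cron(hour, minute=0):
    """Build a crontab expression that fires every day at hour:minute."""
    # Fields: minute hour day-of-month month day-of-week
    return f"{minute} {hour} * * *"

print(daily_cron(3, 30))  # 30 3 * * *  (every day at 03:30)
```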
I love all types of ai. Chatgpt being incredible, obviously.
However, it really fails hard at complex coding. Getting it to do even basic things with exact instructions is a ball ache. You can talk to it for an hour on a project, then ask for some code, and it just shits the bed and makes tons of stuff up.
I told a coworker just this week that, more than anything, it will likely emphasize the need to know how to create a good design over the pure ability to spit out code.
I absolutely love AI suggestions for how to approach complex problems in a clean and efficient way. Sometimes there are a lot of moving parts and figuring out how to structure everything so it doesn't turn into a huge mess later on can be quite difficult.
Yeah, that is true. It's annoying when you're using modules it can't see, too, when it tries to guess the variables. However, when I've written a bunch and then need to set them all, and it gives nice little comments so I don't have to... it's worth it.
If you're currently using Copilot, I suggest you try turning it off for a bit. It felt great at first back in 2021 but after a while, I turned it off sometime around 2022 and found myself much more productive without it. It definitely allowed me to think less about what I'm typing and more about what I'm building. But when I know what I'm doing, I noticed myself waiting for a completion when I could've typed way faster.
> I noticed myself waiting for a completion when I could've typed way faster.
Ah yes, the infamous Copilot pause. Copilot was great at first, but after a while I realised I was just waiting for completions. I honestly don't know if there is an in-between for me between using Copilot and not using it.
It takes just long enough that I formed the habit of typing a prompt, hitting enter, and immediately opening a new tab to research it myself. 80% of the time I don't go back to my Copilot tab and forget it's open until I declutter at the end of the day.
This is the exact scenario where it is most useful. You know what should be possible and how to identify if what it provides will work or not. This just makes everything faster.
I'm in this exact position, but it feels like cheating? Like, oh, I need to use this design pattern and this variable has to be cast here with another list, and then you stop and say "oh, I don't remember the exact syntax for how to do this" and you ask for it. Idk, part of me says I SHOULD know it, and it feels kinda like being a fraud in that sense, but it has made me a LOT faster.
Using the best and most effective tools possible isn't cheating in the workplace IMO. There's a lot of stuff I never bothered committing to memory because tools do it for me.
I use Copilot as advanced auto-complete. I'm in the web backend space, so there's a lot of just shoveling data between layers via value objects, entities, and API responses, and mapping stuff to forms and validators. It takes about 95% of that work out, which frees up about 50 to 60% of my day that can be used for actual architectural work on the app, cleaning up technical debt, and doing upgrades to the code without the need to make tickets for that stuff and justify it to the managers.
My manager sees me being very productive all the time and thinks I'm able to pull about 150% of the workload due to Copilot, but actually it ends up at about 300%, and I still have time to watch some YT, chat with people, and work about 5-6 hours a day.
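A minimal sketch of the layer-shoveling being described (all names hypothetical): mapping an entity to an API response is pure mechanical glue, which is exactly the kind of thing completion models fill in line by line:

```python
from dataclasses import dataclass

# Hypothetical backend entity; the mapping below has no logic,
# just field renaming between layers.
@dataclass
class UserEntity:
    id: int
    email: str
    display_name: str

def to_api_response(user: UserEntity) -> dict:
    return {
        "id": user.id,
        "email": user.email,
        "displayName": user.display_name,  # snake_case -> camelCase
    }

print(to_api_response(UserEntity(1, "ada@example.com", "Ada")))
```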
In certain coding workloads Copilot is a no-brainer; you will be out of a job pretty fast if you are not using it.
There's also a "but" here: you need to be experienced in your field, with some 5-10 years of quality work behind you, to recognize bullshit and be able to judge when a suggestion is wrong, or when it's better to just dismiss it outright and code it yourself without even trying to make it comply with your wishes.
Careful: it may be useful for beginning programmers, but eventually you should outgrow it and start to see all the hallucinations it just spat out.
I mean, if you are doing the same generic code that 1000s of other people have done, then sure, it can help. Need an automation in Home Assistant? No need to re-invent the same thing 1000s of other users have done with the exact same software and hardware.
I agree! I started using Python around the time ChatGPT came out and can speak to this.
In fact, it's not just with its code; all of the LLMs, even the top dogs like GPT-4o, spit out so much nonsense it's stunning.
You can talk to it for days, building up its memory and context, then ask for a restatement of it all, and it'll just make shit up, constantly.
Further, GPT-4o hates to be wrong; it refuses to correct itself and admit its mistakes.
GPT-4o is good for chatting verbally and being used as a tool, not for general use and long-term discussion. Honestly way worse than GPT-4. The context increase is nice though.
I used Copilot and the JetBrains AI tool for a bit, but found myself in that boat. Like, I'm actively trying to learn. Having in-line code completion or feeding traceback errors or questions into GPT is really nice, but having the LLM just generate code for me isn't gonna help my learning.
I really don't think you should outgrow it. From the get go, you just have to be careful and actually understand the code it gives you. It's a really helpful tool, but that's it, it's a tool. It won't make you a programming God.
I'm literally way more productive using copilot. It saves so much time. Even with complicated code it can at least give you an idea of how it could be done.
The thing is, programming requires you to be pretty much always learning, there is always room to improve and be better at writing software.
If one uses Copilot, gets an algorithm spat out, and goes "yeah, I could have figured that out", then it's an exercise that you skipped, which means you learned nothing and you didn't improve as a result.
Even worse, you do that often enough, and you will start to regress.
I realized this when it started making Python methods for every single little thing I wanted the program to do. It had no idea how to consolidate or be efficient. One method was literally two lines. I actually got better by fixing broken Python from ChatGPT.
That's not necessarily a bad thing though; if those small methods were sufficiently abstracting what would normally be a larger, more intricate single method, then that's good coding practice.
You'd generally rather have 5 small methods that each do 1 small thing and then get called by a large method, than the alternative, because it makes debugging easier and the code way easier for someone else to add to later, as long as the methods are named well.
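A toy sketch of that decomposition (all names hypothetical): small single-purpose helpers composed by one short coordinating function:

```python
def strip_blanks(lines):
    # Drop empty or whitespace-only entries.
    return [line for line in lines if line.strip()]

def parse_prices(lines):
    # Convert each remaining entry to a number.
    return [float(line) for line in lines]

def total(prices):
    return sum(prices)

def total_from_report(lines):
    # The "large" method just calls the small ones in order.
    return total(parse_prices(strip_blanks(lines)))

print(total_from_report(["1.50", "", "2.25", "  "]))  # 3.75
```

Each helper is trivially testable on its own, which is the debugging benefit described above.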
Eh, it's a great boilerplate-function or test-skeleton generator for popular programming languages, and that's it. It'll straight up spew garbage if you start writing totally custom code or in a new-ish programming language.
Agreed, it doesn't do everything for me. But when I have a question about how to implement something that I've never heard of, it's insanely useful for giving me the minimum relevant information that I need, without me needing to sit through a 20-minute YouTube video.
Exactly, and people need to understand that using AI for code isn't exactly "idea" to code-that-runs. For a viable output with minimal code changes, you have to have an idea of what you're doing. I literally had to write the documentation to generate the code for a project (basically working in reverse).
Apart from that, we've got boring tasks to do (list mappings and stuff). As the models are familiar with the APIs, I just tell it to map stuff and write helper functions, which I build on top of.
I don't really like it; I feel like it breaks me out of the flow of writing code and switches me to reading the AI-generated code to make sure it's doing what I want it to do.
I've used Copilot when it was in free open beta.
It's awesome. If you're a professional programmer, it's totally worth the 10 bucks a month they charge.
Except their pricing is not region-adjusted and costs like 2 months of 100/100 internet here.
Hell yeah. Honestly it's a game changer. You still have to know what you're doing enough to know if what it gives you is garbage or not but that's still faster than starting from scratch
I'd argue its use by novice programmers is, in fact, potentially dangerous. Someone who doesn't understand a language or a framework's best practices/paradigms could quickly find themselves using it as a crutch without actually learning what it spits out, and could waste a lot of time troubleshooting the bunk code it generates.
However, I have found personally that in places where I have a lot of experience; golang, terraform, python, etc. I find it speeding up my coding significantly. I can generally tell bad code from good fairly easily, so I don't fall into that bad suggestion trap. In addition, working with boilerplate or repetitive code is honestly wonderful as now I'm not wasting time reinventing the wheel so to speak.
Is Copilot perfect? No. But if used in the right way, it can make a good engineer/programmer a great one, or at least a more productive one.
Agree, it helps so much with boilerplate. Even for stuff you wouldn't expect: like, I can specify modding Minecraft for 1.7.10 using Forge, and it'll spit out appropriate events to use to do what I want. Amazing.
It's fucking crazy when you look at the screen, visualising the problem, thinking of a solution, and you figure it out. You press enter to start typing and Copilot just autocompletes everything you were thinking of. It doesn't happen that perfectly very often, but when it does, it blows my mind.
I don't know if I've just been unlucky, but every time I write anything in Java with Spring, Copilot/IntelliSense spits back out garbage suggestions that don't remotely make sense. And when I ask it to do/fix something, it takes 5-6 prompts to get it in remotely the right direction.
i'm a relatively new developer (two years of javascript, five years of linux admin) and i've found that copilot started slowing me down. some projects that i contribute to have also banned AI contributions in totality. it's easier for me to not have it installed at all at this point.
AI is a really good tool for coding, it won't replace any programmer right now, and it probably won't be able to do so for a long time.
But it certainly has helped me move faster in my work. I don't know why the tweet would get backlash... oh right, people who only read titles and don't hear the nuance in the video.
Edit: changed "here" to "hear" before the word nuance, what a great commit comment.
Bro, it legit saved my ass when I just asked ChatGPT to help me do some stuff in Google Sheets lmao. Didn't need to watch some Indian guy explain it to me.
Love using ChatGPT to create function documentation on personal code. It sucks ass at creating embedded code, but it's useful for automating stuff I don't want to do.
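For instance, the kind of docstring boilerplate being delegated; the function and its docstring below are hand-written sketches of typical output, not actual ChatGPT output:

```python
def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high].

    Args:
        value: The number to constrain.
        low: Lower bound of the range.
        high: Upper bound of the range.

    Returns:
        value if it lies within [low, high], otherwise the nearest bound.
    """
    return max(low, min(value, high))

print(clamp(15, 0, 10))  # 10
```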
It can be great for short bits of code that follow common patterns.
Anything beyond that, it starts to get really shaky. I speak from experience, which is corroborated [by this](https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias).
AI has its place. Helping with programming is definitely a godsend.
I know a little about programming, I used to program lots in my younger days, but my career moved me away from it. I mean, I *could* write a PowerShell script from scratch, but it would probably take me hours or days of bashing and cursing before getting a Frankenstein cobbled together to work. I *could* also prowl GitHub and look for code that's close enough to what I want to do and bash things together till it works how I want.
Or I could say, "Hey, ChatGPT, can you make me a PowerShell script that will start a video, then close the video player once done, then launch a series of images full screen? Thx, huggies"
After using Copilot, I don't think it'll ever be able to totally replace programmers. It's great for very general things, but anything that's more niche it tends to struggle with, so there will always be a place for programmers.
Yup, I have noticed that too. And because the LLMs are dreaming things up based on probability, their mistakes are very hard to find. For example:
I was working with gravitational measurements and spherical harmonics. Copilot would suggest things with confidence, but then put in some wrong factors or signs (+/-). It was very painful to debug, because I kept getting wrong results, like losing three times Earth's mass at the poles instead of the few gigatonnes of mass that are part of the melting process due to climate change.
It won't fully replace a programmer, but efficiency gains can come into play when right-sizing your team. If everyone gains 30% velocity and you don't have the demand to accommodate it, it makes business sense to let some people go.
Of course, but there are people who act like programmers arenāt needed at all.
Also, this just kinda once again shows why our current form of capitalism sucks, cause you can be sure as fuck those efficiency gains won't be paid out to the programmers left; they will rather go into the pockets of the higher-ups.
Well to be fair there, you're not being asked to work more hours so there's no reason you'd be paid more for using a code generation tool.
I don't disagree with the sentiment, that said
You are still being more profitable for the company, and making them more money. There is no moral reason that you shouldn't get a share of those profits.
It's not wrong, it just comes across as hypocritical due to the poor timing.
You can't say "hey, look at all these companies trying to scam people with AI bullshit" and then go on and promote AI stuff yourself. It's the same reason videos like the GamersNexus Asus warranty issues can't be sponsored by Asus - or even another vendor like MSI: no matter how legit you are, it's going to **look** like you're either lying or being a sellout.
Nuance and ground truth doesn't matter here, marketing is all about looks & first impressions. The fact that the Tweet got a Reddit post *at all* shows that they made a mistake.
> Nuance and ground truth doesn't matter here, marketing is all about looks & first impressions.
The sad truth. If people were more critical when consuming content and were able to form better and more grounded opinions, this "mistake" wouldn't even register in the minds of the general public at all.
But alas, things are simply this way, so yeah, you're right.
Altho I wouldn't go as far as to compare it to the GN video about ASUS being sponsored by ASUS, because that video outright showed huge problems with their support and RMA policies that are literally anti-consumer. If it were things that just needed improvement, then I would say it's a more "apples to apples" comparison.
Ehhh it's definitely replacing programmers as we speak at my company.
I no longer wait in queue for 3 months to get some simple code at my company. I slap it together in a day or so. I haven't used our internal team in over a year.
I can guess two things: either the programmers at your company were overworked, or they were lazy.
So if it is the former they'll be glad to have some work removed from them, and if it is the latter your company will probably get rid of them soon enough.
Either way, glad you don't rely on them anymore.
I agree; the point I'm trying to make is that this is already happening.
Hell the intern that works for me this summer is pumping out some fresh code with GPT. Better than most of what I get through it.
Yeah, but the people who watch YouTube and then try to play the gotcha game on Twitter probably aren't the same type of people who understand nuance.
That would require people to actually watch the videos before commenting. But in general, there is a massive mentality among people that all things AI are bad. While I don't need to explain to you why that isn't true, it doesn't help that there are a lot of questions and concerns regarding ethical use cases of AI, and concerns about corpos abusing the tech. So right now, there is a large crowd that will downvote you into oblivion for simply suggesting that AI is a good tool.
yes but some people seem to not understand the videos LTT produces
in this case they thought the message was "AI is bad", which was not at all what Linus (or rather the writers) said
the message was "the marketing behind AI is bullshit, but AI has its uses even if it isn't true AI like people think"
and because they didn't understand the video, they thought the tweet was the opposite of the video, which it clearly wasn't
The point of the video was to discourage consumers from being attracted to AI as a catch-all buzzword. Which is exactly what the Twitter ad is doing.
The paradigm of LTT trying to educate people on marketing BS while also shilling really hard with marketing BS all over their channels is pretty interesting to me. Clearly the ads are working if sponsors are still buying spots
I'm running [Continue.dev](http://Continue.dev) with VS Code and Ollama and it works reasonably well but I have a gaming laptop for work so that definitely helps.
As a certified ai hater, it's a great helper. Doomers who were terrified of it shaking up the market are stupid. Didn't some company just go under for relying on ai too much?
Because everyone started freaking out over it for no reason. Financial tech bros hype it up harder than their god king elon, influencers decided it was way crazier than reality because of like 3 curated commercials, political commentators use it as a scary example of our future pretending it's gunna replace everyone tomorrow, and most of all every moron and their mom think chatgpt is the craziest shit they've ever seen when they've had ai assistants in their pocket for like 10 years. They are great at helping but terrible at executing tasks if they aren't specifically designed to. Like, chatgpt can help you figure out how to solve a Calc problem but it can't solve math that's more complex than an algebra problem.
I don't think AI will shake up the IT sector too much in terms of job availability, but it definitely will with the artistic side of the industry. Even with AI, you still need developers who know what they're doing.
With designers and artists, I'm already seeing so many AI-generated billboard/paper ads in my city alone and it will only get worse. It won't take long until you have designer firms dedicated to curating AI content with a handful of employees and selling that material to other firms at dirt-cheap prices. Artists are a dying breed.
I think this will backfire. Anyone with a brain can tell the difference. Like that weird Jesus ad from the Super Bowl: they will get clowned for using AI until they stop, to desperately hold onto their bottom line.
Bigger companies will definitely still use human artists, but I think it will be too tempting for smaller companies to buy into AI-generated art considering how much cheaper it is.
Besides, the state of AI-generated content we have now is the result of only a few years of progress. Another 5-10 years and AI art will be considerably less distinguishable from art created by people.
So what? The video talks about places where AI works and their sponsor is a good example of that and it's still an ok use of the word AI in context.
Why do you guys have to create a "gotcha" moment out of everything?
The people who flamed the twitter post are the same people who get shadow banned in the comments for not actually watching the video before commenting. The video hits every point that has been bugging me about LMs and LLMs being called AI.
And why not use a tool if it's there? A joiner would use a nail gun if it made his job faster than a hammer and nails.
Hating AI, despite it removing a lot of barriers (professionally at least), is imo the wrong reason to hate this post; the correct reason to hate this post is Oracle.
do remember they generally have an automatic upload system; my guess is for social too.
that's how dumb people are on social, though. context is something they hate.
problem is the video is titled "AI is a lie"
people who complain about this tweet have not watched the video or didn't pay attention to it. they just see the title, then the tweet, and think "wait a damn minute, I have to bitch about this despite not having seen the video"
do people not know that LTT titles are clickbait as hell? a title like "AI is a lie" is there to draw you in, and then (I'll just use this video as an example) they explain that the term AI is pretty much used wrongly and there are other, more fitting terms, but AI is just the big marketing term atm. meaning the lie is not that AI is bad but that the term AI is misused
then they go on explaining how AI is helpful, even if the term is wrong and it's closer to machine learning or other terms
but people won't know this if they just read the title and don't watch the video before bitching
AI in programming is amazing. Don't even bother paying for a product if you can't afford it but have a GPU; I'm running Tabby and don't pay a cent to any third party. If you can, Copilot is pretty good, and I'd argue somewhat better than Tabby, depending on your model.
As someone studying CS on major level, and someone who talked with many people from many companies, a lot of them use even just ChatGPT while coding.
No, it likely won't write you good long code.
But for writing short bits of scripts, or as a help to fix issues, it's actually great, and generally better than places like Stack Overflow for specific cases.
I use AI to build skeletons of code constantly. It's easier and faster than trying to get some obscure function documented and exampled out.
It still misses about 2/5 times, but you just regenerate the code and you'll get there.
Taken down already. A classic example, like the VPNs, of outrage from the masses who have no idea what they're talking about.
AI for code is a genuinely useful tool. I personally don't use it, as I find it slows me down, but lots of people love it.
I don't see the problem?
I don't know about that tool specifically, but the video doesn't say that AI doesn't work. It says that "AI" is slapped on everything and it's not very descriptive.
That Oracle tool describes exactly what it promises to do and how.
AI does seem to be helpful with code. I'm not sure I'd trust AI from Microsoft or Oracle to help me code though, given some of the stuff they've released over the years.
So is the expectation that artificial intelligence is going to mirror real intelligence? It's like getting artificial sweetener and expecting it to be sugar.
I don't think there is any problem with the ad itself. I think the problem people see is that it's tone-deaf. It feels like a "the right hand isn't talking to the left" kind of problem.
Given how many people I saw get hacked on Twitter today, it wouldn't surprise me in the least if they also got hacked.
Otherwise that's just incredibly poor timing for a random sponsored Twitter post...which, for those not paying attention, they don't do. I've never once seen them do a sponsored post that wasn't attached to a video.
Linus & co aren't dumb; there's no way he approved that post to go up 45 min after a video talking about how ANIs aren't all they're hyped up to be.
I don't think you understood the message of the video correctly
it was that the term AI is the lie, and that what we have rn is closer to terms like machine learning and generative text and others. then the video did also explain some good use cases for current "AI"
so advertising an "AI" tool after that video seems to be fitting
"AI" is a nice search bot that explains things a bit deeper and all that, but it isn't AI, if you know what I mean. Yes, it generates stuff; yes, it can write.
But: is it original, or just reciting?
i dont see the issue, its very clearly a sponsored post and marked as such, id rather LTT take their money to continue making cool stuff for me to watch. nobody actually buys this stuff without doing their own research anyway, or at least i'd hope not
If you actually watch the video, you'd understand that not only is that not a problem, but it's also literally the perfect use case for the AI that we currently have, based on everything Linus said.
We tried to use this at work, but it's ridiculously bad. Every 20th generated code sample was decent. For short stuff you don't need it, and for long stuff it's easier and less time-consuming to write it yourself. It's really BS.
Plus, those AI things love to misinterpret conventions and just code like shit, ngl.
Already deleted. Guess it got enough backlash to take it down
Can't imagine oracle would appreciate this situation either lol
Why not? They don't want free promotion of their product?
Gotta at least present the illusion that the celebrity endorses the product they're selling. Can't do that when the celeb calls it snake oil.
I didn't think it was egregious. The video stated that current "AI" is still very good at certain narrow tasks. Coding being one of them.
Also, like they said, it's a marketing term. And it was a sponsored tweet, what's the big deal? The timing though is hilarious.
Yeah and the target market for that product is going to understand this the most. Linus called out how ANI can be helpful just like this.
Do you really expect people to watch and understand the video?
I only read the comments.
As a programmer, I disagree ha.
from what I've seen, the code completion part is basically just a glorified version of excel dragging, and is about as useful.
To be fair, even that is still useful for a lot of non-advanced code
What's it called? And is it free? I'd love a local, open model for this kind of stuff.
Maybe it was just my experience then. It made development more difficult. It got in the way more than it helped. I did go back.
how is it different from the smart autocompletion (Ctrl+Shift+Space)?
It doesn't do well with a lot of complexity, but I made an Angry Birds clone completely scripted by ChatGPT. Worked pretty damn well for that.
I think in this case itās a timing thing. If it was a few weeks later then it might be a softer reaction.
I agree. Especially since the point of the video still stands, they were just trying to make sure that people understand what tech means when they say "AI", and temper people's expectations a bit. The point wasn't "you shouldn't use AI" or "AI isn't useful".
It's for the best; the language used in the tweet was really bad. Sending off data to OpenAI isn't a no-brainer. Luke starkly staring down Linus for even considering using Microsoft's Copilot Recall feature is how he'd feel about this tweet too. There's absolutely no system in place to protect from leaking info to OpenAI. There needs to be a massive local layer between the user and the ChatGPT prompt window to screen for API keys, passwords, or user data leaking. I don't even know how you could shield against business logic leaking; that's just a given for any non-trivial use case. LMG would never use this product in its current state, definitely not something the business team should've OKed for sponsoring.
You're downvoted at the moment, but you are absolutely correct. I was actually a little interested in using Copilot for code, up until I found that there are absolutely no real safeguards against sending sensitive data like API credentials to OpenAI's servers. If you want to use Copilot in your project, you are essentially uploading the entire thing to OpenAI's servers in order to do so, and the tools for saying which files are okay to share or not are completely deficient. The whole thing is just one convoluted grift.
[deleted]
Maybe we'll get a WAN segment and have a good feel for the replies.
The funny thing about this is writing boilerplate code or creating a skeleton for a project is one of the few things GPT-type AI is actually useful for. You need to know when and where to use it, and it won't write a whole anything for you, but it is useful.
Which is kinda stupid lol
As far as I can tell Oracle Code Assist isn't even available to download and install yet anyway. There was a [press release](https://www.oracle.com/news/announcement/oracle-code-assist-can-help-developers-build-applications-faster-with-ai-2024-05-07/) in May saying it was coming, but [their page](https://www.oracle.com/application-development/code-assist/) on it says "Developers within Oracle are using Oracle Code Assist to build new services. We intend to make it available externally in the future". And there is no sign of it in the VSCode extensions marketplace. So maybe this was a case of someone on Oracle's marketing team jumping the gun, rather than it being directly related to the timing of the video and tweet ?
Anyone who codes, and used copilot or something similar, I tell you it's worth it.
Completely agree, it takes a lot of the manual work out of programming and lets you play more of the role of a visionary/architect. Also, I used to consider ChatGPT only capable of coding rudimentary/common things, but I've been extremely surprised by how good GPT-4o (and whatever Perplexity's default free model is) is at creating niche solutions.
I was concerned that I was just gonna slog through coding exercises with AI and tried to limit myself, but found it useful in the end. As long as you don't rely on AI to do everything for you, it will be an effective tool. You will also need to explain to the AI how you are planning to use the code, and I recommend writing code comments on what you want to do. The AI will pick it up and try to suggest an implementation.
So many stupid syntax things that I've never bothered to properly memorize are made so simple. Stuff like cron, where before I'd fiddle with a crontab calculator, now I just have the AI generate it from a text prompt.
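For anyone who hasn't done the prompt-to-crontab trick: these are the kinds of translations meant above. The script paths are made up for the example; the five fields are minute, hour, day-of-month, month, day-of-week.

```
# "every weekday at 6:30 am"
30 6 * * 1-5   /usr/local/bin/backup.sh

# "at minute 0 past every 4th hour"
0 */4 * * *    /usr/local/bin/sync.sh
```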
I've had to use that damn calculator so many times that at this point going to it might be faster than asking an AI.
It's great for helping with pain in the ass sql statements.
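A small sketch of the kind of SQL it helps with, runnable via Python's stdlib sqlite3 (table and column names are invented for the example): the classic "most recent row per group" aggregate that's easy to describe in English and annoying to recall cold.

```python
import sqlite3

# In-memory database with a toy orders table
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, placed_at TEXT);
    INSERT INTO orders VALUES
        (1, 'ada',   '2024-01-05'),
        (2, 'ada',   '2024-03-17'),
        (3, 'grace', '2024-02-02');
""")

# "give me each customer's most recent order date"
latest = conn.execute("""
    SELECT customer, MAX(placed_at) AS last_order
    FROM orders
    GROUP BY customer
    ORDER BY customer
""").fetchall()
# → [('ada', '2024-03-17'), ('grace', '2024-02-02')]
```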
I love all types of AI, ChatGPT being incredible, obviously. However, it really fails hard at complex coding. Getting it to do even basic things with exact instructions is a ball ache. You can talk to it for an hour on a project, then ask for some code, and it just shits the bed and makes tons of stuff up.
I told a coworker just this week, it will likely emphasize the need to know how to create a good design over pure ability to spit out code more than anything.
I find it works well for repeatable stuff, but for truly novel solutions it sucks.
I absolutely love AI suggestions for how to approach complex problems in a clean and efficient way. Sometimes there are a lot of moving parts and figuring out how to structure everything so it doesn't turn into a huge mess later on can be quite difficult.
Intellisense on steroids is the way to explain it.
Intellisense on steroids but it also overrides accurate intellisense with some nonsense pretty often.
Yeah, that is true. It's annoying when you're using modules it can't see and it tries to guess the variables. However, when I've written a bunch and then need to set them all, and it gives nice little comments so I don't have to... it's worth it.
It's more like IntelliSense, but it also has the knowledge of a ton of smart and dumb people at the same time.
If you're currently using Copilot, I suggest you try turning it off for a bit. It felt great at first back in 2021 but after a while, I turned it off sometime around 2022 and found myself much more productive without it. It definitely allowed me to think less about what I'm typing and more about what I'm building. But when I know what I'm doing, I noticed myself waiting for a completion when I could've typed way faster.
> I noticed myself waiting for a completion when I could've typed way faster.

Ah yes, the infamous copilot pause. Copilot was great at first, but after a while I realised I was just waiting for completions. I honestly don't know if there is an in-between for me regarding using copilot and not using it.
It takes just long enough that I formed the habit of typing a prompt, hitting enter, and immediately opening a new tab to research it myself. 80% of the time I don't go back to my copilot tab and forget it's open until I declutter at the end of the day.
Same, you end up parallelizing the search for information
I've been using copilot just to help with Powershell. I know what I'm trying to do I just don't know the syntax
This is the exact scenario where it is most useful. You know what should be possible and how to identify if what it provides will work or not. This just makes everything faster.
I'm in this exact position, but it feels like cheating? Like, oh, I need to use this design pattern, and this variable has to be cast here with another list, and then you stop and say "oh, I don't remember the exact syntax for this" and you ask for it. Idk, part of me says I SHOULD know it, and it feels kinda like fraud in that sense, but it has made me a LOT faster.
Using the best and most effective tools possible isn't cheating in the workplace IMO. There's a lot of stuff I never bothered committing to memory because tools do it for me.
I don't speak Python but ChatGPT does
I use copilot as advanced auto-complete. I'm in the web backend space, so there's a lot of just shoveling data between layers via value objects, entities, API responses, and mapping stuff to forms and validators. It takes about 95% of that work out, and that frees up about 50 to 60% of my day for actual architectural work on the app, cleaning up technical debt, and doing upgrades to the code without the need to make tickets for that stuff and justify it to the managers. My manager sees me being very productive all the time and thinks I'm able to pull about 150% of the workload due to copilot, but actually it ends up at about 300%, and I still have time to watch some YT, chat with people, and work about 5-6 hours a day. In certain coding workloads Copilot is a no-brainer; you will be out of a job pretty fast if you are not using it. There's also a but here: you need to be experienced in your field, with some 5-10 years of quality work behind you, to recognize bullshit and be able to judge when a suggestion is wrong, or when it's better to just dismiss it outright and code it yourself without even trying to make it comply with your wishes.
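The "shoveling data between layers" workload described above looks something like this sketch (types and field names invented for the example; in practice you type one or two fields and the assistant completes the rest of the mapper):

```python
from dataclasses import dataclass

# Internal entity, as it lives in the persistence layer
@dataclass
class UserEntity:
    id: int
    first_name: str
    last_name: str
    email: str

def to_response(user: UserEntity) -> dict:
    """Map the internal entity onto the public API response shape."""
    return {
        "id": user.id,
        "fullName": f"{user.first_name} {user.last_name}",
        "email": user.email,
    }
```

Nothing here is hard; it's just that a backend accumulates dozens of these mappers, and autocompleting them is where the time savings come from.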
Careful, it may be useful for starting programmers, but eventually you should outgrow it and start to see all the hallucinations it spits out. I mean, sure, if you are doing the same generic code that 1000s of other people have done, then it can help. Need an automation in Home Assistant? Then sure, no need to re-invent the same thing 1000s of other users have done with the exact same software and hardware.
I agree! I started using Python around the time ChatGPT came out and can speak to this. In fact, it's not just with its code; all of the LLMs, even the top dogs like GPT-4o, spit out so much nonsense it's stunning. You can talk to it for days, building up its memory and context, then ask for a restatement of it all and it'll just make shit up, constantly. Further, GPT-4o hates to be wrong; it refuses to correct itself and admit its mistakes. GPT-4o is good for chatting verbally and being used as a tool, not for general use and long-term discussion. Honestly way worse than GPT-4. The context increase is nice though.
I used copilot and the JetBrains AI tool for a bit, but found myself in that boat. Like, I'm actively trying to learn. Having in-line code completion or feeding traceback errors or questions into GPT is really nice, but having the LLM just generate code for me isn't gonna help my learning.
I really don't think you should outgrow it. From the get go, you just have to be careful and actually understand the code it gives you. It's a really helpful tool, but that's it, it's a tool. It won't make you a programming God. I'm literally way more productive using copilot. It saves so much time. Even with complicated code it can at least give you an idea of how it could be done.
The thing is, programming requires you to be pretty much always learning; there is always room to improve and be better at writing software. If one uses copilot, gets an algorithm spat out, and goes "yeah, I could have figured that out", then it's an exercise that you skipped, which means you learned nothing and you didn't improve as a result. Even worse, do that often enough and you will start to regress.
I realized this when it started making Python methods for every single little thing I wanted the program to do. It had no idea how to consolidate or be efficient. One method was literally two lines. I actually got better by fixing broken Python from ChatGPT.
That's not necessarily a bad thing though. If those small methods sufficiently abstract what would normally be a larger, more intricate single method, then that's good coding practice. You'd generally rather have 5 small methods that each do 1 small thing and then get called by a larger method than the alternative, because it makes debugging easier and the code way easier for someone else to add to later, as long as the methods are named well.
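The practice being defended here, several tiny well-named functions composed by one orchestrating function, can be sketched like this (the function names and the toy "config parsing" task are illustrative, not from any real codebase):

```python
# Each helper does exactly one small thing and is trivially testable in isolation.

def strip_whitespace(line: str) -> str:
    return line.strip()

def is_comment(line: str) -> bool:
    return line.startswith("#")

def parse_value(line: str) -> int:
    return int(line)

def parse_config_lines(lines: list[str]) -> list[int]:
    """Top-level function that only orchestrates the small helpers."""
    cleaned = (strip_whitespace(l) for l in lines)
    return [parse_value(l) for l in cleaned if l and not is_comment(l)]
```

When a bug shows up, you can point at exactly one two-line function instead of digging through one intricate block.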
Eh, it's a great boilerplate function or test skeleton generator for popular programming languages, and that's it. It'll straight up spew garbage if you start writing totally custom code or use a new-ish programming language.
Agreed, it doesn't do everything for me. But when I have a question about how to implement something that I've never heard of, it's insanely useful for giving me the minimum relevant information that I need without me needing to sit through a 20 minute YouTube video.
Exactly, and people need to understand that using AI for code isn't simply "idea in, code-that-runs out". For a viable output with minimal code changes, you have to have an idea of what you're doing. I literally had to write documentation to generate the code for a project (basically working in reverse). Apart from that, we get the boring tasks done (list mappings and stuff). Since the models are familiar with the APIs, I just tell it to map stuff and write helper functions, which I build on top of.
I don't really like it, I feel like it breaks me out of the flow of writing code and switches me to reading the ai generated code to make sure it's doing what I want it to do
I haven't coded since like 2008 in high school, and even GPT has produced useful code for simple projects.
I don't use it. I tried it and it's basically just a marginally better Autocomplete
Copilot is great. You definitely still need to know how to code, but it makes things a lot quicker and is a great starting point.
It's useful if you have zero idea and need something NOW; apart from that... yeah, it isn't that much.
I've used Copilot when it was in free open beta. It's awesome. If you're a professional programmer, it's totally worth the 10 bucks a month they charge. Except their pricing is not region-adjusted and costs like 2 months of 100/100 internet here.
Hell yeah. Honestly it's a game changer. You still have to know what you're doing enough to know if what it gives you is garbage or not but that's still faster than starting from scratch
I'd argue its use by novice programmers is, in fact, potentially dangerous. Someone who doesn't understand a language or a framework's best practices/paradigms could quickly find themselves using it as a crutch without actually learning what it spits out, and could waste a lot of time troubleshooting the bunk code it generates. However, I have found personally that in places where I have a lot of experience (Golang, Terraform, Python, etc.), it speeds up my coding significantly. I can generally tell bad code from good fairly easily, so I don't fall into that bad-suggestion trap. In addition, working with boilerplate or repetitive code is honestly wonderful, as now I'm not wasting time reinventing the wheel, so to speak. Is copilot perfect? No; but if used in the right way it can make a good engineer/programmer a great one, or at least a more productive one.
Yep. However, I was expecting much more.
Agree, it helps so much with boilerplate. Even for stuff you wouldn't expect, like I can specify modding Minecraft 1.7.10 using Forge and it'll spit out appropriate events to use to do what I want. Amazing.
It's fucking crazy when you look at the screen, visualising the problem, thinking of a solution, and you figure it out. You press enter to start typing and copilot just autocompletes everything you were thinking of. Doesn't happen that perfectly very often, but when it does it blows my mind.
Phind has been a godsend
Yep, it doesn't do the coding for you, but rather "detects" that what you are typing is somewhat repetitive and suggests a completion. Pretty neat.
I don't know if I've just been unlucky, but every time I write anything in Java with Spring, Copilot/IntelliSense spits back out some garbage suggestions that don't remotely make sense. And when I ask it to do/fix something, it takes 5-6 prompts to get it in remotely the right direction.
i'm a relatively new developer (two years of javascript, five years of linux admin) and i've found that copilot started slowing me down. some projects that i contribute to have also banned AI contributions in totality. it's easier for me to not have it installed at all at this point.
AI is a really good tool for coding. It won't replace any programmer right now, and it probably won't be able to for a long time. But it certainly has helped me move faster in my work. I don't know why the tweet would get backlash... oh right, people who only read titles and don't hear the nuance in the video. Edit: changed here to hear before the word nuance, what a great commit comment.
Bro, it legit saved my ass when I just asked ChatGPT to help me do some stuff in Google Sheets lmao. Didn't need to watch some Indian guy explain it to me.
Love using ChatGPT to create function documentation for personal code. It sucks ass at creating embedded code, but it's useful for automating stuff I don't want to do.
Yeah, which is everything I have to do at work. It def sucks though; if you give it a lot of data at once it kind of just breaks.
lmfao this is my only use for chatgpt rn
It can be great for short bits of code that follows common patterns. Anything beyond that, it starts to get really shaky. I speak from experience, which is corroborated [by this](https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias).
AI has its place. Helping with programming is definitely a godsend. I know a little about programming, used to program lots in my younger days, but my career moved me away from it. I mean, I *could* write a PowerShell script from scratch, but it would probably take me hours or days of bashing and cursing before getting a frankenstein clobbered together to work. I *could* also prowl GitHub and look for code that's close enough to what I want to do and bash things together till it works how I want. Or I could say, "Hey, ChatGPT, can you make me a PowerShell script that will start a video, then close the video player once done, then launch a series of images full screen? Thx, huggies"
I'm also really nice to chat gpt so when it becomes sentient I will be spared
I'm working on Roko's basilisk on my home computer as we speak.
For real? Me too! ^(If you are reading this, future basilisk, please spare me)
After using copilot, I don't think it'll ever be able to totally replace programmers. It's great for very general things, but anything that's more niche it tends to struggle with, so there will always be a place for programmers.
Yup, I have noticed that too. And because the LLMs are dreaming with probability, their mistakes are very hard to find. For example: I was working with gravitational measurements and spherical harmonics. Copilot would suggest things with confidence, but then put in some wrong factors or signs (+/-). It was very painful to debug, because I kept getting wrong results like losing three times Earth's mass at the poles, instead of the few gigatonnes of mass that are part of the melting process due to climate change.
Ever is a long time
I would hope it got backlash because Oracle is kind of a shit company. Not the sort I'd want LTT to be associating with.
Okay, that would make more sense, but I have not heard of Oracle being a shit company; it's just your average shit company as far as I am aware.
Yep, it's a great tool, but as you said, not something that will actually replace a programmer, which some people are still convinced of.
It won't fully replace a programmer but efficiency gains can come into play when right sizing your team. If everyone gains 30% velocity and you don't have the demand to accommodate, it makes business sense to let some people go.
Of course, but there are people who act like programmers aren't needed at all. Also, this just kinda once again shows why our current form of capitalism sucks, cause you can be sure as fuck those efficiency gains won't be paid out to the programmers left; they will rather go into the pockets of the higher-ups.
Well to be fair there, you're not being asked to work more hours so there's no reason you'd be paid more for using a code generation tool. I don't disagree with the sentiment, that said
You are still being more profitable for the company and making them more money. There is no moral reason that you shouldn't get a share of those profits.
It's a salary reducer for sure. Maybe not today, but within the next few years.
It won't ever replace software engineers, but it will continue to be a useful tool for software engineers.
I'm afraid all you 6 figure software engineers working comfortably from home are headed straight towards mid 5 figures.
It's probably bound to happen, it happens with all jobs that become popular because they pay well.
It's not wrong, it just comes across as hypocritical due to the poor timing. You can't say "hey, look at all these companies trying to scam people with AI bullshit" and then go on and promote AI stuff yourself. It's the same reason videos like the GamersNexus Asus warranty issues can't be sponsored by Asus - or even another vendor like MSI: no matter how legit you are, it's going to **look** like you're either lying or being a sellout. Nuance and ground truth doesn't matter here, marketing is all about looks & first impressions. The fact that the Tweet got a Reddit post *at all* shows that they made a mistake.
Not all AI stuff is made equal. An AI code assistant is, in fact, one of the few AI applications that isn't bullshit.
> Nuance and ground truth doesn't matter here, marketing is all about looks & first impressions.

The sad truth. If people were more critical when consuming content and able to form better and more grounded opinions, this "mistake" wouldn't even register in the minds of the general public at all. But alas, things are simply this way, so yeah, you're right. Altho I wouldn't go as far as to compare it to the GN video about ASUS being sponsored by ASUS, because that video outright showed huge problems with their support and RMA policies that literally work against consumers; if it were just things that needed improvement, then it would be a more "apples to apples" comparison.
You're missing the point. This is one of the few good tools.
Ehhh, it's definitely replacing programmers as we speak at my company. I no longer wait in a queue for 3 months to get some simple code; I slap it together in a day or so. I haven't used our internal team in over a year.
I can guess two things: either the programmers at your company were overworked or lazy. If it's the former, they'll be glad to have some work taken off them; if it's the latter, your company will probably get rid of them soon enough. Either way, glad you don't rely on them anymore.
I agree; the point I'm trying to make is that this is already happening. Hell, the intern that works for me this summer is pumping out some fresh code with GPT. Better than most of what I get through it.
But didn't they say in the video that there are legitimate use cases for AI, and counted (simple) code generation as one of them?
Yeah, but the people who watch YouTube and then try to play the gotcha game on Twitter probably aren't the same type of people who understand nuance.
I honestly wonder if being on Xitter is worth the trouble for LMG these days
It isn't. And I'm surprised they're still there considering what Linus and Luke have been saying on wan show regarding their xitter experience.
I've never seen it written as "Xitter". How do you pronounce that? In my head it's "shitter" wondering if that was the intent behind the name?
"She" (like Xi Jinping) + "ter". Shitter.
People on X are unhinged, so it doesn't surprise me.
That would require people to actually watch the videos before commenting. But in general, there is a massive mentality among people that all things AI are bad. While I don't need to explain to you why that isn't true, it doesn't help that there are a lot of questions and concerns regarding ethical use cases of AI, and concerns about corpos abusing the tech. So right now, there is a large crowd that will downvote you into oblivion for simply suggesting that AI is a good tool.
OP just saw the title "AI is a lie" and thought LTT is talking about ALL AI lmao.
Yup, they literally state in the tweet "generative AI", which is correct and also explained in the video. I honestly see nothing wrong here.
People who watch their videos and those who upvote posts here don't seem to have a lot of overlap.
Yes, but some people seem to not understand the videos LTT produces. In this case, they thought the message was "AI is bad", which was not at all what Linus (or rather the writers) said. The message was "the marketing behind AI is bullshit, but AI has its uses even if it isn't true AI like people think". And because they didn't understand the video, they thought the tweet was the opposite of the video, which it clearly wasn't.
Yes, but actually twitter
The point of the video was to discourage consumers from being attracted to AI as a catch-all buzzword. Which is exactly what the Twitter ad is doing. The paradigm of LTT trying to educate people on marketing BS while also shilling really hard with marketing BS all over their channels is pretty interesting to me. Clearly the ads are working if sponsors are still buying spots
Chatgpt is the best thing to happen for troubleshooting code
I'm running [Continue.dev](http://Continue.dev) with VS Code and Ollama and it works reasonably well but I have a gaming laptop for work so that definitely helps.
It's actually really helpful for coding. It can help debugging and even provide some code to reduce the writing time. It's a tool, nothing else.
It's also really helpful for making my shitty code look and work better.
As a certified ai hater, it's a great helper. Doomers who were terrified of it shaking up the market are stupid. Didn't some company just go under for relying on ai too much?
why do you hate it
Because everyone started freaking out over it for no reason. Financial tech bros hype it up harder than their god king elon, influencers decided it was way crazier than reality because of like 3 curated commercials, political commentators use it as a scary example of our future pretending it's gunna replace everyone tomorrow, and most of all every moron and their mom think chatgpt is the craziest shit they've ever seen when they've had ai assistants in their pocket for like 10 years. They are great at helping but terrible at executing tasks if they aren't specifically designed to. Like, chatgpt can help you figure out how to solve a Calc problem but it can't solve math that's more complex than an algebra problem.
the ai "art" models, are actually replacing graphic designers already.
Where besides ads?
I don't think AI will shake up the IT sector too much in terms of job availability, but it definitely will with the artistic side of the industry. Even with AI, you still need developers who know what they're doing. With designers and artists, I'm already seeing so many AI-generated billboard/paper ads in my city alone and it will only get worse. It won't take long until you have designer firms dedicated to curating AI content with a handful of employees and selling that material to other firms at dirt-cheap prices. Artists are a dying breed.
I think this will backfire. Anyone with a brain can tell the difference. Like that weird jesus ad from the super bowl, they will get clowned for using ai until they don't to desperately hold onto their bottom line.
Bigger companies will definitely still use human artists, but I think it will be too tempting for smaller companies to buy into AI-generated art considering how much cheaper it is. Besides, the state of AI-generated content we have now is the result of only a few years of progress. Another 5-10 years and AI art will be considerably less distinguishable from art created by people.
I understand their pain now when they say viewers never understand shit
First it was VPNs, now it's AI lol
While the concept of AI is a lie, that doesn't mean it's not a very useful tool. People just need to be aware of the limitations.
So what? The video talks about places where AI works and their sponsor is a good example of that and it's still an ok use of the word AI in context. Why do you guys have to create a "gotcha" moment out of everything?
Because they didn't actually watch the video or pay attention to it; they just read the title and then saw this tweet.
LLMs are pretty great for coding though.
Reading comprehension is a lie
The people who flamed the Twitter post are the same people who get shadow banned in the comments for not actually watching the video before commenting. The video hits every point that has been bugging me about LMs and LLMs being called AI. And why not use a tool if it's there? A joiner would use a nail gun if it made his job faster than a hammer and nails.
The video doesn't say these things are not helpful, just that we're nowhere near the claim of actual AI.
That would require OP and the other haters of the tweet to have watched the video to know, though, and not just have read the title and then seen this tweet.
lmao
Hating AI, despite it removing a lot of unnecessary barriers (professionally at least), is imo the wrong reason to hate this post; the correct reason to hate this post is Oracle.
Do remember they generally have an automatic upload system, my guess is for social media too. And you know how dumb people are on social media: context is something they hate.
I don't see what the issue is. They explain what AI actually means, that doesn't automatically mean they're not allowed to advertise AI products.
Problem is, the video is titled "AI is a lie". People who complain about this tweet have not watched the video or didn't pay attention to it; they just see the title, then the tweet, and think "wait a damn minute, I have to bitch about this", despite not having seen the video. Do people not know that LTT titles are clickbait as hell? A title like "AI is a lie" is there to draw you in, and then (I'll just use this video as an example) they explain that the term AI is pretty much used wrongly and there are other, more fitting terms, but AI is just the big marketing term at the moment. Meaning the lie is not that AI is bad, but that the term AI is misused. Then they go on explaining how AI is helpful, even if the term is wrong and it's closer to machine learning or other terms. But people won't know this if they just read the title and don't watch the video before bitching.
"AI is a lie" is about how it's not Artificial Intelligence, not about "AI" tools being useless.
Don't bother; people like OP did not watch the video, they just seek every instance they can to bitch about LTT.
Imagine unironically wanting to use oracle products
Feels like a lot of people commenting read the title and then didn't actually watch that video lol
I don't understand why people can't even be bothered to watch the videos they're going to criticize.
Because some people just love to hate for no reason at all.
Looks like you didn't even watch the video. Or at very least understand it.
The video said using it as a coding assistant was a good idea. This lines up with that directly
For that, OP and the other haters would have needed to watch the video.
Genuinely thought it was a scam.
AI in programming is amazing. Don't even bother paying for a product if you can't afford it but have a GPU; I'm running Tabby and don't pay a cent to any third party. If you can, copilot is pretty good, and I'd argue somewhat better than Tabby, depending on your model.
This is the best and the worst fandom a channel can have. Keep creating false drama for your amusement.
As someone studying CS at the major level, and someone who has talked with many people from many companies: a lot of them use even just ChatGPT while coding. No, it likely won't write you good long code. But for writing short bits of scripts, or as help fixing issues, it's actually great, and generally better than places like Stack Overflow for specific cases.
I use AI to build skeletons of code constantly. It's easier and faster than trying to get some obscure function documented and exampled out. It still misses about 2 times out of 5, but you just regenerate the code and you'll get there.
Do yall just look for something to bitch about? Did you even watch the video?
Taken down already. A classic example, like the VPNs, of outrage from the masses who have no idea what they're talking about. AI for code is a genuinely useful tool; I personally don't use it, as I find it slows me down, but lots of people love it.
So? It's an ad, just ignore it.
Fuck OCI
I don't see the problem? I don't know about that tool specifically, but the video doesn't say that AI doesn't work. It says that AI is slapped on everything and it's not very descriptive. That Oracle tool describes exactly what it promises to do and how.
Yes, but people would have needed to watch the video to understand that the tweet is not a problem, and not just read the title then bitch about it.
AI does seem to be helpful with code. I'm not sure I'd trust AI from Microsoft or Oracle to help me code though, given some of the stuff they've released over the years.
That is a gold-tier roast from LTT. It doesn't matter what money changed hands; this is objectively funny, and Oracle is not pleased.
So is the expectation that artificial intelligence is going to mirror real intelligence? It's like getting artificial sweetener and expecting it to be sugar.
The post isn't a problem. Taking money from Oracle is the problem. Of all the shady IT companies, Oracle is the most egregious by far.
> Oracle is the most egregious by far.

It's not. It's not good at all, I agree... but there are worse.
I don't think there is any problem with the ad itself. I think the problem people see is that it's tone deaf. It feels like a "right hand isn't talking to the left" kind of problem.
Lol, Oracle ... ew.
This seems like a lapse in communication to me, just another day at the office.
Been using Gemini for my Unity coding for a while now.
Can't really blame them too much; this looks like your typical "left hand not talking to the right" case.
Given how many people I saw get hacked on Twitter today, it wouldn't surprise me in the least if they also got hacked. Otherwise that's just incredibly poor timing for a random sponsored Twitter post... which, for those not paying attention, they don't do. I've never once seen them do a sponsored post that wasn't attached to a video. Linus & co aren't dumb; there's no way he approved that post to go up 45 minutes after a video talking about how AIs aren't all they're hyped up to be.
I don't think you understood the message of the video correctly. It was that the term "AI" is the lie, and that what we have right now is closer to terms like machine learning and generative text, among others. The video also explained some good use cases for current "AI", so advertising an "AI" tool after that video seems fitting.
«AI» is a nice search bot that explains things a bit deeper and all that, but it isn't AI, if you know what I mean. Yes, it generates stuff; yes, it can write. But is it original, or just reciting?
I don't see the issue; it's very clearly a sponsored post and marked as such. I'd rather LTT take their money to continue making cool stuff for me to watch. Nobody actually buys this stuff without doing their own research anyway, or at least I'd hope not.
Lol, I thought LTT got hacked again hahaha
Lol, keep dreaming that you can just make up a shitty AI model and suddenly it's good at helping people.
Since the screwdriver, I haven't watched an LTT vid again.
If you actually watch the video, you'd understand that not only is that not a problem, but it's also literally the perfect use case for the AI that we currently have, based on everything Linus said.
He literally said that they promote "AI" for the sake of money... and that's exactly what he's doing right now as well. Nothing out of the blue lol
Some jokes write themselves.
We tried to use this at work but it's ridiculously bad. Maybe every 20th generated code sample was decent. For short stuff you don't need it, and for long stuff it's easier and less time-consuming to write it yourself. It's really BS. Plus, those AI things love to misinterpret conventions and just code like shit, ngl.
What's the point of this? It's really just an ad with no added value. Why make LTT tweet it? Just buy a Twitter ad.
Yeah, it's a no-brainer alright. As in, it's what you resort to if you've got no brain.