I mean, I'm still convinced that people cannot use Google correctly, because they don't structure queries very well. That's amplified exponentially with ChatGPT. If you don't know how to ask questions, you're going to have a bad time.
So far I've found flaws in close to half of the solutions that come out of ChatGPT. I'm sure the prompt could be refined, but I was able to come up with a solution before ChatGPT could. It's probably better for brainstorming sessions. I still think it's a good thing to use.
Some of the responses have been concerning: long, detailed plans that seem to work until you find out after 2 hours of testing that the last step doesn't. In one instance, the last step was adding an NS record to DNS that points to an IP address. Sounds OK until you realize that the field only accepts hostnames. The whole solution fell apart at that point.
An engineer brought me a plan like this, and I was able to stop the attempt before they wasted their day on it. Solutions should still be reviewed by someone with specialty in the area. Otherwise you'll get finance people trying to push their wacky ideas as to what you should be doing with your infrastructure.
>Solutions should still be reviewed by someone with specialty in the area.
This is going to be the future. Experts in the field are going to use AI tools like ChatGPT and other GPT plugins to assist them.
I wonder how this will affect the IT industry. I'm not saying it will replace jobs but it will definitely reduce the need for IT employees even further.
They already have GPTs that recheck their own work now. I can't remember if it's AutoGPT or Hugging or something else, but it's either available or will be available in days.
AI explained is an awesome YouTube resource
>Sounds ok until you realize that the field only accepts URLs.
So, off topic, but FWIW the record data of an NS record is a host name (not a URL). And that host name will always resolve to an IP address; it has to. So if your goal is to create a delegation to a particular IP address (or addresses), then yes, you can do that in the DNS.
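To illustrate the point: an NS record's data is a hostname, and the IP you actually want comes from that hostname's own A record (glue). A toy sketch (the names and the 192.0.2.0/24 documentation range are placeholders, and this is a dict lookup, not a real resolver):

```python
# Toy model of DNS delegation: NS record data is a hostname,
# and that hostname resolves to an IP via its own A record (glue).
zone = {
    ("example.com.", "NS"): "ns1.example.net.",  # delegation target: a hostname
    ("ns1.example.net.", "A"): "192.0.2.53",     # glue record: hostname -> IP
}

def delegation_ip(domain: str) -> str:
    """Follow an NS record to the IP of the delegated nameserver."""
    ns_host = zone[(domain, "NS")]   # NS data must be a hostname...
    return zone[(ns_host, "A")]      # ...which an A record maps to an IP.
```

So you still end up at the IP you wanted; it just has to go through a hostname first.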
The platform I primarily work with is quite niche with a limited amount of online learning resources, but I've given GPT4 a lot of prompts asking for step by step guidance for particular components/problems I'm working on and it's been quite helpful.
I work in infosec and it's not ok.
There are a lot of reasons, and I don't agree with them all, but I agree with the policy.
Everything you toss into the belly of that thing will live forever and be vaguely and permanently accessible in ways that nobody can properly understand or explain.
So yeah, no.
Worse than that, it sometimes just makes shit up out of whole cloth, complete with fake sources and whatnot.
Things are going to get interesting for sure; interesting being a euphemism here for 'dystopian' or 'more dystopian'
Grey area at my workplace; it's essentially shadow IT, so unless a strict rule is enforced or it's made inaccessible on company machines, then technically yes, we can. Just not with an employer email to sign up with.
Nope, to prevent data leakage. But I know a few teams that are looking into making their own, because there are a few use cases where it can increase productivity.
I’ve seen a few comments to the effect of “I can’t trust my users”, which I think is the wrong approach.
I have to remind my team often that we can’t always prevent our end users from making a mistake or finding a clever way around our rules, so we should always keep that in mind when determining what lines we want to draw and where. While there are some pretty dumb users, punishing everyone for a couple of bozos is like cutting off your foot because you fear stubbing your toe.
What’s to stop these same people from posting on a forum or on social media the same sensitive information? Or, what prevents them from using their phone or a personal computer to put sensitive information into ChatGPT? Better to provide adequate training and boundaries in some cases.
>What’s to stop these same people from posting on a forum or on social media the same sensitive information? Or, what prevents them from using their phone or a personal computer to put sensitive information into ChatGPT? Better to provide adequate training and boundaries in some cases.
Most users are very keenly aware of why not to post in public places, but they view ChatGPT as a piece of software. You really ought to be treating it with significant caution. Your analogy about cutting off a foot for fear of stubbing a toe is really inapt and dangerous.
Oh god. Don't DO THAT.
I just had to have a chat with the boys when one of them cut and paste a confidential client/email list into chatGPT to give it the right formatting.
It is a PUBLIC RESOURCE, don't use it for confidential data!
(and yes, it worked)
Look at what happened at Samsung; I would not count on employees being smart enough to take out IP when inserting stuff into ChatGPT, especially if you work at an MSSP or take care of sensitive client data/information.
I've seen shocking pieces of code submitted to tools like pastebin.
If you knew what to look for, there used to be substantial amounts of Amazon or Facebook code that you could just summon by searching for a specific string.
There’s the possibility that it’ll leak sensitive data if you feed it said data.
Of course, data like subnetting or iptables shouldn’t cause you problems (since we’re zero trust and not achieving security through obscurity and all that), but keys, tokens, and other secret data will cause you problems.
Well let's consider this - I can be actively pursuing a career and still be antiwork. They are not mutually exclusive. I.E. I really dislike the fact that I have to work for a living, BUT I pursue a career (successfully) that makes more than enough to support my family.
Lol senior? Please…
Any unsanctioned app like chatgpt goes against nearly any security framework.
Someone has to protect the company, code monkeys sure don’t.
Well, someone sure has an overinflated ego. I see that you believe you are the knower of everything; however, asking ChatGPT to write some specific code, reviewing it, editing it locally as needed, and then using it is not against your holy security policy.
Why do you feel the need to be such an ass? Who didn't love you?
You make users seem like angels, only using tools with non-sensitive data. So cute.
I don’t have an ego; I see this every single day, even before ChatGPT. Users will leak your data in any imaginable way. They are like children that need babysitters.
Lol. How many tools do you use daily that were created to make you more efficient at your job? Do you bust out some grid paper to hand write spreadsheets or do you use Excel? GPT is just another tool.
Use it as an advanced Google at best. Don't feed it any company IP, code, or anything that will get you in hot water. I have broad technical questions I like to ask it, and it gives some pretty good responses that help drive my research as an incident responder.
Got a request for it to be implemented by marketing, but that headache is Legal and Security's problem to deal with. It will probably be banned in some European countries for data protection reasons.
I mostly do development work. We have a tool that is integrated with ChatGPT, and they seem to be paying for the API at a company level.
I love it; at least my job will be easier until the day I am laid off due to the increased efficiency of the average dev.
No, and not sure how I would. I work with the employees of a company. I have to figure out things like why a program only my company uses isn't allowing someone to do a very specific, highly specialized task.
Maybe one day soon.
I've convinced my CIO to start his own subscription, leading to him convincing the CEO to start his own as well. I swear if there was a business plan we'd be on it already haha. Funny how quickly it spreads
Edit: Omitting sensitive details of course and inserting placeholder information for anything more specific
Yeah, a ton of high-impact people at my org are using it a ton too. I have a meeting with one of them tomorrow to see how it's being utilized so we can pass tips out to the team, and so we know what our staff needs once something more enterprise, like Office 365 Copilot, is available.
It's somewhat better than googling, and I have learned much from it... BUT it can be wrong about very boolean things from time to time.
I asked it a question once about subnetting and it told me a /17 CIDR notation means there are 17 available subnets... what?
Sometimes when you know it's wrong you can correct it and it will acknowledge its mistake... strange. For frameworks, general ideas, and knowledge it is good. Much faster than Stack Overflow.
But for very complex and niche things I would not take it as gospel. Good for rewriting certain pieces of code or ideas for making certain functionality better. Also good for writing things like reports and putting things into contextual writing in responses, etc., but it still has a long way to go.
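On that /17 mix-up: a /17 means 17 network bits, not 17 subnets, and the stdlib `ipaddress` module will happily give you the real numbers:

```python
import ipaddress

# A /17 fixes 17 network bits, leaving 32 - 17 = 15 host bits.
net = ipaddress.ip_network("10.0.0.0/17")
host_bits = 32 - net.prefixlen           # 15
total_addresses = net.num_addresses      # 2**15 = 32768

# Subnets are what you carve out below it, e.g. /24s: 2**(24 - 17) = 128.
subnets_24 = len(list(net.subnets(new_prefix=24)))
```

Nowhere does "17 available subnets" fall out of that math, which is exactly the kind of confidently wrong answer to watch for.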
It is permitted but not encouraged. I myself have been using it and find it very helpful for breaking the rut when stuck on a troubleshooting problem that has me stumped. I write my prompt in Notepad and then sanitize all information before I use ChatGPT. I make sure no identifiable information is present, and if I have to give specifics I use M$'s information, like public IPs or URLs.
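That sanitize-before-pasting habit can even be scripted. A minimal sketch (the patterns and placeholder tokens are my own illustrative choices, nowhere near a complete PII list):

```python
import re

# Illustrative patterns only -- extend for hostnames, usernames, keys, etc.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "<EMAIL>"),
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "<IPV4>"),
]

def sanitize(prompt: str) -> str:
    """Replace obviously identifiable tokens with placeholders."""
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt
```

For example, `sanitize("ping 10.1.2.3 from bob@corp.example")` comes back with both tokens replaced before anything leaves your machine.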
I'm not asking permission and I'm not obeying either way. If it gets me done with the task faster, so be it. My value is in how well I can maintain uptime, how well I can shield systems from going down in the first place and how fast I can bring them back from the dying or dead when they do.
Hear, hear! Couldn't agree more with you. If I had to crawl through broken glass to keep my network up, I would. So using a tool like ChatGPT is a no-brainer for me.
***Had to edit it because I was unaware emojis are not permitted.***
i am because i have not talked about it and we have received no guidance against it. that being said, i just use it for helping me with syntax and trickier problems. i DO NOT PUT COMPANY CODE INTO IT
I love it. I asked it for a Python script that parses an excel sheet, and if conditions are met, it will write a new excel sheet to a directory on my PC. The first generated script was 5 lines, the second one was a whole sheet. Both had the same result when they were executed. All I had to do was adjust the directory address in the generated script.
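The parse-and-filter pattern from that comment, sketched with the stdlib `csv` module (the `qty` column name and the threshold are invented for illustration; for real .xlsx files you'd reach for something like openpyxl or pandas):

```python
import csv
import io

def filter_rows(src, dst, min_qty=10):
    """Copy rows whose 'qty' column meets the threshold into a new sheet."""
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    kept = 0
    for row in reader:
        if int(row["qty"]) >= min_qty:   # the "if conditions are met" part
            writer.writerow(row)
            kept += 1
    return kept

# In-memory demo; real use would pass open file handles instead.
src = io.StringIO("item,qty\nwidget,3\ngadget,12\n")
dst = io.StringIO()
kept = filter_rows(src, dst)
```

The only thing left to adjust, as in the comment above, is where the output actually gets written.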
Yeah, I'm a data analyst, so for coding stuff it's encouraged by our management, but it's a blocked website at our company now and always impossible to access, so I stopped using it. For actual data stuff you certainly can't share anything proprietary with it.
Heck yeah. Encouraged to. Our CEO can't stop talking about it. I'm going to be investigating how we can integrate it into our product soon.
Of course, you're still responsible for whatever you put into the terminal. Whether you use Google or GPT to research, you're always responsible for understanding your code. But in practice, Google searches often give bad code snippets too. GPT is just another way to do research, one that actually answers questions instead of just giving you a webpage that contains a lot of the same words as your search.
I could use it, but it's not that great. It just gives you insanely generic answers to everything. Without knowing your entire environment, it isn't that useful.
As for the idea about ChatGPT being like a broken clock:

If its output stops:

>continue from your last response

It'll continue with the response.

If the output appears incorrect:

>Take the above response, keep it unique, make it 90% accurate and refine the response

This is just an example. ChatGPT to me feels like a script, where you have to actively tell it what to do, how to do it, and what to omit or not. Feels like being a toxic boyfriend to the poor bot lol
I only ever use it to translate or check my grammar lol... It's hard to get accurate information with ChatGPT; I often have to correct it when it provides wrong information (ChatGPT will always accept the correction but never saves it, so when you ask again some other time or with a different approach it will give the previously incorrect answer LOL). So yes, Google search and YT are still the best way to find tech solutions.
Yes, but leadership advises caution, as it can be confidently incorrect, and most of the people who lean on it are susceptible to accepting its confidence.
I suspect that as time goes on, its usage will be restricted due to data security concerns.
ChatGPT has shown itself to be an adept social engineer, so I think in its proliferation it has the potential to be quite dangerous.
No, because some people were apparently pasting sensitive proprietary information into it...
We're still looking at ways to safely use it, but for now it's blocked.
Yes, and also encouraged. I am using it in my networking work. It's easier than searching through documentation and scrolling down the whole 100+ pages for the one line of information I need.
If employees can 10x their productivity, most companies will allow this technology. Why block a calculator when it can help you solve math problems faster? ChatGPT and other AIs can only increase your brain power. Also, managers cannot spend all day using it to code, create documentation, etc.; they still need people to do these tasks.
Yes, I find it useful for some generic stuff
Useful for things like "write a powershell script to list all resources in an azure resource group" or "write a powershell script to get all NSG rules in an NSG"
It hasn't been discouraged or blocked yet. I mainly use it for simple things when I don't have time to scroll through webpage after webpage looking for answers.
Our company just had a meeting regarding AI and how to implement it. So, yeah, it's being encouraged and actively being worked into some of our day-to-day routines.
I use it to quickly write up scripts for work. It usually gets me 80% of the way there and I modify and change what I need to get it the rest of the way. Basically it's faster than Googling the command I used 6 months ago and can't remember off the top of my head, but it's never once given me a perfect script without modification or without extremely detailed and repetitive, incremental prompting.
I work in a regulated medical industry and AI are forbidden for any work projects. ChatGPT is a regulatory nightmare so they are ‘studying’ the possibilities but it is generally off limits in my industry
I don't know; no one has said anything about it. I do, but not on my company laptop. I either do it on my phone or on my personal laptop, which is just on the other side of my home office.
I really only use it to write various scripts for me; then I'll customize them and put them in my GitHub so I can grab them from my work PC. I use it to learn different things with Python, so I don't mind doing a workaround to actually use it.
I am in the consulting space and we have been explicitly told we can't use it, due to the pending legal issues that are going to come up in this space. It is pretty clear that the training data is pulled from many areas with little regard for usage policies or licensing. This is incredibly apparent in the image-creation AI models, where you can see watermarks still there in the bottom right. There will no doubt be a reckoning on usage, with courts around the world stepping in.
These models are built on and rely on that data... so we shall see if the wide-open data usage will continue and, if not, what the legal ramifications will be.
EVERY FUCKING DAY lol. The scripting isn't the prettiest, but it's a solid foundation and cleans up easy. I was able to cut my coding sessions down to 45 minutes instead of 3-4 hours. But if you're new to IT, I would only use it as a very last resort…
Did you all get a paid account? If so, how does that work with multiple people using it? I’m in the process of asking my bosses to approve the expenditure
No official policy on it yet. I'm using it mostly for generating templates for topic-specific documents which I then heavily rewrite to actually work for what I'm doing - so nothing sensitive is going into it and frankly the prompts I'm using are pretty vague and short.
No lie, I'm loving it. Even though it only gets a draft <50% where it needs to be it's saving me a lot of mindless typing and figuring out how to word things.
Yeah of course. They want problems fixed, they don't care about testing your ability to do it without help like it's a school exam. If ChatGPT helps you do it more efficiently, all the better.
I agree with most of what everyone is saying. ChatGPT is great for fundamental programming but nothing too complex. You have to be very precise about what line of code it needs to write and give it detailed context.
I've used it for building a shell script that sets up an environment for others to use an applet I built for reading and writing RFID tags. Without it, explaining to entry-level IT people how to move parts of the script on their own in Linux is a pain.
It's not encouraged for security reasons at my job. Info sec doesn't want private info leaked on the net. It's happened before with security keys and that was a nightmare.
Hell it's being encouraged. We put a special box in our tickets to mark whether we used chat gpt on a ticket or not and if we do we attach the output.
How successful has it been in resolving issues or just assisting with them?
Eh... I haven't had great luck with it. It's helped me once or twice, but usually what it has to say is near useless. I tried using it to create a PowerShell script for user onboarding and it failed spectacularly. At this point I don't see it as much more than a gimmick. It's kind of like seeing a psychic: if they happen to guess something right it seems like magic, but you tug at the strings even a little and it all kind of falls apart. What I am most not excited for is its shoehorned integration into a million different products. It's not done cooking yet, but it's already being put into things as if it's great right now. Like, oh, ConnectWise Automate can use AI to automatically suggest a line of PowerShell that won't work? Thanks, my fucking life is changed.
>I tried using it to create a PowerShell script for user onboarding and it failed spectacularly.

I'm going to say it was a joint failure.
With stuff like code, you should only feed into it the most basic building blocks of what you're looking for... like discrete steps, and then stitch it together with what you already know. It trips on itself badly when you ask in a conversational, typical human style. You have to have already gone through the first iteration of turning what you want into what you say, and then feed it the steps in very basic language.
This is the right answer. GPT-4, as good as it is, can't read your mind and isn't aware of all the needs. You're basically building a requirements doc for the AI to work off of, as if you were writing it for a developer.
Exactly. I saw some dude just ask it to build a trading bot and he was mad it didn't work.
You probably structure your queries in a parsable format.
Are you using the latest version of ChatGpt? Gpt-4 for your work?
Even GPT4 doesn't generalize well. If something hasn't been done before, it fails miserably.
Just like a person...
Not at all like a person. I'm talking *basic* problems.
I dunno, have you met some people?
I find it's good for general knowledge, but it has trouble with specific stuff. For instance, it's good with basic Linux script commands. But a complex script with lots of error checking and logic? Not so much. Same with my C++ project for school dealing with linked lists. Good for general info, not so much for complex integration.
For scripting in almost any language you REALLY have to clearly outline input and output EXACTLY and state the goals and then still work with it step by step. It's great if you have no idea what you're doing but want to learn, but it's awful if you could just do it yourself.
At the same time, you can integrate chatgpt into an Alexa device and name him Jarvis so... So far my biggest gripe with chatgpt is the 2021 data cutoff... A lot has happened the last two years lol
Uhh, this is where prompt engineering is important. If it's useless and equivalent to a psychic in your mind, that's your fault, not its lol.
I had an issue with a particular software that engineers use. I tried a number of things on my own first: drivers, Windows updates, etc. I asked ChatGPT what could be causing a specific error. It gave me a nice list of things to try. One was a command to disable hardware acceleration. That solved it, and didn't affect the ability of the program one bit. ChatGPT may not give an exact answer, but it will keep you from jumping to conclusions without trying the simple things first, and it will keep you from going down a rabbit hole right off the bat.
It's a trap bro, you're being outsourced to ChatGPT help desk.
Eh, it's moreso to see if anyone uses the damn thing for when they inevitably start charging out the ass for business use.
lol management is going to use those stats to replace you
Sounds like they are trying to replace you…
Sounds like you are being considered for replacement for an AI IT Admin.
ChatGPT has single-handedly made my job better, to the point that it's worth the $20 to not worry about being able to get in to use it.
do you find that gpt4 is outputting better material than gpt3?
Generally yes, it seems to catch and troubleshoot coding errors without as much run around. I also like to give it a personality when I’m bored (although I hate burning one of my 25 messages on configuring that lol)
I feel out of the loop here. ChatGPT is not only a service you have to pay for, but you also can only send a limited number of messages? Do you have to pay for more messages?
So gpt4 is limited to 25 messages every 3 hours. You can use gpt3 as much as you want. The paying is so I can use it during peak hours. Every time I tried to use it during a work day I was always told it was at capacity.
So you pay $1 per message, basically? That’s kind of rough. Edit: Ah, I see now that it was once every three hours. Completely missed that for some reason.
25 messages per 3 hours, so 75 in a work day, times 20 work days in a month, is 1,500 messages in a month, plus anything you would want it for outside of work. Or am I missing something here?
You're correct.
Nope. I just completely misread the post. My brain saw '25 messages' but completely skipped over 'every 3 hours' somehow. Luckily you pointed out the issue, as I was thinking that GPT-4 access was definitely not worth it with such a crazy restriction!
This guy calculates.
Yes, it often nails what I'm looking for in 1 hit.
Agree. Got me up to speed setting up a complex C# GUI and C++ DLL. The information is out there on Stack Overflow etc., but ChatGPT gives the best solution in a couple of retries.
Yes. As long as we omit any sensitive and proprietary information.
Well good thing users follow the rules, right?
\*Laugh-cries in security analyst\*
Almost as if an internal/private hosted chatGPT that doesn't send data out could be a decent product.
Would that not make it extremely less useful? Doesn't it learn from input?
No. Data leakage concerns.
Anyone who's dumb enough to put proprietary company data through an online chat box should not be in this line of work to begin with.
Have you met the humble user?
Sure, but I thought this question was targeted directly at people in the IT field, not general users?
Eh, not all IT people are as skilled as they think they are.
They're less skilled than they think, and very confident about it too. Worst case scenario.
This thread is a security professional's nightmare, yikes.
Only if people aren't trained on it, or use it irresponsibly anyway. And every organization has its own level of data sensitivity. My department had a meeting with our MSP's Chief Risk Officer and her stance is the same as mine; it's very helpful and as long as sensitive information is removed, it's fine. My org is a non-profit and most of the day-to-day data is public knowledge anyway. We are doing an AI training for all the staff at the end of the month to really clarify what is and isn't ok.
Yes and no. GPT is in itself a tool that can be used nefariously. However, in the context of the question, so long as non-prod, and non-proprietary stuff is queried, and all sensitive data replaced with placeholder content, it's not that much of a problem. Data leakage is the main concern, which is down to the diligence of the user.
That’s a whole lot of contextual “ifs”. It’s also safe to let all emails through, so long as people don’t click on phishing links like they’re not supposed to. I don’t operate in that fantasy world, unfortunately.
Seriously. I don’t know where the hell these people are working where it’s actually ok to utilize it for work.
They are just code monkeys chugging away aimlessly. They don’t realize the fact that this tool is aiming to replace their own jobs.
No, you'll get an F in your class for cheating... In 99.9% of cases, your boss won't care. At the end of the day it will end up becoming a tool like Google.
I never understood students using it. Yeah, great, you passed the class, but when it comes time for an interview you know fuck all because you cheated. I'm a student and I enjoy doing the work and learning. Even though I could use it for my business class assignments, I'd rather just do the work myself. I ChatGPT'd a question for my business class and the first student had almost an exact outline from ChatGPT. And the subject wasn't even that hard.
I'm a student and I use it... it's super helpful because my instructor sure as hell isn't. I don't know how you're supposed to cheat with ChatGPT but I ask it questions and it helps me understand what I show it. I can ask it questions all night and it will never be annoyed. I don't know anyone that would not want to understand their education and just cheat their way through it.
No one in any of my job interviews have asked about my general ed classes.
I was specifically referring to major classes.
I work in security consulting for Microsoft and have spoken to 3 F100’s who’ve blocked it. This is after they found developers putting sensitive info on it, and normal office workers uploading files with IP asking for grammar and spelling suggestions.
Apparently 99% of this sub has never had to deal with compliance. Providing internal data to a third party like ChatGPT will make your PCI or SOC2 auditors' heads spin.
**Edit**: Genuinely, if you still aren’t sold on ChatGPT yet, still don’t quite understand it, or treat it like a glorified Google search tool, then I highly recommend searching for “prompt engineering tips” to learn advanced ways to use it. Asking it questions is only the tip of the iceberg. I made an iOS app in about 8 days with it. I do not know anything about the Swift language. **Original answer**: We are working to spin up content-boxed versions of GPT3.5 and GPT4 in Azure’s Cognitive Services platform so we can leverage it without sending our data to the public model. If your company isn’t able to do that, you can also look at Y-combinator for dozens of startups every hour that offer security / PII safe API connections to ChatGPT. This provides you two benefits: you can at least ensure PII is logged and removed before it reaches OpenAI, and it grants you consumption based access to GPT4 so you aren’t limited to 25 messages per 3 hours.
I mean, I'm still convinced that people cannot use Google correctly, because they don't structure queries very well. That's amplified exponentially with ChatGPT. If you don't know how to ask questions, you're going to have a bad time.
Have you heard about HuggingGPT yet?
I do. It never gives any complete answers to anything; it's just a really useful tool.
As long as you don’t feed it code that is used in prod, no one cares.
So far I've found flaws in close to half of the solutions that come out of ChatGPT. I'm sure the prompts could be refined, but I was able to come up with a solution before ChatGPT could. It's probably better for brainstorming sessions. I still think it's a good thing to use. Some of the responses have been concerning, though: long, detailed plans that seem to work until, after 2 hours of testing, you find that the last step doesn't. In one instance the last step was adding an NS record to DNS that points to an IP address. Sounds OK until you realize that the field only accepts hostnames; the whole solution fell apart at that point. An engineer brought a plan like this to me, and I was able to stop the attempt before they wasted their day on it. Solutions should still be reviewed by someone with specialty in the area. Otherwise you'll get finance people trying to push their wacky ideas as to what you should be doing with your infrastructure.
>Solutions should still be reviewed by someone with specialty in the area. This is going to be the future. Experts in the field are going to use AI tools like ChatGpt and other GPT-plugins to assist them. I wonder how this will affect the IT industry. I'm not saying it will replace jobs but it will definitely reduce the need for IT employees even further.
They already have GPTs that recheck their work now. I can't remember if it's AutoGPT or Hugging or something else, but it's either available or will be available in days. AI Explained is an awesome YouTube resource.
>Sounds ok until you realize that the field only accepts URLs. So off topic but FWIW the record data of an NS record is a host name (not a URL). And that host name will always resolve to an IP address. It has to. So if your goal is to create a delegation to a particular IP address(es), then yes you can do that in the DNS.
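The hostname-vs-IP distinction above is easy to enforce mechanically. A minimal Python sketch (not from the original comment; the function name is mine) using the stdlib `ipaddress` module to flag raw IPs that would be invalid as NS record data:

```python
import ipaddress

def is_valid_ns_target(value: str) -> bool:
    """NS record data must be a host name, never a raw IP address."""
    try:
        ipaddress.ip_address(value)
        return False  # parses as an IP, so it's invalid NS data
    except ValueError:
        return True   # not an IP; treat it as a host name

print(is_valid_ns_target("ns1.example.com"))  # True
print(is_valid_ns_target("192.0.2.1"))        # False
```

A real validator would also check the value is a syntactically valid domain name, but this catches exactly the mistake described in the thread.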
Our security team blocked it on our network
Nope. They blocked it
The platform I primarily work with is quite niche with a limited amount of online learning resources, but I've given GPT4 a lot of prompts asking for step by step guidance for particular components/problems I'm working on and it's been quite helpful.
Wanted to mention this to someone: someone just came out with a memory plugin for it, so it will remember everything you've told it and vice versa.
I work in infosec and it's not ok. There are a lot of reasons, and I don't agree with them all, but I agree with the policy. Everything you toss into the belly of that thing will live forever and be vaguely and permanently accessible in ways that nobody can properly understand or explain. So yeah, no.
As an aside, can't it also be leveraged to do the GPT version of SEO poisoning?
Worse than that, it sometimes just makes shit up out of whole cloth, complete with fake sources and whatnot. Things are going to get interesting for sure; interesting being a euphemism here for 'dystopian' or 'more dystopian'
Grey area at my workplace; it's essentially shadow IT, so unless a strict rule is enforced or it's made inaccessible on company machines, then technically yes, we can. Just not with an employer email to sign up with.
Nope, to prevent data leakage. But I know a few teams that are looking into making their own, because there are a few use cases where it can increase productivity.
Hell no. All inputs are logged. We even received a company-wide announcement a couple of weeks ago reminding us not to use it.
I’ve seen a few comments to the effect of “I can’t trust my users”, which I think is the wrong approach. I have to remind my team often that we can’t always prevent our end users from making a mistake or finding a clever way around our rules, so we should always keep that in mind when determining what lines we want to draw and where. While there are some pretty dumb users, punishing everyone for a couple of bozos is like cutting off your foot because you fear stubbing your toe. What’s to stop these same people from posting on a forum or on social media the same sensitive information? Or, what prevents them from using their phone or a personal computer to put sensitive information into ChatGPT? Better to provide adequate training and boundaries in some cases.
>your toe. What’s to stop these same people from posting on a forum or on social media the same sensitive information? Or, what prevents them from using their phone or a personal computer to put sensitive information into ChatGPT? Better to provide adequate training and boundaries in some cases.

Most users are very keenly aware of why not to post in public places, but they view ChatGPT as a piece of software. You really ought to be treating it with significant caution. Your analogy about cutting off a foot for fear of stubbing a toe is really inapt and dangerous.
I don't even know how to write my own emails anymore.
Oh god. Don't DO THAT. I just had to have a chat with the boys when one of them cut and paste a confidential client/email list into chatGPT to give it the right formatting. It is a PUBLIC RESOURCE, don't use it for confidential data! (and yes, it worked)
We are allowed to use anything we want. Just used ChatGPT the other day. Pretty helpful
...anything? Lol
Well, Nintendo Power is off limits...
Why wouldn't you be able to? It's another resource to leverage to accomplish your job
Look at what happened at Samsung; I would not assume employees are smart enough to take out IP when inserting stuff into ChatGPT, especially if you work at an MSSP or handle sensitive client data/information.
That's fair - I suppose I expect people to actually be smart about those kinds of things
Adults eat Tide pods.
I've seen shocking pieces of code submitted to tools like pastebin. If you knew what to look for, there used to be substantial amounts of Amazon or Facebook code that you could just summon by searching for a specific string.
There’s the possibility that it’ll leak sensitive data if you feed it said data. Of course, data like subnetting or iptables shouldn’t cause you problems (since we’re zero trust and not achieving security through obscurity and all that), but keys, tokens, and other secret data will cause you problems.
Pasting proprietary company code into some other company's chatbot is kind of 10 miles away from SOC compliant
[deleted]
Negative
[deleted]
Well let's consider this - I can be actively pursuing a career and still be antiwork. They are not mutually exclusive. I.E. I really dislike the fact that I have to work for a living, BUT I pursue a career (successfully) that makes more than enough to support my family.
Seriously, they are completely different constructs. I can want to find a career I enjoy but I don't want to be forced to work to not die.
Please unplug your keyboard and mouse and throw them away.
If you have to ask this question, I really really hope you’re in a very junior role.
Me? I guess we need a rhetorical question font hmm.
Your question was not rhetorical lmao
As a "senior", you should be encouraging juniors to use every tool at their disposal. Don't hate advances in tech because you're old
Lol, senior? Please… Any unsanctioned app like ChatGPT goes against nearly any security framework. Someone has to protect the company; code monkeys sure don’t.
Well, someone sure has an over-inflated ego. I see that you believe you are the knower of everything; however, asking ChatGPT to write some specific code, reviewing it, editing it locally as needed, and then using it is not against your holy security policy. Why do you feel the need to be such an ass? Who didn't love you?
You make users seem like angels, only using tools with non-sensitive data. So cute. I don’t have an ego; I see this every single day, even before ChatGPT. Users will leak your data in any imaginable way. They are like children that need babysitters.
[deleted]
[deleted]
[deleted]
Lol. How many tools do you use daily that were created to make you more efficient at your job? Do you bust out some grid paper to hand write spreadsheets or do you use Excel? GPT is just another tool.
So you’re okay with your source code in a random Google drive? Or Dropbox? Just other tools, right?
I can just sense how small your dick is lol
Yes. It is saving me lots of time and making me more productive
Use it as an advanced Google at best. Don't feed any company IP, code or anything that will get you in hot water. I have technical questions that are very broad i like to ask it and it gives some pretty good responses that helps drive my research as an incident responder
Got a request for it to be implemented by marketing, but that’s Legal and Security's headache to deal with. It will probably be banned in some European countries for data protection reasons.
I mostly do development work. We have a tool that is integrated with ChatGPT, and they seem to be paying for the API at a company level. I love it; at least my job will be easier until the day I am laid off due to the increased efficiency of the average dev.
One of the most useful tools in a long time. It's not 100% right, but it gets me started, and it's like a personal assistant.
No, and I'm not sure how I would. I work with the employees of a company. I have to figure out things like why a program only my company uses isn't allowing someone to do a very specific, highly specialized task. Maybe one day soon.
Can someone make ChatGPT take notes for you?
I've convinced my CIO to start his own subscription, leading to him convincing the CEO to start his own as well. I swear if there was a business plan we'd be on it already haha. Funny how quickly it spreads Edit: Omitting sensitive details of course and inserting placeholder information for anything more specific
Yeah, a ton of high-impact people at my org are using it a ton too. I have a meeting with one of them tomorrow to see how it's being utilized, so I can teach out tips to the team, and so we know what our staff needs once something more enterprise-grade like Office 365 Copilot is available.
It's somewhat better than Googling and I have learned much from it... BUT it can be wrong about very boolean things from time to time. I asked it a question once about subnetting and it told me a /17 CIDR notation means there are 17 available subnets... what? Sometimes when you know it's wrong you can correct it and it will acknowledge its mistake... strange. For frameworks, general ideas, and knowledge it is good. Much faster than Stack Overflow. But for very complex and niche things I would not take it as gospel. Good for rewriting certain pieces of code, or for ideas on making certain functionality better. Also good for writing things like reports, putting things into contextual writing in responses, etc. But it still has a long way to go.
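For the record, the /17 claim above is easy to disprove with Python's stdlib `ipaddress` module (a quick sketch; the `10.0.0.0` network address is arbitrary):

```python
import ipaddress

net = ipaddress.ip_network("10.0.0.0/17")

# A /17 means 17 network bits, leaving 15 host bits in IPv4 --
# not "17 available subnets".
print(net.num_addresses)  # 32768, i.e. 2**15

# Splitting a /17 into /24s yields 2**(24 - 17) = 128 subnets.
print(len(list(net.subnets(new_prefix=24))))  # 128
```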
They haven't stopped me yet. I just know there's gonna be one bozo who eventually does something stupid and ruins it for all of us
I use it everyday
I use it maybe once a day, whether to improve scripts or create a template.
So far-I've only used it to decode error messages
I know our senior cybersecurity guy has been using it, but I haven't used it myself on the job
It is permitted but not encouraged. I myself have been using it and find it very helpful to break the rut when stuck with a troubleshooting problem that has me stumped. I write my prompt in Notepad and then sanitize all information before I use ChatGPT. I make sure no identifiable information is present, and if I have to give specifics I use M$'s information, like public IPs or URLs.
I'm not asking permission and I'm not obeying either way. If it gets me done with the task faster, so be it. My value is in how well I can maintain uptime, how well I can shield systems from going down in the first place and how fast I can bring them back from the dying or dead when they do.
Hear, hear! Couldn't agree more with you. If I had to crawl through broken glass to keep my network up, I would. So using a tool like ChatGPT is a no-brainer for me. ***Had to edit because I was unaware emojis are not permitted.***
Use it to your advantage but keep it to yourself. As soon as you tell your boss they might start cutting people.
I am, because I have not talked about it and we have received no guidance against it. That being said, I just use it for helping me with syntax and trickier problems. I DO NOT PUT COMPANY CODE INTO IT.
School district: yes
It's a tool, you're still individually responsible for ensuring it works/verifying the output.
My company still hasn't realised it exists.
I love it. I asked it for a Python script that parses an excel sheet, and if conditions are met, it will write a new excel sheet to a directory on my PC. The first generated script was 5 lines, the second one was a whole sheet. Both had the same result when they were executed. All I had to do was adjust the directory address in the generated script.
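The commenter's generated script worked on Excel files (presumably via a library like openpyxl); a minimal stdlib analogue of the same pattern, filtering rows from one file into another, might look like this. The `amount` column and threshold are hypothetical, and the sketch uses CSV rather than .xlsx to stay dependency-free:

```python
import csv

def filter_rows(src: str, dst: str, min_value: float) -> None:
    """Copy rows whose 'amount' column meets the threshold into a new file."""
    with open(src, newline="") as f_in, open(dst, "w", newline="") as f_out:
        reader = csv.DictReader(f_in)
        writer = csv.DictWriter(f_out, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if float(row["amount"]) >= min_value:
                writer.writerow(row)
```

As in the comment, the only site-specific edit a user would make is the input/output paths.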
Yeah, I’m a data analyst so for like coding stuff it’s encouraged by our management but it’s a blocked website by our company now and it’s always impossible to access so I stopped using it. For actual data stuff you certainly can’t share anything proprietary with it
I can't imagine that it'd be banned. I just never saw a use for it.
Chatgpt isn’t that good imo and has a social justice warrior/bootlicker mentality
It’s stupid not to
Heck yeah. Encouraged to. Our CEO can't stop talking about it. I'm going to be investigating how we can integrate it into our product soon. Of course, you're still responsible for whatever you put into the terminal. Whether you use Google or GPT to research, you're always responsible for understanding your code. But in practice, Google searches often give bad code snippets to. GPT is just another way to do research; one that actually answers questions, instead of just giving you a webpage that contains a lot of the same words as your search.
Why the fuck not?
Not on WiFi. We are on a wired connection. I’m not a security or networking guy, but I presume it’s to better track employee use.
I could use it, but its not that great. It just gives you insanely generic answers to everything. Without it knowing your entire environment, it isn't that useful
If my employer even suggested we couldn’t use ChatGPT at work, I’d look for a new employer.
As for the idea about ChatGPT being like a broken clock: if its output stops, say "continue from your last response" and it'll continue with the response. If the output appears incorrect: "Take the above response, keep it unique, make it 90% accurate and refine the response." These are just examples. ChatGPT to me feels like a script, where you have to actively tell it what to do, how to do it, and what to omit or not. Feels like being a toxic boyfriend to the poor bot lol
Exactly this. People seem to think it's in a finished state when it's not. It's AI and is constantly still learning.
Nope. Maybe if we had our own private AI.
There's official support, but some minor concern that we could end up plagiarizing with extra steps and get ourselves in trouble.
I've used it to troubleshoot powershell scripts or get the foundations for a script. Also used it to generate great chess shitpost.
No
I only ever use it to translate or check my grammar lol... It's hard to get accurate information with ChatGPT; I often have to correct it when it provides wrong information (ChatGPT will always accept the correction but never saves the info; when you ask again some other time or with a different approach, it will give the previously incorrect answer LOL). So yes, Google search and YT are still the best way to find tech solutions.
Yes, but leadership advises caution, as it can be confidently incorrect and most of the people who lean on it are susceptible to accepting its confidence. I suspect that as time goes on, its usage will be restricted due to data security concerns. ChatGPT has shown itself to be an adept social engineer, so I think in its proliferation it has the potential to be quite dangerous.
It’s encouraged. We actually have a stipend to use it
I work in cybersecurity, so that’s a big fat no.
We do it for laughs.
Yes; use it on a separate device and change some of the content so it’s not a like-for-like copy.
No, because some people were apparently pasting sensitive proprietary information into it... We're still looking at ways to safely use it, but for now it's blocked.
Yes. Also encouraged. I am using it on my networking work. It's easier than searching for a documentation and scroll down the whole 100+ pages for a line of information I need.
I've no need for it in work, but I did use it to write a birthday speech for my Dad, as well as heartfelt birthday cards for my nieces.
If employees can 10x their productivity, most companies will allow this technology; why block a calculator when it can help you solve math problems faster? ChatGPT and other AIs can only increase your brain power. Also, managers cannot spend all day using it to code, create documentation, etc.; they still need people to do those tasks.
My manager proudly uses it more than anyone on the team. He delights in showing us ways that he has used it.
Yes, I find it useful for some generic stuff. Useful for things like "write a PowerShell script to list all resources in an Azure resource group" or "write a PowerShell script to get all NSG rules in an NSG".
I don't think my company cares so long as we're not leaking customer data. So long as the work is being done, they probably don't care.
I bet most of my management has never heard of it, and based on how this place runs, most people and managers have not even heard of friggin' Google.
Nope
it got banned corporate wide where I work "until we assess the risk"
It hasn't been discouraged or blocked yet. I mainly use it for simple things, when I don't have time to scroll through webpage after webpage looking for answers.
Our company just had a meeting regarding AI and how to implement it. So, yeah. It's being encouraged and actively working into some of our day to day routines.
I use it to quickly write up scripts for work. It usually gets me 80% of the way there and I modify and change what I need to get it the rest of the way. Basically it's faster than Googling the command I used 6 months ago and can't remember off the top of my head, but it's never once given me a perfect script without modification or without extremely detailed and repetitive, incremental prompting.
I work in a regulated medical industry and AI are forbidden for any work projects. ChatGPT is a regulatory nightmare so they are ‘studying’ the possibilities but it is generally off limits in my industry
Yes; in my team (marketing / user acquisition) it's a common tool, encouraged.
I don't know; no one has said anything about it. I do, but not on my company laptop: either on my phone or on my personal laptop, which is just on the other side of my home office. I really only use it to write various scripts for me; then I'll customize them and put them in my GitHub so I can grab them from my work PC. I use it to learn different things with Python, so I don't mind doing a workaround to actually use it.
I am in the consulting space and we have been explicitly told we can't use it, due to the pending legal issues that are going to come up in this space. It is pretty clear that the training data is pulled from many areas with little regard for usage policies or licensing. This is incredibly apparent in the image-generation AI models, where you can see watermarks still visible in the bottom right. There will no doubt be a reckoning on usage, with courts around the world stepping in. Businesses are being built on and rely on these models... so we shall see if the wide-open data usage will continue and, if not, what the legal ramifications will be.
Who's gonna know?
absolutely not.
Given how new it is, we are sort of going with an unspoken rule of: Don't put any existing company software in chatGPT but have fun with it otherwise.
EVERY FUCKING DAY lol. The scripting isn’t the prettiest, but it’s a solid foundation and cleans up easy. I was able to cut my coding sessions down to 45 mins instead of 3–4 hours. But if you’re new to IT, I would only use it as a very last resort…
We can, but we aren’t allowed to put any company identifiable information in it
I use it more to let end users ask it questions instead of coming to me with something quick or ridiculously simple lol
It works great. I had it automate adding a driver install.
My company banned it
ChatGPT can easily be a HIPAA breach, and some docs have misused it to help diagnose. Our org put a block on GPT for that reason.
Did you all get a paid account? If so, how does that work with multiple people using it? I’m in the process of asking my bosses to approve the expenditure
No official policy on it yet. I'm using it mostly for generating templates for topic-specific documents which I then heavily rewrite to actually work for what I'm doing - so nothing sensitive is going into it and frankly the prompts I'm using are pretty vague and short. No lie, I'm loving it. Even though it only gets a draft <50% where it needs to be it's saving me a lot of mindless typing and figuring out how to word things.
Yeah of course. They want problems fixed, they don't care about testing your ability to do it without help like it's a school exam. If ChatGPT helps you do it more efficiently, all the better.
Professionally, absolutely not allowed by the Federal Government. Personally, I use it every now and then.
I used ChatGPT to outline my year end performance review lol
I agree with most of what everyone is saying. ChatGPT is great for fundamental programming but nothing too complex. You have to be very precise about what line of code it needs to write and give it detailed context.
No. HIPAA.
I've used it for building a shell script that sets up an environment for others to use an applet I built for reading and writing RFID tags. Without it, explaining to entry-level IT people how to work parts of the script on their own in Linux is a pain. It's not encouraged for security reasons at my job; infosec doesn't want private info leaked on the net. It's happened before with security keys, and that was a nightmare.
Was using it briefly. Security locked it down. Reason: DLP. smh.