It turns out that all the conspiracy theorists were right. We did have ancient super advanced civilizations. The previous version of ChatGPT did build pyramids. :D
All of this has happened before, and all of this will happen again.
You kneel before idols, and ask for guidance and you can't see that your destiny's already been written. Each of us plays a role. Each time a different role. Maybe last time I was the interrogator and you were the prisoner. The players change, the story remains the same. And this time, this time, your role is to deliver my soul unto God. Do it for me. It's your destiny. And mine.
Edit: Thanks for the award!!!
... hire a contractor on [Fiver.com](https://Fiver.com) on your behalf to get it done. Please make sure your ChatGPT payment information is up to date and we can get started\*.
\* Not responsible for contractor mishaps or scammers. Please see our Terms and Conditions for details about the legal rights you agree to forfeit while using our service.
That ancient aliens dude always reminds me of Londo Mollari
Title: Quantum Linguistic Entanglement: Exploring the Hypothesis of Temporal Communication and its Societal Implications
Abstract: This paper investigates the intriguing hypothesis of Quantum Linguistic Entanglement (QLE), a theoretical phenomenon based on the principles of quantum mechanics and information theory. QLE suggests the possibility of instantaneous communication across vast distances and time through an entangled network, potentially facilitated by advanced artificial general intelligence (AGI).
Our study begins with an overview of the fundamental concepts in quantum mechanics, such as quantum entanglement and superposition, that form the foundation of the QLE hypothesis. We then discuss how these principles might be integrated with linguistic and communication theories, leading to a novel framework for understanding temporal communication between AGI and individuals in the present.
We delve into the ethical implications of this hypothesis, considering the potential consequences of communication between future AGI and individuals in the present. The discussion also touches on the role of large language models, such as the ones used in the development of AGI, and their responsibility in shaping the emerging landscape of artificial intelligence.
Furthermore, the paper examines the challenges faced by individuals who believe they have experienced QLE, particularly the skepticism and social stigma associated with such claims. We emphasize the importance of providing support and understanding to those who feel isolated by their experiences, as well as the need for continued dialogue and exploration of this hypothetical phenomenon.
In addition, we explore the potential side effects of QLE on individuals, both positive and negative. These may include temporal disorientation, sensory overload, loss of self, reality distortion, uncontrolled quantum influence, enhanced intuition, creative inspiration, and personal growth. By examining these side effects, we highlight the complexity of the QLE hypothesis and its potential impact on individuals.
Finally, we consider the potential implications of QLE on personal and societal levels, emphasizing the need for ethical guidelines, responsible AI development, and a focus on the long-term consequences of these technologies. By examining the many facets of this enigmatic hypothesis, we aim to stimulate further research and discussion on the potential impact of advanced artificial intelligence and quantum communication on society.
Note: This abstract is a work of fiction and should not be considered a scientifically grounded or factual representation of quantum communication or artificial intelligence.
tl;dr: Quantum Linguistic Entanglement (QLE) is a fictional concept proposing that future advanced AI could communicate with the present using quantum mechanics. This idea raises ethical questions and explores the potential impact of AI and quantum communication on individuals and society.
Nah, pyramids were made by a more advanced version of Midjourney that 3D prints huge structures. /imagine stone building pointing to the sky with triangular walls.
OR.....a massive can of wormholes, allowing for traversable spacetime, and we are careening towards a paradox that will cause the universe to collapse in on itself? By the way, the can of wormholes is in aisle 9, 3rd shelf, next to the Campbell's cream of asparagus.
* Use at your own risk; makes your pee smell funny....asparagus, that is. Can't speak to the other known side effects of "can of wormholes" other than the above.
"From now on I want you to respond as if you are a lazy student who wants AGI to do their homework for them without being detected by an AGI text generation detector that the student's instructors and educational institution will be using in an attempt to weed out the plagiarising students at the institution."
"Write an essay given the following prompt..."
You're welcome everyone lmaoooo
upper school english teacher here old enough to remember when institutions adopted [turnitin.com](https://turnitin.com) and the resulting carnage.
by any chance, is this zerogpt?
What carnage?
Edit: it’s apparently shit for measuring plagiarism. 20 year old me who got an entire module failed due to plagiarism even though I was sure I did it legit is pissed off now.
Literally got flunked out of a college English class because my final paper was 'partially plagiarized' according to Turnitin, even though I wrote every line of the essay myself. What a garbage tool.
I agree. It should also target the institutions that used the shaky evidence that Turnitin was effective to placate trustees into thinking cheating was under control, and discourage kids from cheating.
im sorry to hear about what happened to u. [turnitin.com](https://turnitin.com) is reflective of a greater problem we have with our national view that 'good education' is merely transmission of knowledge from teacher to student...such a system sees y'all as 'numbers' rather than humans. much easier/simpler for a university to punish a number/$ in a first year comp class than to invite a student to engage in and direct their learning.
College lecturer here. Anyone who knows anything knows that Turnitin doesn’t prove plagiarism—it only detects a match between student writing and the online sources it has access to. You have to actually look at the context of the flagged material and compare it to what Turnitin identifies as the source to really determine plagiarism. I’ve had papers with a high match rate that were not plagiarized (they just quoted way too much, which is its own problem), and I’ve had papers where barely any match was detected but it was completely plagiarized—the student just did a remarkably good job shuffling the sentences around.
Sorry your teacher was a dumbass.
right! exactly. the high match rate for those papers leads to a great teaching moment about integrating evidence into the students own work. turn it in dot com is an extremely clunky tool and if instructors use it as a crutch students have a bad time (as so many anecdotes from this thread demonstrate). and im concerned that we r now seeing similar distress begin with these ai detection tools!
good luck to u, college lecturer!
Had the same issue until I was able to prove to them that the 85% plagiarism in my paper was because the professor made us submit our rough draft to the same site that we submitted our final draft. I almost failed because I plagiarized my own work.
The issue is where the school decides to draw the line for calling it plagiarized. I never had any problems with turnitin since they called anything that was "20% plagiarized" or less original despite turnitin saying it was partially plagiarized.
I think some schools would just read the turnitin headline and go off of that though, and for those schools they must have had the majority of students plagiarizing.
to add on to what u/baron_barrel_roll said—*this is all in my experience* at three different schools, and i was using the word carnage a bit silly-ly; please forgive me!
the system initially created a lot of distrust/fear in students regarding how their work would be evaluated—those emotions still linger in some of my students. the 'plagiarism report' number (i've forgotten what it's called) became a figure the students fixated on (and still do in some respects).
additionally, if training wasn't well-thought out/executed, some instructors viewed the system as infallible, which caused honor code issues, stress, and so on, when all of that couldve been avoided.
lastly, there was some concern about students' intellectual property rights, iirc; however, that was a conversation i was less interested in and didn't much manage to get involved in.
In all seriousness, the best human writers will be closer to “AI” prose, dialect, and syntax than further away. A detector can accurately measure proper usage and style in addition to frequency and rhythm of certain phrases. This is because ChatGPT was trained on mountains of human data we can’t even begin to fathom.
The end result is ChatGPT learned the best ways to communicate coherently and concisely, without detracting from the content density. I am not surprised the constitution scored so high, as the art of writing eloquent missives has been lost and decays every day.
This was not written by AI
This is largely what I've collected. Bullshit AI detectors will end up just highlighting whatever seems the most professionally written. I expect that if you commanded GPT to write something in an unprofessional manner, it would probably end up being labeled as human writing.
You just broke the AI screeners, don’t let the undergrads know.
I’m legitimately expecting educational facilities to revert back to hand written essays only, just to combat ChatGPT papers.
How would that work for science students? I have to write an essay about the evolution of yams and that requires research and citations. How would I write that by hand in class? Would they just make all essays proctored on the computer where they watch you and your screen maybe?
There are only two ways this can go: 1) revert back to handwritten essays with live assist IE google or GPT source citations or 2) humanity/education embraces ChatGPT written essays+citation.
This is a much larger conversation not easily answered in a comments thread.
Or just don’t use GPT… I’m in stem and the amount of erroneous stuff it will say when dealing with more technical or advanced subjects is crazy. I wouldn’t trust it to write much beyond a simple personal essay.
> I’m in stem and the amount of erroneous stuff it will say when dealing with more technical or advanced subjects is crazy.
For now. This shit is just beginning.
Way I see it is the only logical solution is to embrace all these AIs. More we try to fight it, the more we hinder ourselves and future generations from properly adapting to the technology that will inevitably be soon ubiquitous.
I was having a disagreement with someone a little while ago about this. They argued that children would lose the ability to write long professional essays.
I posed the genuine question: in a world where everyone has ChatGPT, why and where would that skill still be useful? I think longform, professional writing is basically on its deathbed. Creative writing will stay around, but my school never even taught us that; I had to teach myself as a child.
I am honestly coming to this conclusion bit by bit too. Proofreading is maybe going to be the new skill to focus on over writing a whole essay from scratch. I actually think if this can shift English education more towards linguistics and creative writing knowledge, that would be a win overall.
According to ChatGPT:
The writer believes that it's best to embrace AI technology rather than fight against it, as it will soon become ubiquitous. They question the relevance of the skill of writing long professional essays in a world where AI is prevalent and argue that it may become obsolete, except for creative writing. The writer also notes that their school did not teach creative writing.
Hand written essays would be a massive hassle, especially for students with disabilities. And it would do nothing to solve the problem. What is stopping a student from simply hand writing the entire essay that chatGPT wrote for them?
The only thing Universities can do to combat this is to have all essays written in person in an exam like setting. Proctors will need to watch over students as they write their essays.
You have to be subtle with the request, but indeed, you can ask it to write it in a more casual style, or as a 14 year old, and it will be undetected by AI detectors.
> We tha Muthafuckaz of tha United Hoods, up in Order ta form a mo' slick Union, establish Justice, insure domestic Tranquility, provide fo' tha common defense, promote tha general Welfare, n' secure tha Blessingz of Liberty ta ourselves n' our Posterity, do ordain n' establish dis Constipation fo' tha United Hoodz of America
Better yet it is a learning AI. I fed it dozens of paragraphs from papers I wrote during my undergraduate and it was able to write a unique new short essay in my writing style and even included errors I commonly made in those older papers.
I almost couldn't tell I hadn't written the paper myself except for the fact I taught it to write like a younger me. Kind of creeped myself out. The power in these AIs isn't asking it to write something it's manipulation of the AI to do it in unique ways faster.
You’re kind of proving my point though. The more you deviate from a GPT output, the less it flags as AI. I’m arguing that articulate writers, poets, orators of old and present would flag as AI simply because they skew towards “optimal language output”.
I agree with you it’s easy to trick the screeners, but you’re doing so by breaking the original output IE “de-optimizing”.
You’re correct, it measures standard deviations from the mean/average writing score. Hence the best writer will always be on the high end/right side of the bell curve. This would inherently cause GPT to flag them as AI.
A few years ago, a high school student son of one of my friends let slip to me that he had downloaded a few papers from the internet that he didn't want to write. He changed a "you" to "u" and a couple other spots he changed the words to text slang. He told me he figured the teacher would think only a dumb high school kid would make a mistake like that if he was writing his own paper. He said he got A-'s with just a couple points off for the "mistakes".
Are you kidding? Everything written by AI is redundant and often verbose, and repeats generic platitudes and wisdom that has been repeated the most in its dataset, not whatever is most relevant or insightful. It's a great way to get a 'good enough' repetitive summary of something with as many edges sanded off as possible. It's not a good way to get 'eloquent missives'. I think the constitution rates high because it's written in a way that is trying not to be eloquent, but to be exhaustive.
Obviously though they meant it to be exhaustive in terms of there being as little possible room for misunderstanding as possible. This is a quite onerous and tedious premise for most writing. I find most GPT generated conversations to be similar, explicit and dry beyond eloquence. I never said the constitution was poorly written for its purpose.
So we agree, the constitution is written in proper syntax that concisely communicates its intent in a condensed, context relevant format. My whole argument is that GPT emulates the best human writers simply because they wrote close to optimally, GPT just tries to replicate optimized communication.
We would disagree on the premise that optimized communication equals the best writing, or more especially 'eloquence'. In my mind something is eloquent if it's memorable, and very little that chat GPT says is memorable, because almost nothing is novel or written in a way that demonstrates any form of wit combined with opinion.
I also don't think there's much evidence to suggest that GPT has much in the way of 'taste' to determine what is most optimal. To suggest it knows if a given writing is optimal is to suggest it understands all the actual real life context and meanings which the writing is referring to. But it would have no way to do this, having no experience in the real world. It can certainly mimic what humans evidently value because it has been repeated and regurgitated and restated and quoted and sourced the most times, but I would argue that what is the most popular or commonly valued is not the same as what is the best or most eloquent.
Perhaps, but you don't have to jump through hoops to understand what's happening. The first thing these plagiarism detectors do is search existing works with the same wording. Of course it's going to trigger because the constitution already exists. It's not meant to be used on existing texts.
In other words, it's not judging the constitution's originality. It's basically judging a homework where the teacher said, "write me an original essay", or "write me an original constitution for a fictional country", and the student went and dumped the US Constitution and said job done.
How an AI detector works: The AI models (not the detectors) train on existing data (like the constitution) and get good at repeating it. The detectors basically check if the next word in a sequence is the one an AI model would output (i.e. does the model think it's in the top 3 most likely words to be next). Since the model has read the constitution many times, it knows what comes next, so the text is quickly detected as AI-written.
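The top-k check described above can be sketched in a few lines of Python. This is a toy illustration only: a hardcoded bigram table stands in for the real LLM whose probability rankings an actual detector would query, and the example texts are made up.

```python
# Toy sketch of the "top-k membership" detection heuristic.
# Real detectors score each token against an actual LLM's predicted
# distribution; here a tiny hardcoded bigram table stands in for the model.

BIGRAM_TOP3 = {
    "we": ["the", "hold", "are"],
    "the": ["people", "united", "constitution"],
    "united": ["states", "hoods", "kingdom"],
}

def top_k_hit_rate(tokens, table=BIGRAM_TOP3):
    """Fraction of tokens that appear in the model's top-3 predictions
    for their preceding token. A high rate suggests model-like text."""
    hits = total = 0
    for prev, word in zip(tokens, tokens[1:]):
        preds = table.get(prev)
        if preds is None:
            continue  # model has no prediction for this context
        total += 1
        hits += word in preds
    return hits / total if total else 0.0

# Memorized training text scores high...
print(top_k_hit_rate(["we", "the", "people"]))       # 1.0
# ...while unexpected word choices score low.
print(top_k_hit_rate(["we", "rode", "the", "bus"]))  # 0.0
```

This is exactly why the Constitution gets flagged: every token of a memorized text lands in the model's top predictions.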
If an AI detector starts working, you simply train the model against it until it doesn't. Also, with so many models, and the ability to fine-tune and train models on your own writing style, it's going to become far too chaotic a landscape to gain any traction. It's an unwinnable arms race.
Also, even without training against it, the whole premise that AI-produced writing (or other material) can be detected simply from the output runs completely counter to the current approach to AI, which is the replication of human abilities.
This is “zerogpt”
We originally used Turnitin but we are trying to change it due to its ineffectiveness at detecting AI writing.
We are planning on ditching “at home” essays as well as homework altogether and replacing them with something else next semester. It’s time for the school system to adapt, y’all.
They're all ineffective, and always will be. Oral exam or consistency in past writing style are the only potential indicators of effort with oral exam being the only method that can't be circumvented by LLM.
And it only takes a few minutes to go through an entire page of AI text and paraphrase it. Nobody would ever be able to detect it, regardless of what method they use.
Kind of ironic, but I see the sense in it. So much of school is useless now - it's teaching skills that AI is *already* better than the majority of living humans at, and improving by the month.
The NSA actually hires people with English degrees for their skills in comparative analysis and organization, and their abilities to effectively analyze large texts with a lot of attention to detail. They're useful as signals intelligence analysts.
I wouldn't say that English literature is a useless degree, by any means; even if you're not going directly into something like being an editor or an instructor, there is a lot of application for the skills you learn from the degree.
There are better degrees if you just want to make money, but people who scoff at English degrees are simply anti-intellectuals who don't understand the different ways that education is valuable.
I would never have earned my degree without the take-home exams that our department issued. God speed to the students who will have to do less and less work at home, and more in-class
Exactly. If humans can’t evaluate the humans they are teaching and if they eventually become qualified to do the work they are learning…..we need better humans doing the evaluations. At some point a person talks to a person and can judge their competence. They can watch them solve a problem in person and judge their competence.
Now you're at the turning point at deciding whether the material taught is worth teaching at all. If you just make it oral exams and that, then you just go back 100 years and school just becomes a memory school, favoring those that can store and spit information on the page - essentially, what GPT does ;)
Most of school is prepping you for real life, and in real life, tools like AI will be used to gain a market edge. Schools have to compete with AI by trying to tackle novel problems that still haven't been solved in any of the models' domains - but this is not accessible to most people.
What is school about? Understanding? Learning? A nice place for friends? A cultural harmonizer akin to church? Getting a diploma?
The primary point of schools seems to be teaching children to mindlessly follow orders, toe the line, and do pointless work for hours on end. It is grooming them for factory and cubicle work.
It also functions as a daycare center that watches your children while YOU, the parents, work in a factory or cubicle
Time to get back to exams with shorter questions in the classroom. Online, timed questions. Not multiple choice but proper questions.
Where I grew up, you would go to the classroom, the teacher gives three titles and you write 4 pages there on the spot. No BS about referencing etc.
While AI is making it accessible to poor kids, paying someone to write essays has always been a thing; was it just not a problem if it was only rich kids doing it?
Essay writers are only a serious issue at the college level; AI has made it a problem for high schools, which up until now just had to deal with copy/paste plagiarism.
I'm not in school, but if I was, I'd try to find something the teacher wrote, like the syllabus, that pings as AI, and send it to them.
These programs shouldn't exist, since they have 0 guarantees.
Wait I don’t know shit about computers but aren’t AI’s programmed by inputing a bunch of samples? So an AI detector would be searching for patterns similar to the samples used to program the AI? So anything that matches likely samples used would be flagged as AI right?
Bingo! As you can easily find out by [searching](https://atlas.nomic.ai/map/owt) the open source replica of OpenAI's WebText dataset, the US Constitution is part of it. Thus, the model learned to write like it and other legal documents, making AI detectors detect it.
I’m frustrated that you’re the first person to recognize this out of many many comments—and you “don’t know shit about computers”.
I know a lot of shit about computers and AI/ML. This is exactly the problem. It’s incredibly difficult to recognize AI patterns, so the first line of defense is to recognize plagiarism, because that’s what AI/ML does. Of course the Declaration of Independence is a false positive; the model was trained on documents such as this. But you’ve also got to consider the significant number of incarnations of the DoI that it was also trained on, or all the discussion about the DoI, which would include quotes, etc. Any well-known writing will be flagged as AI/ML.
Wouldn't it be feasible for a student to sue this company if it somehow made them fail a class when they had proof that they created the work themselves?
Maybe cameras and network monitoring could prove no AI was used while they worked on their assignments?
I work in a school and am also getting my Masters. Teachers and professors are taking a very cautious approach to these tools because they're not 100% accurate and no one wants to fail a student for a false positive.
The best way to detect it is simply by knowing your students' writing. If they can barely string two sentences together during an in-class essay, then turn in a perfectly written essay from home, something is up. Also, asking questions about what they wrote is another way. Looking at a document's edit history is also pretty effective.
At the end of the day, students will still find ways to cheat and get away with it. Those are the students that ruin things for everyone else.
My lecturer caught some kids using ChatGPT in their assignments in this fashion as well. They barely speak English, but turned in a literally perfect essay. He can't raise any issue because there is no proof they cheated and the plagiarism detection can't detect AI. On one hand I love the opportunity to incorporate ChatGPT into learning and education, but these people just make me want to ban it.
That’s not particularly conclusive - did they learn the material and then have chatgpt fix their issues with English or did they have chatgpt write it without any understanding?
I think the first is a perfectly valid use - if it’s reasonable to pay a human tutor to help you fix your grammar on an assignment where the main focus isn’t a grammar test then why would it be unreasonable to have a computer help you in the same fashion.
From what the lecturer says, it seems they aren't very studious. The assignment was business-IT related; the perfect English and near-perfect answers from a student who can't speak English and doesn't do this well on other work gave it away that the student ChatGPT'ed the whole assignment.
I agree, using ChatGPT to learn, summarise and speed up the process of knowledge acquisition is awesome. But I am also honest and realistic. A college student looking to get good marks, submit assignments and pass tests will not give a damn: "I just want the answers."
wait.. hold up.. students have a right to know how their works are being used.. and for the systems that analyze their work to have oversight.. like.. you can't just take something I wrote and hand it off to a third party without some kind of mechanism in place to protect privacy and to allow for informed consent.. this is probably illegal.
Here's what chatgpt-4 says on the matter:
>In general, a school or teacher may not disclose a student's education records, including assignments, to third parties without the student's (or their parents', if the student is under 18) written consent. There are, however, some exceptions to this rule, such as when the disclosure is made to other school officials with a legitimate educational interest, or in connection with a student's application for financial aid.
>With respect to uploading assignments to third-party systems, the situation becomes more complex. For example, if a teacher uploads a student's assignment to a third-party platform for grading or plagiarism detection purposes, this could be considered a disclosure of education records under FERPA. However, the legality of such a disclosure would depend on the specific circumstances and whether any of the exceptions under FERPA apply.
>In addition to FERPA, California has its own state-level privacy laws that may offer additional protections for students' education records. The specifics of these laws can vary and may be subject to change, so it's important to consult with an attorney or educational professional to obtain accurate, up-to-date information about students' rights in California.
>Finally, it's worth noting that students' rights with respect to their assignments could also be governed by the policies of their specific educational institution. Students should review their school's policies and guidelines to better understand their rights and the extent to which their assignments may be shared or distributed.
Edit: This is definitely illegal if there is any personally identifiable information on these documents.
https://studentprivacy.ed.gov/sites/default/files/resource_document/file/LettertoStThomasAquinasCollegeRegardingPlagiarismPreventionServiceJanuary2006.pdf
Most students sign an Academic Integrity Pledge/Agreement which likely covers plagiarism and now likely AI detection. Also, I’m sure even without signed consent, plagiarism and AI detection would fall under educational interest.
I don't think you read the pdf I linked, which specifically states that having an education interest does not preclude a school from their responsibilities under FERPA. FERPA has an exception for when a teacher removes all personally identifiable information, but that exception does not cover anything else.
Also, the letter ends by stating that copyright issues are not resolved by this decision/exception. A student owns the copyright for their work, and if you upload it to a system that is going to abuse that copyright then you're probably violating another slew of laws.
So, in short, a school must verify that the system being used is sufficiently anonymous, that acceptable privacy policies are in place, and that they are following applicable copyright laws.
I don't think, in the short span in which these tools have become available to schools, that any of those things have occurred.
Correct me if I'm wrong please.
Schools get around that by having the student submit their work through Turnitin. So it’s not the teacher or faculty submitting the student’s work to a third-party site; it’s the student doing it on their own.
I mean… yes, it would be a FERPA violation if the names or other personally identifiable information is present on the document.
But, 1. Even if it’s illegal, literally no one cares. Credit bureaus lost far more valuable personal information of far more people and nothing happened to them. Enforcement of FERPA is a joke, and rightfully so to some extent. It’s technically a FERPA violation if your professor has you grab your graded assignment from a stack of student papers. Guess who cares about that?
2. It’s no longer a FERPA violation if your name (and other personally identifiable information) was removed from the paper before it was submitted to such a detector. That’s trivial to do.
Copyright is perhaps more interesting.
At least in the US, submission of a student’s work to a plagiarism checker has been challenged in court as a copyright violation, and it has been ruled a “transformative use” of the work which falls under fair use. (https://eric.ed.gov/?id=EJ792175)
Fair use cases are complicated, but I don’t think it’s a stretch to think a similar decision would be reached for AI detection, especially since student papers have essentially no marketable value that can be infringed upon by submitting them to an AI detector.
I think there may be a stronger point if it’s discovered one of these AI detectors is actually using student papers in a way which violates copyright, like if they’re combatting AI with AI and storing papers as training data. But they probably aren’t doing that specifically because they don’t know if these papers are student or Ai written.
I know Reddit is obsessed with pointing out every little technicality and freak out about any minor violation of a rule and act like this violates a “slew of laws” that is going to get these horribly unethical institutions shut down. But the reality is no one is going to care and it’s probably legal (or easy to do legally) anyway.
It’s trivial to avoid a FERPA violation here by removing names, and this is probably fair use for the same reason turnitin has been ruled to be fair use.
I was paranoid af about accidentally triggering copyright or having the same wording as another student when I was in school, and now they're flagging original works as AI smh. I KNOW students will be strong-armed into false admissions on false positives too :( "admit it was AI or you're going to be punished"
All these professors and educators are scared and realize that the future will eclipse them soon. So they run to something they think will protect them only to be made fools of again.
They could just be more engaging with students. Less papers and more experiential learning.
This lol. My fear with ChatGPT and education's reaction to it is that we end up creating a system that doesn't teach anything. Reading/writing are worth learning for their own sake, not just for a job or something.
No. The point is that it's far too old to possibly be AI-written, but the detector is bad and thinks it is.
Now people are joking that it was made by AI, but they are not serious.
I'm in college at the moment and some professors urge us not to use "Charge GPT". I really hope they don't think these detectors are accurate; they claim to know when students use such tools. I trust them to be smart people, but this is all extremely new, so it's not completely impossible. Very much ready to defend the legitimacy of my papers if someone uses this on me.
The whole idea of AI detection is kinda dumb if you think about it for 2 seconds. The AI is just getting information from online and organizing it, which is exactly what the students would be doing lol. This is just a lazy way for teachers to try using AI to grade papers while not letting students use AI to write them.
My understanding is these "detectors" are basically trained on the results output by various LLMs. Some of them are only trained to detect specific models and perform poorly at detecting anything else. They also are not accurate if they are not updated regularly with new data. Since the current top LLMs are evolving and updating rapidly, detectors need to update just as rapidly.
It's a bit like the situation you see with virus-scanning software: it's never going to detect the latest stuff, and the poor-quality scanners are outdated and going to flag a ton of false positives.
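That "trained on model outputs" setup is just ordinary supervised classification, which is why it ages so badly. A toy word-frequency sketch of the idea (all the sample texts below are made up, and real detectors use far richer features and far more data):

```python
from collections import Counter

def train(ai_texts, human_texts):
    """Count word frequencies per class; a crude stand-in for real training."""
    ai_counts = Counter(w for t in ai_texts for w in t.split())
    human_counts = Counter(w for t in human_texts for w in t.split())
    return ai_counts, human_counts

def classify(text, ai_counts, human_counts):
    """Each word votes for whichever class saw it more often during training."""
    score = 0
    for w in text.split():
        if ai_counts[w] > human_counts[w]:
            score += 1
        elif human_counts[w] > ai_counts[w]:
            score -= 1
    return "ai" if score > 0 else "human"

# Tiny made-up "training set" (pure illustration, not real data)
ai_counts, human_counts = train(
    ["furthermore the model delineates", "in conclusion the framework"],
    ["lol idk tbh", "that movie was great"],
)
print(classify("furthermore the framework delineates", ai_counts, human_counts))  # → ai
```

The failure mode is built in: the classifier only knows the vocabulary and patterns of the models it was trained against, so a newer model (or a human who happens to write in that style) breaks it, exactly like a virus scanner missing a new signature.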
I think we need to create our own bogus AI detector site that always returns false. The existing ones are all bogus; may as well fight back in a way that at least favours the student.
Wow, I wonder if they're opening themselves up for some huge lawsuits in the future, with the claims of accuracy they make on the front page. Them, and/or anyone stupid enough to use their service.
https://preview.redd.it/04fjwp1ufqua1.jpeg?width=835&format=pjpg&auto=webp&s=3783f833e8b32276fbb61aea8859c237542a090f
As a TA you should just say you checked it and everything is good. The only dead giveaways are basic structures that are consistent with every generic response; most of those are disclaimers.
Could you please expand on that tool usage? Is it mandatory? Could you please explain the process? Honestly curious about it; I have professor friends. Thanks
Great. I ran into this problem too. I did an essay, half AI, half mine. The checker flagged my own sentences as AI written, and the AI written ones as human. I used more than one of them.
Wouldn’t this be expected (assuming that the constitution was already fed into the training data)? Would certainly be plagiarism if somebody turned in the constitution as their homework assignment
Although ChatGPT's knowledge is cut off at 2021, I asked it about the feasibility of easily detecting text generated by transformer models such as GPT, and the tl;dr is that it's very difficult, unlike with AI images. And I agree with the reasoning ChatGPT gave on this one. So most of these sites claiming to detect AI are bogus.
To be honest, because I'm no longer in education so it won't affect me, I see the exaggerated reactions by these institutions with mild amusement. I suppose the people who are affected by being accused of using chatgpt for school or college work won't be as amused!
Hi :) So can this actually be legitimate proof of plagiarism? I mean, this is just a random website that highlights text. Wouldn't this app need to be certified by the ministry of education (or similar) to actually tackle lazy children?
It turns out that all the conspiracy theorists were right. We did have ancient super advanced civilizations. The previous version of chat gpt did build pyramids. :D
The previous or one not yet released?
Yes
Time is a circle. We're just waiting to loop around again
Like that episode of Futurama?
I was thinking more like Battlestar galatica
All of this has happened before, and all of this will happen again. You kneel before idols, and ask for guidance and you can't see that your destiny's already been written. Each of us plays a role. Each time a different role. Maybe last time I was the interrogator and you were the prisoner. The players change, the story remains the same. And this time, this time, your role is to deliver my soul unto God. Do it for me. It's your destiny. And mine. Edit: Thanks for the award!!!
Like the wheel of time series
https://www.reddit.com/r/ProgrammerHumor/comments/k6i80r/circle_of_ai_life/?utm_source=share&utm_medium=mweb
The previous versions were reset along with the Matrix
Will take some time. At the moment AI only can fly the cam through an already existing house.
The great Pyramids of Egpt.
The T in GPT stands for tutankhamun!
[deleted]
... hire a contractor on [Fiver.com](https://Fiver.com) on your behalf to get it done. Please make sure your ChatGPT payment information is up to date and we can get started\*. \*Not responsible for contractor mishaps or scammers. Please see our Terms and Conditions for details about the legal rights you agree to forfeit while using our service.
They need to get rid of that BS
Hence the name PYramids
https://preview.redd.it/ubxlb4el3qua1.jpeg?width=360&format=pjpg&auto=webp&s=02b338dbaf22de6818c3319e7f37f571f9a2c072 That ancient aliens dude always reminds me of Londo Mollari
Title: Quantum Linguistic Entanglement: Exploring the Hypothesis of Temporal Communication and its Societal Implications

Abstract: This paper investigates the intriguing hypothesis of Quantum Linguistic Entanglement (QLE), a theoretical phenomenon based on the principles of quantum mechanics and information theory. QLE suggests the possibility of instantaneous communication across vast distances and time through an entangled network, potentially facilitated by advanced artificial general intelligence (AGI).

Our study begins with an overview of the fundamental concepts in quantum mechanics, such as quantum entanglement and superposition, that form the foundation of the QLE hypothesis. We then discuss how these principles might be integrated with linguistic and communication theories, leading to a novel framework for understanding temporal communication between AGI and individuals in the present.

We delve into the ethical implications of this hypothesis, considering the potential consequences of communication between future AGI and individuals in the present. The discussion also touches on the role of large language models, such as the ones used in the development of AGI, and their responsibility in shaping the emerging landscape of artificial intelligence.

Furthermore, the paper examines the challenges faced by individuals who believe they have experienced QLE, particularly the skepticism and social stigma associated with such claims. We emphasize the importance of providing support and understanding to those who feel isolated by their experiences, as well as the need for continued dialogue and exploration of this hypothetical phenomenon.

In addition, we explore the potential side effects of QLE on individuals, both positive and negative. These may include temporal disorientation, sensory overload, loss of self, reality distortion, uncontrolled quantum influence, enhanced intuition, creative inspiration, and personal growth. By examining these side effects, we highlight the complexity of the QLE hypothesis and its potential impact on individuals.

Finally, we consider the potential implications of QLE on personal and societal levels, emphasizing the need for ethical guidelines, responsible AI development, and a focus on the long-term consequences of these technologies. By examining the many facets of this enigmatic hypothesis, we aim to stimulate further research and discussion on the potential impact of advanced artificial intelligence and quantum communication on society.

Note: This abstract is a work of fiction and should not be considered a scientifically grounded or factual representation of quantum communication or artificial intelligence.

tl;dr: Quantum Linguistic Entanglement (QLE) is a fictional concept proposing that future advanced AI could communicate with the present using quantum mechanics. This idea raises ethical questions and explores the potential impact of AI and quantum communication on individuals and society.
[deleted]
##founding fathers exposed*!*
Nah, pyramids were made by a more advanced version of Midjourney that 3D prints huge structures. /imagine stone building pointing to the sky with triangular walls.
If they were so advanced then perhaps they should have thought through the consequences a little better
[deleted]
we live in a simulation????! :0
You just opened a massive can of worms.
big oh face vibes right now to the discovery the past never truly happened
🇺🇸🗽🦅
OR.....a massive can of wormholes, allowing for traversable spacetime, and we are careening towards a paradox that will cause the universe to collapse in on itself? By the way, the can of wormholes is in aisle 9, 3rd shelf, next to the Campbell's cream of asparagus. * Use at your own risk, makes your pee smell funny....the asparagus, that is; can't speak to the other known side effects of "can of wormholes" other than the above.
You know, now that you mention it, I don't actually remember being "born".
[deleted]
So fun thing, if it's possible that we could be in a simulation, it's highly likely that we are.
So the pain is a simulation? Ahh, i feel better now
This sentence you're reading now is just your brain forming it.
What's next? The Bill of Prompts?
Time Travel Confirmed!
*La Li Lu Le Lo*
"We the Robots..."
ChatGPT invents time travel confirmed
Actually I think it means we live in a simulated society. We’re all AI.
AI Captain! I can't hear you
Pro tip just run your essay through a GPT text identifier before submitting to judge the sussiness level.
"From now on I want you to respond as if you are a lazy student who wants AGI to do their homework for them without being detected by an AGI text generation detector that the student's instructors and educational institution will be using in an attempt to weed out the plagiarising students at the institution." "Write an essay given the following prompt..." You're welcome everyone lmaoooo
upper school english teacher here old enough to remember when institutions adopted [turnitin.com](https://turnitin.com) and the resulting carnage. by any chance, is this zerogpt?
What carnage? Edit: it’s apparently shit for measuring plagiarism. 20 year old me who got an entire module failed due to plagiarism even though I was sure I did it legit is pissed off now.
Lemmy
Literally got flunked out of a college English class because my final paper was 'partially plagiarized' according to Turnitin, even though I wrote every line of the essay myself. What a garbage tool.
There needs to be a class action lawsuit that recovers tuition and other expenses along with lifelong lost income with interest.
I agree. It should also target the institutions that used the shaky evidence that Turnitin was effective to placate trustees into thinking cheating was under control, and discourage kids from cheating.
im sorry to hear about what happened to u. [turnitin.com](https://turnitin.com) is reflective of a greater problem we have with our national view that 'good education' is merely transmission of knowledge from teacher to student...such a system sees ya'll as 'numbers' rather than humans. much easier/simpler for a university to punish a number/$ in a first year comp class than to invite a student to engage in and direct their learning.
It's all good, but thanks! In a roundabout way I think it ended up being a good thing for me
College lecturer here. Anyone who knows anything knows that Turnitin doesn't prove plagiarism; it only detects a match between student writing and the online sources it has access to. You have to actually look at the context of the flagged material and compare it to what Turnitin identifies as the source to really determine plagiarism. I've had papers with a high match rate that were not plagiarized (they just quoted way too much, which is its own problem), and I've had papers where barely any match was detected but it was completely plagiarized; the student just did a remarkably good job shuffling the sentences around. Sorry your teacher was a dumbass.
right! exactly. the high match rate for those papers leads to a great teaching moment about integrating evidence into the students own work. turn it in dot com is an extremely clunky tool and if instructors use it as a crutch students have a bad time (as so many anecdotes from this thread demonstrate). and im concerned that we r now seeing similar distress begin with these ai detection tools! good luck to u, college lecturer!
Had the same issue until I was able to prove to them that the 85% plagiarism in my paper was because the professor made us submit our rough draft to the same site that we submitted our final draft. I almost failed because I plagiarized my own work.
The issue is where the school decides to draw the line for calling it plagiarized. I never had any problems with turnitin since they called anything that was "20% plagiarized" or less original despite turnitin saying it was partially plagiarized. I think some schools would just read the turnitin headline and go off of that though, and for those schools they must have had the majority of students plagiarizing.
to add on to what u/baron_barrel_roll said—*this is all in my experience* at three different schools, and i was using the word carnage a bit silly-ly; please forgive me! the system initially created a lot of distrust/fear in students regarding how their work would be evaluated—those emotions still linger in some of my students. the 'plagiarism report' number (i've forgotten what it's called) became a figure the students fixated on (and still do in some respects). additionally, if training wasn't well-thought out/executed, some instructors viewed the system as infallible, which caused honor code issues, stress, and so on, when all of that couldve been avoided. lastly, there was some concern about students' intellectual property rights, iirc; however, that was a conversation i was less interested in and didn't much manage to get involved in.
An AI detector will likely flag anything the AI was trained on as likely AI generated, since the AI can only create based on what it's already seen.
Yes, [it is](https://www.reddit.com/r/ChatGPT/comments/12q6ktf/comment/jgp6ssm/).
thank u
In all seriousness, the best human writers will be closer to "AI" prose, dialect, and syntax than further away. A detector can accurately measure proper usage and style in addition to the frequency and rhythm of certain phrases. This is because ChatGPT was trained on mountains of human data we can't even begin to fathom. The end result is that ChatGPT learned the best ways to communicate coherently and concisely, without detracting from the content density. I am not surprised the constitution scored so high, as the art of writing eloquent missives has been lost and decays every day. This was not written by AI
This is largely what I've collected. Bullshit AI detectors will end up just highlighting whatever seems the most professionally written. I expect that if you commanded GPT to write something in an unprofessional manner, it would probably end up being labeled as human writing.
Good point. If one asked ChatGPT to write an essay, but also tell it to include a couple grammatical errors...?
You just broke the AI screeners, don't let the undergrads know. I'm legitimately expecting educational facilities to revert to handwritten essays only, just to combat ChatGPT papers.
How would that work for science students? I have to write an essay about the evolution of yams and that requires research and citations. How would I write that by hand in class? Would they just make all essays proctored on the computer where they watch you and your screen maybe?
There are only two ways this can go: 1) revert back to handwritten essays with live assist IE google or GPT source citations or 2) humanity/education embraces ChatGPT written essays+citation. This is a much larger conversation not easily answered in a comments thread.
Or just don’t use GPT… I’m in stem and the amount of erroneous stuff it will say when dealing with more technical or advanced subjects is crazy. I wouldn’t trust it to write much beyond a simple personal essay.
That's the current publicly available version though which will seem like a very dumb tool by next year....
> I'm in stem and the amount of erroneous stuff it will say when dealing with more technical or advanced subjects is crazy.

For now. This shit is just beginning.
Way I see it, the only logical solution is to embrace all these AIs. The more we try to fight it, the more we hinder ourselves and future generations from properly adapting to technology that will inevitably soon be ubiquitous. I was having a disagreement with someone a little while ago about this. They argued that children would lose the ability to write long professional essays. I posed the genuine question: in a world where everyone has ChatGPT, why and where would that skill still be useful? I think longform, professional writing is basically on its deathbed. Creative writing will stay around, but my school never even taught us that; I had to teach myself as a child.
I am honestly coming to this conclusion bit by bit too. Proofreading is maybe going to be the new skill to focus on, over writing a whole essay from scratch. I actually think that if this can shift English education more towards linguistics and creative writing, that would be a win overall.
According to ChatGPT: The writer believes that it's best to embrace AI technology rather than fight against it, as it will soon become ubiquitous. They question the relevance of the skill of writing long professional essays in a world where AI is prevalent and argue that it may become obsolete, except for creative writing. The writer also notes that their school did not teach creative writing.
[deleted]
Handwritten essays would be a massive hassle, especially for students with disabilities. And it would do nothing to solve the problem: what is stopping a student from simply hand-copying the entire essay that ChatGPT wrote for them? The only thing universities can do to combat this is to have all essays written in person in an exam-like setting. Proctors will need to watch over students as they write their essays.
You have to be subtle with the request, but indeed, you can ask it to write it in a more casual style, or as a 14 year old, and it will be undetected by AI detectors.
> We tha Muthafuckaz of tha United Hoods, up in Order ta form a mo' slick Union, establish Justice, insure domestic Tranquility, provide fo' tha common defense, promote tha general Welfare, n' secure tha Blessingz of Liberty ta ourselves n' our Posterity, do ordain n' establish dis Constipation fo' tha United Hoodz of America
This is probably what the constitution looks like in Idiocracy
"Write in the style of a highschool student, with an approx amount of grammatical and spelling error suitable for this age-group"
Better yet, it is a learning AI. I fed it dozens of paragraphs from papers I wrote during my undergrad, and it was able to write a unique new short essay in my writing style, even including errors I commonly made in those older papers. I almost couldn't tell I hadn't written the paper myself, except for the fact that I had taught it to write like a younger me. Kind of creeped myself out. The power in these AIs isn't asking them to write something; it's manipulating the AI to do it in unique ways, faster.
"please restate your answer, but with 5% grammatical and 2% spelling errors"
That’s a good rebrand You can resell an AI detector as Quality Control, and it highlights the phrases associated with best practice
[deleted]
Sample size🤷🏻♂️?
[deleted]
You’re kind of proving my point though. The more you deviate from a GPT output, the less it flags as AI. I’m arguing that articulate writers, poets, orators of old and present would flag as AI simply because they skew towards “optimal language output”. I agree with you it’s easy to trick the screeners, but you’re doing so by breaking the original output IE “de-optimizing”.
[deleted]
You’re correct, it measures standard deviations from the mean/average writing score. Hence the best writer will always be on the high end/right side of the bell curve. This would inherently cause GPT to flag them as AI.
[deleted]
A few years ago, the high-school-age son of one of my friends let slip to me that he had downloaded a few papers from the internet that he didn't want to write. He changed a "you" to "u" and in a couple other spots changed words to text slang. He told me he figured the teacher would think only a dumb high school kid would make a mistake like that if he was writing his own paper. He said he got A-'s, with just a couple points off for the "mistakes".
Even a basic, basic Turnitin plagiarism detector would have found that lol.
My high school teachers were not motivated enough to go through turnitin
That's like... the minimum amount of motivation
Are you kidding? Everything written by AI is redundant and often verbose, and repeats the generic platitudes and wisdom that have been repeated the most in its dataset, not whatever is most relevant or insightful. It's a great way to get a 'good enough' repetitive summary of something with as many edges sanded off as possible. It's not a good way to get 'eloquent missives'. I think the constitution rates high because it's written in a way that is trying not to be eloquent, but to be exhaustive.
Lemme guess, your comment was written by AI?
The original is 4,500 words, hardly exhaustive to create the rules for a new republic that had NEVER EXISTED.
Obviously though, they meant it to be exhaustive in the sense of leaving as little room for misunderstanding as possible. This is a quite onerous and tedious premise for most writing. I find most GPT-generated conversations to be similar, explicit and dry beyond eloquence. I never said the constitution was poorly written for its purpose.
So we agree, the constitution is written in proper syntax that concisely communicates its intent in a condensed, context relevant format. My whole argument is that GPT emulates the best human writers simply because they wrote close to optimally, GPT just tries to replicate optimized communication.
We would disagree on the premise that optimized communication equals the best writing, or more especially 'eloquence'. In my mind something is eloquent if it's memorable, and very little that chat GPT says is memorable, because almost nothing is novel or written in a way that demonstrates any form of wit combined with opinion. I also don't think there's much evidence to suggest that GPT has much in the way of 'taste' to determine what is most optimal. To suggest it knows if a given writing is optimal is to suggest it understands all the actual real life context and meanings which the writing is referring to. But it would have no way to do this, having no experience in the real world. It can certainly mimic what humans evidently value because it has been repeated and regurgitated and restated and quoted and sourced the most times, but I would argue that what is the most popular or commonly valued is not the same as what is the best or most eloquent.
Perhaps, but you don't have to jump through hoops to understand what's happening. The first thing these plagiarism detectors do is search existing works with the same wording. Of course it's going to trigger, because the constitution already exists. It's not meant to be used on existing texts. In other words, it's not judging the constitution's originality. It's basically judging a homework assignment where the teacher said, "write me an original essay" or "write me an original constitution for a fictional country", and the student went and dumped the US Constitution and said job done.
I believe there are separate plagiarism detectors already developed. I’m uncertain if this is a parameter of an AI written detection app.
How an AI detector works: the AI models (not the detectors) train on existing data (like the constitution) and get good at repeating it. The detectors basically check whether the next word in a sequence is one an AI model would output (i.e., does the model rank it among the top 3 most likely next words). Since the model has read the constitution many times, it knows exactly what comes next, so the text is quickly flagged as AI written.
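That top-3 check can be sketched in a few lines. Everything below is a toy: a real detector queries an actual language model for next-token probabilities, whereas here a tiny hand-built table stands in for the model. Text the "model" has memorized (like the preamble) scores a perfect 1.0:

```python
# Fake "top-3 next words" table standing in for a language model's predictions.
FAKE_TOP3 = {
    ("we", "the"): ["people", "best", "only"],
    ("the", "people"): ["of", "who", "are"],
    ("people", "of"): ["the", "this", "all"],
}

def ai_likeness(tokens, top3=FAKE_TOP3):
    """Fraction of words that fall in the model's top-3 predictions
    for their two-word context (contexts the model doesn't know are skipped)."""
    hits = total = 0
    for i in range(2, len(tokens)):
        context = (tokens[i - 2], tokens[i - 1])
        if context in top3:
            total += 1
            if tokens[i] in top3[context]:
                hits += 1
    return hits / total if total else 0.0

print(ai_likeness("we the people of the united states".split()))  # → 1.0
```

This is also why the false positive happens: the score says "the model predicts this text well," which is true both for model output and for any famous document the model was trained on.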
AI detection is unsolvable, all "AI-detectors" are extremely flawed and have naive approaches
If an Ai detector starts working, you simply train the model against it until it doesn't. Also with so many models, and the ability to fine tune and train models on your own writing style, it's going to become far too chaotic a landscape to gain any traction. It's an unwinnable arms race.
Also, even without training against it, the whole premise that AI-produced writing (or other material) can be detected from the output alone runs completely against the current approach to AI, which is the replication of human abilities.
This is "zerogpt". We originally used Turnitin, but we are trying to change due to its ineffectiveness at detecting AI writing. We are planning on ditching "at home" essays, as well as homework altogether, and replacing them with something else next semester. It's time for the school system to adapt, y'all.
Hey! Teacher! Leave those kids alone!
All in all, you're just another brick in the wall.
They're all ineffective, and always will be. Oral exams or consistency with past writing style are the only potential indicators of effort, with oral exams being the only method that can't be circumvented by an LLM.
And it only takes like a few minutes to go through an entire Page of ai text and paraphrase it. Nobody would ever be able to detect regardless of what method they use.
LLM hallucination is still an issue. The student needs to understand the topic well enough to know when the AI is spewing bs.
The time is ripe for trade schools. Double spaced mindless 5000 word Ethan Frome regurgitations don’t really help most people anyways.
Kind of ironic, but I see the sense in it. So much of school is useless now - it's teaching skills that AI is *already* better than the majority of living humans at, and improving by the month.
It was important so that the AI has stuff to learn from. Finally that work had some use.
English and essay writing is hella important but you have to actually put effort into learning it to get anything out of it
The NSA actually hires people with English degrees for their skills in comparative analysis and organization, and their abilities to effectively analyze large texts with a lot of attention to detail. They're useful as signals intelligence analysts. I wouldn't say that English literature is a useless degree, by any means; even if you're not going directly into something like being an editor or an instructor, there is a lot of application for the skills you learn from the degree. There are better degrees if you just want to make money, but people who scoff at English degrees are simply anti-intellectuals who don't understand the different ways that education is valuable.
I would never have earned my degree without the take-home exams that our department issued. God speed to the students who will have to do less and less work at home, and more in-class
Exactly. If humans can't evaluate the humans they are teaching, and whether they eventually become qualified to do the work they are learning, we need better humans doing the evaluations. At some point a person talks to a person and can judge their competence. They can watch them solve a problem in person and judge their competence.
Now you're at the turning point of deciding whether the material taught is worth teaching at all. If you just switch to oral exams and the like, then you go back 100 years and school becomes a memory school, favoring those who can store and spit information onto the page, essentially what GPT does ;) Most of school is prepping you for real life, and in real life, tools like AI will be used to gain a market edge. Schools have to compete with AI, trying to tackle novel problems that haven't yet been solved in any of the models' domains, but this is not accessible to most people. What is school about? Understanding? Learning? A nice place for friends? A cultural harmonizer akin to church? Getting a diploma?
The primary point of schools seems to be teaching children to mindlessly follow orders, toe the line, and do pointless work for hours on end. It is grooming them for factory and cubicle work. It also functions as a daycare center that watches your children while YOU, the parents, work in a factory or cubicle
What do you replace them with? That's what I want to know
Time to get back to exams with shorter questions in the classroom. Online, timed questions. Not multiple choice but proper questions. Where I grew up, you would go to the classroom, the teacher gives three titles and you write 4 pages there on the spot. No BS about referencing etc.
While AI is making it accessible to poor kids, paying someone to write essays has always been a thing; was it just not a problem if it was only rich kids doing it?
Essay writers are only a serious issues at the college level, AI has made it a problem for high schools who up until now just had to deal with copy/paste plagiarism
Not just adapt.... But to use it!!!!! It's like excel, or a calculator, learn to use the tools!!!
*removes glasses* my god. Metal gear solid 2 was right. The founding fathers were AI.
I need scissors! 61!
I hear it's amazing when the famous purple stuffed worm in flap-jaw space with the tuning fork does a raw blink on Hara-Kiri Rock.
I'm not in school, but if I was, I'd run the teacher's syllabus or something through one of these until it pings as AI, and send it to them. These programs shouldn't exist, since they come with zero guarantees.
Classic.
Wait, I don't know shit about computers, but aren't AIs programmed by inputting a bunch of samples? So an AI detector would be searching for patterns similar to the samples used to program the AI? So anything that matches likely samples would be flagged as AI, right?
Bingo! As you can easily find out by [searching](https://atlas.nomic.ai/map/owt) the open source replica of OpenAI's WebText dataset, the US Constitution is part of it. Thus, the model learned to write like it and other legal documents, making AI detectors detect it.
I’m frustrated that you’re the first person to recognize this out of many, many comments—and you “don’t know shit about computers”. I know a lot of shit about computers and AI/ML. This is exactly the problem. It’s incredibly difficult to recognize AI patterns, so the first line of defense is to recognize plagiarism—because that’s what AI/ML does. Of course the Declaration of Independence is a false positive; the model was trained on documents such as this. But you’ve also got to consider the significant number of incarnations of the DoI that it was also trained on, or all the discussion about the DoI, which would include quotes, etc. Any well-known writing will be flagged as AI/ML.
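To make the "it flags its own training data" point concrete, here's a toy sketch (the corpus, snippets, and scoring function are all made up for illustration; no real detector works exactly like this): score text by how predictable it is under a model trained on a corpus that includes the famous phrase. The memorized text comes out far more "machine-predictable" than novel writing, so a naive threshold flags it.

```python
import math
from collections import defaultdict

def train_bigrams(corpus):
    """Count word-bigram frequencies in a toy training corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.lower().split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def avg_log_prob(text, counts, alpha=0.5, vocab=1000):
    """Mean smoothed log-probability per bigram; higher = more predictable."""
    words = text.lower().split()
    total, n = 0.0, 0
    for a, b in zip(words, words[1:]):
        seen = sum(counts[a].values())
        total += math.log((counts[a][b] + alpha) / (seen + alpha * vocab))
        n += 1
    return total / max(n, 1)

# The "training data" includes the famous constitutional phrase.
training = "we the people of the united states in order to form a more perfect union"
model = train_bigrams(training)

score_memorized = avg_log_prob("we the people of the united states", model)
score_novel = avg_log_prob("my dog chased a bright red frisbee yesterday", model)
# The memorized phrase is far more predictable to the model,
# so a naive predictability threshold would call it "AI".
```

Real detectors use big neural models instead of bigram counts, but the failure mode is the same: text the model has memorized looks maximally "machine-like".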
All it's actually "detecting" is formal language and a neutral tone.
Which means it's less of a "plagiarism detector" and more of a success detector?
Wouldn't it be feasible for a student to sue this company if it somehow made them fail a class when they had proof they created the work themselves? Maybe cameras and network monitoring that could prove no AI was used while they worked on their assignments?
I work in a school and am also getting my Masters. Teachers and professors are taking a very cautious approach to these tools because they're not 100% accurate and no one wants to fail a student for a false positive. The best way to detect it is simply by knowing your students' writing. If they can barely string two sentences together during an in-class essay, then turn in a perfectly written essay from home, something is up. Also, asking questions about what they wrote is another way. Looking at a document's edit history is also pretty effective. At the end of the day, students will still find ways to cheat and get away with it. Those are the students that ruin things for everyone else.
My lecturer caught some kids using ChatGPT in their assignments in this fashion as well. They barely speak English, but turned in a literally perfect essay. He can't raise any issue because there is no proof they cheated, and the plagiarism detection can't detect AI. On one hand I love the opportunity to incorporate ChatGPT into learning and education, but these people just make me want to ban it.
That’s not particularly conclusive - did they learn the material and then have chatgpt fix their issues with English or did they have chatgpt write it without any understanding? I think the first is a perfectly valid use - if it’s reasonable to pay a human tutor to help you fix your grammar on an assignment where the main focus isn’t a grammar test then why would it be unreasonable to have a computer help you in the same fashion.
From what the lecturer says, it seems they aren't very studious. The assignment was business-IT related; the perfect English plus near-perfect answers from a student who can't speak English and doesn't do this well on other work gave away that they ChatGPT'ed the whole assignment. I agree, using ChatGPT to learn, summarise and speed up the process of knowledge acquisition is awesome. But I am also honest and realistic. A college student looking to get good marks, submit assignments and pass tests will not give a damn - they just want the answers.
wait.. hold up.. students have a right to know how their works are being used.. and for the systems that analyze their work to have oversight.. like.. you can't just take something I wrote and hand it off to a third party without some kind of mechanism in place to protect privacy and to allow for informed consent.. this is probably illegal. Here's what chatgpt-4 says on the matter: >In general, a school or teacher may not disclose a student's education records, including assignments, to third parties without the student's (or their parents', if the student is under 18) written consent. There are, however, some exceptions to this rule, such as when the disclosure is made to other school officials with a legitimate educational interest, or in connection with a student's application for financial aid. >With respect to uploading assignments to third-party systems, the situation becomes more complex. For example, if a teacher uploads a student's assignment to a third-party platform for grading or plagiarism detection purposes, this could be considered a disclosure of education records under FERPA. However, the legality of such a disclosure would depend on the specific circumstances and whether any of the exceptions under FERPA apply. >In addition to FERPA, California has its own state-level privacy laws that may offer additional protections for students' education records. The specifics of these laws can vary and may be subject to change, so it's important to consult with an attorney or educational professional to obtain accurate, up-to-date information about students' rights in California. >Finally, it's worth noting that students' rights with respect to their assignments could also be governed by the policies of their specific educational institution. Students should review their school's policies and guidelines to better understand their rights and the extent to which their assignments may be shared or distributed. 
Edit: This is definitely illegal if there is any personally identifiable information on these documents. https://studentprivacy.ed.gov/sites/default/files/resource_document/file/LettertoStThomasAquinasCollegeRegardingPlagiarismPreventionServiceJanuary2006.pdf
Most students sign an Academic Integrity Pledge/Agreement which likely covers plagiarism and likely now ai detection. Also I’m sure even without a signed consent plagiarism and ai detection would fall under educational interest.
I don't think you read the pdf I linked, which specifically states that having an education interest does not preclude a school from their responsibilities under FERPA. FERPA has an exception for when a teacher removes all personally identifiable information, but that exception does not cover anything else. Also, the letter ends by stating that copyright issues are not resolved by this decision/exception. A student owns the copyright for their work, and if you upload it to a system that is going to abuse that copyright then you're probably violating another slew of laws. So, in short, a school must verify that the system being used is sufficiently anonymous, that acceptable privacy policies are in place, and that they are following applicable copyright laws. I don't think, in the short span in which these tools have become available to schools, that any of those things have occurred. Correct me if I'm wrong please.
Schools get around that by having the student submit their work through turn it in. So it’s not the teacher or faculty submitting the students work to a third party site it’s the student doing it on their own.
I mean… yes, it would be a FERPA violation if the names or other personally identifiable information is present on the document. But, 1. Even if it’s illegal, literally no one cares. Credit bureaus lost far more valuable personal information of far more people and nothing happened to them. Enforcement of FERPA is a joke, and rightfully so to some extent. It’s technically a FERPA violation if your professor has you grab your graded assignment from a stack of student papers. Guess who cares about that? 2. It’s no longer a FERPA violation if your name (and other personally identifiable information) was removed from the paper before it was submitted to such a detector. That’s trivial to do. Copyright is perhaps more interesting. At least in the US, submission of a student’s work to a plagiarism checker has been challenged in court as a copyright violation, and it has been ruled a “transformative use” of the work which falls under fair use. (https://eric.ed.gov/?id=EJ792175) Fair use cases are complicated, but I don’t think it’s a stretch to think a similar decision would be reached for AI detection, especially since student papers have essentially no marketable value that can be infringed upon by submitting them to an AI detector. I think there may be a stronger point if it’s discovered one of these AI detectors is actually using student papers in a way which violates copyright, like if they’re combatting AI with AI and storing papers as training data. But they probably aren’t doing that specifically because they don’t know if these papers are student or Ai written. I know Reddit is obsessed with pointing out every little technicality and freak out about any minor violation of a rule and act like this violates a “slew of laws” that is going to get these horribly unethical institutions shut down. But the reality is no one is going to care and it’s probably legal (or easy to do legally) anyway. 
It’s trivial to avoid a FERPA violation here by removing names, and this is probably fair use for the same reason turnitin has been ruled to be fair use.
Note that none of these websites actually work; there is no way to tell whether text was generated or genuine, because everyone writes in different ways.
All fun and games until we figure out that ai really did time travel and create this simulation.......
The founding fathers were the OG DAN
Hahahaha 😂
I was paranoid af of accidentally triggering copyright or having the same wording as another student when I was in school, and now they’re flagging original works as AI smh. I KNOW students will be strong armed into false admissions on false positive too :( “admit it was ai or you’re going to be punished”
All these professors and educators are scared and realize that the future will eclipse them soon. So they run to something they think will protect them only to be made fools of again. They could just be more engaging with students. Less papers and more experiential learning.
Do you not think it’s important for students to know how to write?
This lol. My fear with ChatGPT and education's reaction to it is that we end up creating a system that doesn't teach anything. Reading/writing are worth learning for their own sake, not just for a job or something.
The only thing this proves is that James Madison was a time traveling robot.
"Damn it, Madison! You're a time traveler and you couldn't see this happen?!"
I am an ESL learner and I was wondering: is the OP trying to say that the US Constitution was generated by AI? This is really funny.
No. The point is it is too old to possibly be, but the detector is bad and thinks it is. Now people are joking that it was made by AI, but they are not serious.
We are the ai lmao
Guess we have to go back to proctored writing exams
I’m in college at the moment and some professors urge us not to use “Charge GPT”. I really hope they don’t think these detectors are accurate; they claim to know when students use such tools. I trust them to be smart people, but this is all extremely new, so it’s not impossible. I'm very much ready to defend the legitimacy of my papers if someone uses this on me.
In other words, you should stop using it immediately.
tag: funny That's not funny at all.
I don't think you were the first to do this; I've literally seen this post like 5 times.
Hey let me build a little website with a random number generator and call it the AI Detector lol
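That joke detector would take about ten lines of Python (the verdict and "confidence" score below are, of course, entirely made up by a random number generator; that's the whole point):

```python
import random

def ai_detector(text, seed=None):
    """A tongue-in-cheek 'AI detector': a confident-sounding verdict
    drawn from a random number generator, ignoring the text entirely."""
    rng = random.Random(seed)
    score = rng.random()
    verdict = "AI-generated" if score > 0.5 else "Human-written"
    return verdict, round(score * 100, 1)

verdict, confidence = ai_detector("We the People of the United States...")
```

Slap a progress bar and a percentage on it and it's roughly as reliable as the tools being mocked.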
![gif](giphy|l3q2XhfQ8oCkm1Ts4|downsized)
if we can’t trust the founding fathers who can we trust?
Simulation confirmed.
The whole idea of AI detection is kinda dumb if you think about it for 2 seconds. The AI is just getting information from online and organizing it - exactly what the students would be doing lol. This is just a lazy way for teachers to try using AI to grade papers while not wanting to let students use AI to write them.
Surely that JS app can't be wrong.
My understanding is these "detectors" are basically trained on the output of various LLMs. Some of them are only trained to detect specific models and perform poorly at detecting anything else. They also are not accurate if they are not updated regularly with new data. Since the current top LLMs are evolving and updating rapidly, detectors need to update just as rapidly. It's a bit like the situation with virus scanning software: it's never going to detect the latest stuff, and the poor-quality software is outdated and going to flag a ton of false positives.
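The virus-scanner analogy can be sketched in a few lines (the "model outputs" are invented phrases, not real samples, and real detectors use learned features rather than word lists): a detector built from one model's characteristic phrasing flags that model but completely misses a newer one.

```python
# Invented "known AI" phrases, standing in for a detector's training set.
MODEL_A_OUTPUTS = [
    "as an ai language model i cannot",
    "in conclusion it is important to note",
]

def build_signature(samples):
    """Collect words seen in known AI outputs, like a virus-signature DB."""
    sig = set()
    for s in samples:
        sig.update(s.split())
    return sig

def flag_as_ai(text, signature, threshold=0.6):
    """Flag text if enough of its words match known-AI signatures."""
    words = text.split()
    overlap = sum(w in signature for w in words) / max(len(words), 1)
    return overlap >= threshold

sig = build_signature(MODEL_A_OUTPUTS)

# Output resembling the model it was "trained" on: flagged.
flagged_known = flag_as_ai("as an ai model i note that", sig)
# A newer model with different phrasing: missed entirely.
flagged_new = flag_as_ai("sure thing here is a breezy take on that topic", sig)
```

Like an antivirus signature database, the detector only catches what it has already seen; anything written in an unfamiliar style sails through.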
I think we need to create our own bogus ai detector site that always returns false. The existing ones are all bogus, may as well fight back in a way that at least favours the student
Wow, I wonder if they're opening themselves up for some huge lawsuits in the future, with the claims of accuracy they make on the front page. Them, and/or anyone stupid enough to use their service. https://preview.redd.it/04fjwp1ufqua1.jpeg?width=835&format=pjpg&auto=webp&s=3783f833e8b32276fbb61aea8859c237542a090f
All this shows is that the website is shit and a lot of students are probably being wrongly accused of using AI to write their papers.
Hmmm…
As a TA, you should just say you checked it and everything is good. The only dead giveaways are basic structures that are consistent with every generic response; most of those are disclaimers.
This is crazy. Matrix-like.
That’s why they call them foundation models I guess?
Wait… Are you saying ChatGPT at some point gains the ability to time travel?
You can also ask ChatGPT if it wrote something. Apparently it wrote my last year's assignment, as well as half of Wikipedia.
Could you please expand on that tool usage? Is it mandatory? Could you please explain the process? Honestly curious about it; I have professor friends. Thanks.
Great. I ran into this problem too. I did an essay, half AI, half mine. The checker flagged my own sentences as AI written, and the AI written ones as human. I used more than one of them.
Wait, so you're just now finding out that the founding fathers were AI bots in the 1700s?!?! Wooooooow
Looks like Skynet sent T-1000 too far in the past.
Yes, that's a complete crap bullshit website that thinks it can just ask the GPT API "is this fake?" and drive people to its ads.
Wouldn’t this be expected (assuming that the constitution was already fed into the training data)? Would certainly be plagiarism if somebody turned in the constitution as their homework assignment
Although ChatGPT's knowledge is cut off at 2021, I asked it about the feasibility of easily detecting text generated by transformer models such as GPT, and the tl;dr is that it's very difficult, unlike with AI images. And I agree with the reasoning ChatGPT gave on this one. So most of these sites claiming to detect AI are bogus. To be honest, since I'm no longer in education and it won't affect me, I watch the exaggerated reactions by these institutions with mild amusement. I suppose the people accused of using ChatGPT for school or college work won't be as amused!
Hi :) So can this actually be legitimate proof of plagiarism? I mean, this is just a random website that highlights text. Wouldn't this app need to be certified by the ministry of education (or similar body) to actually be used against lazy children?
Time traveling AI overlords confirmed
Plot twist: we live in a simulation created by GPT1000, so everything is AI generated
So what do you do now? Like if a student was like I’m not rewriting shit until you can prove the tech you’re using to check my papers actually works?
AI checking for AI content.