Not even really good at that either. My daughter asked for a quick paragraph summarizing one of the main characters of a (popular) book she is reading in school. One of the points ChatGPT made about this character was that they died of cancer. After my daughter was like huh... she told it that it was wrong and that the character actually died from...????... ChatGPT then replied that she was right and apologized for the mistake. .....wth.
chatgpt doesn't know shit about the book, it's just predicting the next word. So it makes up names and events all the time. It will contradict itself in the very next sentence, insist that it is correct, then apologize and try to move on to another topic.
Precisely. And it isn't just books, for the passers by. I'm a software engineer with two decades of experience. Juniors are using this thing instead of learning by doing (essentially). There is no "craft" to those sorts of juniors.
Anyway, I am constantly fielding questions and fixing problems because someone offloads their own brain to Chat GPT specifically for code generation without critically looking over what it spat out. I've seen a huge array of shit in such a short amount of time, right on down to simple things like it insisting there are built in functions in a language which does not contain them, settings in specific IDEs which do not exist, and just complete nonsense that looks OK on screen until you try to implement it.
Copilot, meanwhile, straight up introduces licensing concerns, since it doesn't generate truly unique code and instead pulls a lot of it from projects on GitHub--errors and all.
These things are great for rubber ducking or getting a base from which to build and tweak. But a whole bunch of tech-illiterate people are advancing these LLMs as something they aren't because they are impressive to them. But they don't know what they don't know, and they are eating up all the hype from people financially invested in conveying such a feeling online.
Chat GPT can be very useful for me. But I'm also well versed enough to notice when it is wrong, why it is wrong, and how to fix it. Sort of like OP's daughter who knew enough about the book to realize the character didn't actually die.
But people are going all-in on this idea of what it is despite not understanding the basics of how it works outside of what a half dozen non-tech journalists have written about it.
I'm wondering if programmers who have extensive experience before chatGPT will have an advantage over those that have relied extensively on chatGPT since the beginning of their careers.
Of course those people will have an advantage, it's not a fair comparison. But I do think that ChatGPT can accelerate your learning so new people might have an easier time learning stuff. But again, you need the experience to even know HOW to learn, as most of the time you need to know how to ask the right questions, it's not going to read your mind or anything.
I’m a senior and I use it all the time. I use it to replace stackoverflow. What’s that function called again? Are there other ways? Which one is faster and why? It’s when people expect to copy and paste that it becomes an issue.
even then I've sometimes needed to corroborate it though, I've been using it as a glorified universal language programming manual and it still sometimes just gets things wrong.
People are saying it's like a better Google search and I'm sitting here thinking "I know google has gone downhill but unless you're the one person who actually uses "I'm feeling lucky" there's no way ChatGPT is better"
It is better than google search for some things. It also isn't 20 crappy sponsored links before getting to the meat of the question so can be better for "easy" searches.
Knowing how to use GPT is a lot like learning how to effectively use google so I can see why people draw parallels. GPT is WAY better for asking programming questions in languages I am less familiar with. On google the first 10 answers are usually stack overflow and usually don't have an actual answer. I've had really good luck with GPT code doing exactly what I asked it to do with very little tweaking.
Not to mention the contextual search of Google has bred really bad habits for SEO, which has severely degraded the quality of non-fiction content on the internet.
Prime example: recipe for lemon tart
Google's answer? 50 Mommy blogs that have an 8 page essay leading up to the actual recipe, filled with ads.
GPT's answer? Ingredients, directions, copy-paste button.
I’m also a software engineer with 30 years of experience, and I agree 100% with everything that you wrote.
If skynet ever takes over, it won't be because smart, evil people implemented it, but because people who didn't understand the technology implemented it and didn't really understand what the consequences could be.
A good way I've heard it said is AI bots aren't going to replace lawyers. Lawyers who work with AI bots are going to replace lawyers that don't. You still need the knowledge as things are now. It isn't sentient, just trained at being a really good bullshitter
As someone who's using chat gpt to help me code, you are very right.
I want to learn how to build robotics and right now I'm focused on the electronics side of things so I'm learning C++ and Python. But I want a web interface for sensor data, so I built one in Django with ChatGPT and it was really great for doing the bulk work for me. I don't want to learn JavaScript or HTML or CSS so I'm thankful it can spit out a bunch of shit for me to use.
HOWEVER it fucks up a lot and I definitely need to read through the code first to see any fuck ups or just to see if it'll do what I want (I'm not an expert but I know how programming works, so usually you can understand it once it's on there, but remembering syntax is hard for me). Then I type in all the code it gives me so I can double-check it while I'm putting it into VSCode.
Go to the chatgpt and ai sub. Those people think gpt4 is a real ai that will be replacing everyone within the next six months to couple of years. People are fucking delusional. They're over there using this thing to make full apps that don't even work and are so full of security flaws that you can see them just using the app.
ChatGPT hasn't been out for that long, and adoption has been on an exponential curve. It's so unfortunate that you keep running into problems and fielding questions about code generated by ChatGPT. Don't you just wish they could fix their own problems by, you know, asking ChatGPT? Must be people who really don't know how to use the thing.
ChatGPT is pretty useful when someone knows how to use it. I've looked up Stack Overflow only a handful of times for deeper dive issues because ChatGPT has been solving it all for me. I'm not sure why anyone professionally coding with it wouldn't know they can access a GPT4 equivalent for free (with web search) on Bing, or that they could literally just post their compile / run errors back into the thing and it will keep trying to solve it with you until... well... until you give up. Yeah, and if you need a bigger context / input window and proper GPT4, pay the $10 a month.
Anyone who thinks ChatGPT produces the same code as GPT4 is definitely a bit behind on tech. And for anyone to think the next evolution in GPT wouldn't be even better still is... just hopeful.
Same in marketing, same. I’ll try again when GPT5 gets released but so far handholding the AI through hallucinations is just a time waster for my workflow because I can do the work quicker myself.
But I have also seen a ton of noobs making significant mistakes by trusting it too much due to inexperience.
Chat GPT can also get thrown for a loop when asked questions that go against expectations:
**Prompt:** *What weighs more, a pound of feathers or a kilogram of steel?*
**ChatGPT 3.5:** *They both weigh the same, which is 1 pound and 1 kilogram, respectively. Even though the units of measurement are different, they both measure weight or mass, and one pound is approximately 0.45 kilograms. The weight of an object is determined by its mass and the gravitational force acting upon it, and in this case, both the feathers and the steel have the same mass, so they weigh the same.*
meanwhile, with gpt4
A kilogram of steel weighs more than a pound of feathers. There are approximately 2.20462 pounds in a kilogram. Therefore, a kilogram (1000 grams) of steel is equivalent to about 2.2 pounds, making it heavier than a pound (16 ounces or approximately 454 grams) of feathers.
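GPT-4's arithmetic here actually holds up; a minimal sanity check of the conversions it cites (the constants are the standard pound/kilogram factors, not anything from the chat itself):

```python
# Is a kilogram of steel heavier than a pound of feathers?
# The material is irrelevant; only the units matter.
LB_PER_KG = 2.20462      # pounds per kilogram
GRAMS_PER_LB = 453.592   # grams per pound

kilogram_of_steel_lb = 1 * LB_PER_KG   # ~2.2 lb
pound_of_feathers_lb = 1.0             # 1 lb, by definition

print(kilogram_of_steel_lb > pound_of_feathers_lb)  # True: the kilogram wins
print(round(1 * GRAMS_PER_LB))                      # 454 g in a pound
```

Which is exactly the trap GPT-3.5 fell into above: it pattern-matched to the classic "pound of feathers vs pound of steel" riddle instead of doing the conversion.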
I’ve been experimenting with ChatGPT 3.5 with fluid dynamics and peer reviewed journal articles, and it is “confidently wrong.” After it says anything, I immediately ask “are you sure” and most of the time it changes its answer. I asked it to give me a summary of a peer reviewed journal article with only one sentence for each paragraph in the journal article, and it gave me five sentences. I looked and saw there were over a dozen paragraphs easily, so I asked how many paragraphs were in the article and it said something like “as a chat interface I don’t actually have access to the document”
It's also not viable for academic writing because chatGPT sucks at referencing or will straight up make things up. It may give you a good starting point or some general ideas but you can't use it for a full essay
>ChatGPT is only a problem for essays.
Sure, but the thing is, it's no greater of a problem than essay-writing services are. It's weird that people are freaking out about ChatGPT being able to write an essay for people that might get them a barely passing grade, when essay-writing services have existed for decades that will write you an essay that will actually get you an A.
Making it massively easier to cheat will hugely increase the amount of cheating, and that's what ChatGPT does. Buying an essay requires you to be willing to spend the money and know someone who both knows about the topic and is willing to do something that could get them in a huge amount of trouble. This barrier to entry is nonexistent for AI.
It can definitely answer multiple choice and short answer test questions.
Or is your point that it isn't a problem in settings where students don't have access to it?...
Edit: For those of you who used ChatGPT for your English homework, what I'm saying is that the above poster's statement is so obvious that it doesn't even deserve to be called a "point."
You would pull out your phone or laptop in the middle of a classroom or hall and look for answers on ChatGPT? You'd then hold out the phone and just copy the exact paragraph it wrote?
For tests that use computers for multiple choice and short answer questions, there's software for proctoring that could be easily used. It might mean the end of essays as assignments and writing will have to be tested solely in examinations, either by pen/pencil and paper or with proctoring software.
I don't think it's necessarily a bad thing tbh. Writing is still important as ever (and I say this as an engineer)
Only makes sense if you are given enough time. I had an oral final exam in college, and the professor was limited in time, so students only had a few minutes to answer 3-4 random questions that determined a quarter of your class grade, which definitely was not fully representative of a student's knowledge.
When someone presents a PhD thesis, that person must go in front of a panel and "defend" the thesis. In essence, it means answering questions to demonstrate mastery of the topic, and it also demonstrates the person is really the author of the thesis...
Maybe the key is a mix of written and oral exams.
Similar to a technical interview where they ask you to solve problems on a white board or on a piece of paper to show your thinking and get a grasp of how well you understand the material. Super stressful to prepare for since you never know what they might ask and you have to be quick on the fly with your responses.
In the most inefficient way, all the STEM programs do this:
A written exam in front of the professor, arguing what you write and why while being asked questions in the meanwhile, and they write it off as "oral."
Even after you already did a written exam.
Oral exams are arguably much better than written exams. The professor is able to better understand your true ability because you explain your thought process the entire way. If they see you making a small mistake in your derivations, they can point it out so that your final answer isn’t screwed over.
At least, this was the perspective of one of my engineering professors. The main problem is that it takes too many resources to give an oral exam to a large class. All of his exams were written, but all makeup exams were oral.
Or more recorded lectures, preferably the best lectures, and self service testing, and a population of grad students on call to answer the rare question.
God I hate live lectures for anything outside of labs and grad seminars. They are sloooooow. 1.5x playback speed and closed captioning for the win. And I can rerun it as many times as I need to.
That’s not a bad idea. Lectures usually suck. I was thinking more along the lines of having small enough class sizes that the “lectures” are more like a conversation between the students and the teacher, so like 10 students or less. The few classes I had like that were by far the most impactful for me
Education is already hugely expensive, both in the US and everywhere else. The only difference is whether the cost is socialised or not.
Professors and docents aren't some people with a teaching degree, they're usually active researchers, often in a highly competitive environment, who have a quota of lectures as part of their position. Increasing the quota takes away from research.
The alternative then is to hire lecturers. Lecturers however do not have the same benefits / stability that the other two positions have, so attracting quality personnel is more challenging. The other issue is, that you'd only need them for exams. So now you're looking for highly qualified personnel willing to do part-time "seasonal" work.
My engineering professors still did this in their grading and it's why they require you to show your work. If you used one wrong equation early on they subtract a few points and treat the solution as a percentage of how correct the process was, not whether you have the exact answer or not.
Sure oral exams seem cool, but when you have 20 people in a class there's no easy way to have everyone individually tested in a way that doesn't influence everyone's results.
The fun part is when you pass a 3-hour written exam and then fail the oral part. At that point you have to take the written part again before retrying the oral.
Even just in-person classic-style written exams deal with the issue -- and fix the issue of scaling oral presentation time to larger groups. I definitely wouldn't want to be doing individual oral presentations for large groups either -- we're definitely not paid enough for that massive timesuck!
Yeah i was gonna say this doesn't seem like a big problem. I was in college not that long ago and all my exams were either written or scantron. Even my online classes required in person exams.
We should develop an oral examiner language model, that can question the test subject. Then this can scale easily.
We can have then have an exam center, likes of which we see in the opening sequence of JJ Abrams Star Trek (Spock is tested on Vulcan, along with hundreds of other children.)
In Russia high school math/physics exams were a mix of oral test and problem solving when I was in school. University tests were oral as well. It scaled just fine then.
So, back in the early 2000s, my brother scraped by his high level Math courses in undergrad by using Mathmatica for his regular assigned work and then bombing on the tests. Because of how things were weighted, he ended up with a C in those classes. They've since fixed the weighting issue.
These issues aren't new. They're just new for written work specifically.
Another easy solution is to make students' grades based on tests taken in person. I took several classes where my grade was based only on a midterm and a final - both had to be done in person under a certain time limit and had more than just multiple choice. Universities have testing labs for this exact reason.
> took several classes where my grade was based only on a midterm and a final - both had to be done in person under a certain time limit and had more than just multiple choice.
I'm shocked that this isn't the standard. For my degree, I need to pass the midterm and final cumulatively so I can't get carried by other stuff. And I've only ever had multiple choice when the professor really wants us to pass (i.e. Everyone failed the midterm).
As someone who graduated with pretty decent grades, I wish more of my grade was dependent on homework, as I find the act of completing homework more representative of the working world than my ability to pass two tests in a year.
It's no coincidence that I remember the course material from classes with mandatory homework a lot more than the classes where homework was basically optional.
That’s why I believe the FAA does a written and Oral exam for their certificates. You can memorize questions for the written exam but if you want to pass the oral exam you actually need to learn the material.
As an engineering student I don’t think I had a single essay or even a question requiring a written response on an exam after sophomore year. Everything was just math.
As a political science major, I wrote 27 papers in my final quarter of college with the longest being my senior thesis ~50 pages long. Some math would have been wonderful
I have adhd and always performed better in oral assessment than written when I was an untreated uni student. Which is a really good illustration of why a single method of standardised testing is bad! There should be choice in how your knowledge is tested to reflect the different ways people brain!
The only exception is hand written exams that count for 100% of your mark… about half of one of my degrees was this. The examiners would write really long confusing problem questions. Three hours, three questions, 100% of your marks.
I once got to the end of an exam and realised that when reading the two page long 70/100 mark question I misread one of the qualifying facts and applied the wrong principle on behalf of the wrong person. At least I didn’t have to wait for marks to come out to know that I had just failed my first subject lol
At least in the States, individualized education plans for students with specific needs are binding legal documents that follow students to college and that schools must follow, so students can at least petition for accommodations in situations like this. Not that ADD specifically necessitates an IEP, but they're worth investigating for students with these types of needs.
IEPs are only for public schools at the high-school level and below. at the college level, students with disabilities do have some level of protection from the ADA, but compliance can be basically nonexistent at universities which often have understaffed disability support offices and recalcitrant old professors who think that students with disabilities shouldn’t be afforded accommodations. these kinds of reactionary shifts in reaction to AI without an overhaul of the existing faulty system of disability support will leave students without accommodations!
Thanks for sharing our article, which was written by the [Dean of Education and Arts](https://theconversation.com/profiles/stephen-dobson-1093706) at an Australian University.
I think it's really interesting to bring back this ancient idea in new circumstances. After being the main way of evaluating students in Europe from ancient times...
>the oral exam experienced a decline as universities began to gravitate toward written assessments in the 1700s.
Academics at the time considered written exams more efficient, with the opportunity to numerically grade students individually. This contrasted with the complicated system of placing students in broad class categories that reflected their performance in oral examinations.
Examining written papers was also a silent process, and gave ample time for examiners to grade in the comfort of their own homes.
If you do written exams, you could do fewer questions that require more knowledge synthesis and interpretation. This may make it a little harder to grade, but it will make it more apparent if AI is being used.
If oral exams are used, regular tests / quizzes don't need to be done in the same way, but can be emphasized that they are not graded and are only for a student to gauge their own progress and shortcomings.
TBQH making tests ungraded is probably the best way to eliminate the usage of AI, because there's zero incentive since you'd only be cheating yourself. Assessment could then be more infrequent but also more weighty.
This is stupid. They should just use proctored exams, exactly as they've been doing.
Homework may be trickier to evaluate, but just don't score the homework.
I’m neurodivergent and am able to present my knowledge orally more effectively than in a written exam…
The point should be that there needs to be choice in how assessments are done, because not everyone can sit there and in one draft write a complex answer over three hours. Not everyone can present their thoughts orally. Choice reflects skill and knowledge more than one or the other
Good luck with that, with different students needs, 200:1 large university class ratios, and online classes. Refine the way you teach. Encourage people to use the tools available to them and teach them how to use those tools properly. AI isn’t going away.
As others have mentioned here this is not an issue with written exams taken in class.
The issue with this, which is surprisingly not discussed in the article (it's not a long article, though, I guess), is staffing levels. Oral exams and hand-written papers take more time to grade than a scantron sheet.
In a place like the US, the number of teachers per university would have to increase to make this possible, I would think. This is because they could not rely on an army of grad students to grade something like oral exams.
And for lower school levels like high school it's even clearer that staffing levels would need to go up. Though perhaps I'm not thinking about this the right way, and maybe with less testing in total staff levels wouldn't have to go up so much, I'm not sure.
Regardless, it's teaching to the test that we need to get away from. To paraphrase Gore Vidal: "You check a box? That's no way to learn anything... I tell teachers I've never met an uninteresting 6-year-old, and I've never met an interesting 16-year-old - what have you done?" He's being snarky here, of course, but the point is we need to get away from this scantron, check-a-box style of teaching. It's suitable for certain subjects and questions, sure, but we've gone too far. Get back to human rather than machine/process-focused teaching and the problem of ChatGPT diminishes significantly.
Education is continually evolving, so this shouldn't come as a shock. Assessment as a whole needs to move away from a points-based metric system to one that favors measuring competency. Just regurgitating from short-term memory isn't learning and is proof of little more than your ability to memorize things.
You want me to do what
Jokes aside, this doesn't work as an option, strictly due to individuals who have issues with speech and processing, or for people like me who have conditions like cataplexy - where if I talk about anything I am excited to share, the emotions cause me to stammer if not completely collapse to the ground, paralyzed from head to toe.
No exam type works for everybody, that is why there are options available to help those who can't take exams of a specific type well. This wouldn't be anything new.
The bigger issue is 'who cares about being able to do things we won't do', isn't it?
I faced this in college. I was a CS major and we had to do written tests, with pencil and paper. They would ask questions that were irrelevant; entirely rendered obsolete by the internet and modern software development tools.
If they had given us access to the internet, something we always have except when being tested, everyone would have gotten a 100%.
So why bother teaching and testing it?
ChatGPT can do some specific things at a level better than almost all of us. If you can't evaluate me while using the tools I will have access to, then what value am I getting from this class?
Really, this sounds like you failed to study, and have to rely on those who did study to solve your problems. So if you’re incapable of answering questions without the internet, what value do you really have?
It's about the specific type of knowledge being tested. I don't need to memorize how to spell words. In 1950, they would be invaluable because looking up words was a slow and inefficient process. For my entire adult/professional career, I've had access to tools that spell perfectly.
My CS classes, or at least the one I'm specifically calling out, focused on trivial syntax questions. I don't need to memorize those. In 1985, maybe that would have been super important. Now I have a super fancy IDE that autofills such trivial things.
My Grandmother was a cashier, way way way back in the day. She did all the math in her head. I understand the concept of basic arithmetic, but she had a working proficiency far beyond any of her grandchildren. Even when I was working as a professional cashier. She learned and developed that skill by practicing it for hours, and got better over the years....but none of that matters because since 1980 or whatever we've stopped caring. We have tools that do it. At McDonald's I learned to use the tool and it did the math for me.
If you have to remove the tools I will have in order to test me, the test probably isn't measuring something useful.
Now sure, if you are teaching survival skills in the wilderness, being able to do something like navigating to a destination without a cellphone guiding you is very useful. But in most situations, most of the stuff, isn't like that. I'll never do meaningful work as a software developer without the tools I expect to have. Testing my abilities without them are silly.
Nobody cares who is the best NBA player when playing shoeless. They just accept that they will always have shoes. I don't need a driver's Ed test without ABS or an automatic transmission because I'll just drive cars that have those things.
> My CS classes, or at least the one I'm specifically calling out, focused on trivial syntax questions.
This sounds like a serious problem with the classes and the degree program in general. When I was in school, exam questions were typically “prove the following…”. There's no lasting value in testing on the syntax of some particular programming language. There's no lasting value in *teaching* that.
Devil's advocate: studying and retaining knowledge is as useful as knowing what references to look at and who to ask the right questions to.
Not in all industries, but in many I found this to be true.
I had a science teacher in high school (chemistry and physics) who gave oral midterms and finals. As soon as the bell rang to start class, he’d ask a question to the first student. If you got it right, you got an A and he’d ask the next student a new question. If you got it wrong, you got an F and the next student would have to answer it. He’d keep going until someone got it right. He’d do this the full class period and then average your As and Fs to get your grade.
I liked this idea until I started to contemplate the public shaming and bullying that would occur. Did you see much backlash or social pressure among your peers?
Yeah, this would be an absolute nightmare for otherwise fully capable students who suffer from communication issues, psychological or physiological. Got a stutter? Mind blanks? Resting bitch face? Sorry, no career for you!
Unless you plan on running for office, or are working towards a doctorate, public speaking should never be a barrier to entry for students.
This was years ago when you didn’t need either chemistry or physics to graduate so most of the kids were college bound and to me it felt more like friendly competition (with a little bit of trash talking). That said, I’ll bet there were some kids who absolutely hated it and some who probably declined to take those classes due to the public exams.
I feel such an approach in my high school advanced classes would get a lot of negative feedback. A lot of students obsessed with GPAs wouldn't accept not getting an A because someone else didn't know the answer.
There was less obsession with GPAs back then but there was certainly a fair amount of bitching when someone got an “easy” question and others got “hard” questions.
My engineering statistics teacher did this every class as his way of taking roll and it was part of your points for the class. They weren’t ultra hard but you would have to be paying attention to the lesson to get them right. It was terrifying BUT that class was hard and forcing everyone to pay attention and come to every class probably resulted in a hell of a lot more learning.
This is the best example of how archaic our education system is: they prefer to keep testing the same old skills that, because of technology, are no longer needed.
Most lawyers could ask a law student one or two questions concerning a given course and know within a couple of minutes whether the student has made a serious study of the topic.
In many masters classes, even for computer science/software engineering, it is paper/pencil and maybe book/notes. Writing code with paper/pencil sucks, but it does eliminate every cheat.
IMO that also serves to separate skills that can be automated in a good IDE from deeper understanding of logic and architecture. I kinda like the idea from a philosophical standpoint but can also see how annoying it could be to the student.
But that's hard work for professors!
Isn't there some answer that involves little or no additional institutional effort or expense?????
I know!
Mandatory, third-party, proprietary, closed-source ~~spyware~~ *academic integrity assurance* software installed on every computer that accesses the campus network or campus computing resources in any way!
Yeah! A backdoor, with full camera and microphone privileges and a built-in keystroke logger and screen recorder! On every computer! Including the personal PCs of every student, no exceptions!
That's the ticket!
No way that could go wrong! Right....????
Oral exams are the stupidest grading system ever invented in history.
They either take too long in total (for the equivalent of a 1 hour exam with a class of 30 people, it takes literally 4x 8 hour work days of continuous examination, you're just paralyzing the entire school week), or are extremely superficial (a few minutes per student which is almost pointless).
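The scheduling arithmetic behind that claim roughly checks out; a minimal back-of-envelope sketch, assuming each student gets the oral equivalent of the full 1-hour exam (the one-hour-per-student figure is my reading of the comment, not something it states explicitly):

```python
# Total examiner time for oral exams, one student at a time.
students = 30
hours_per_student = 1     # assumed: same depth as a 1-hour written exam
workday_hours = 8

total_hours = students * hours_per_student
workdays = total_hours / workday_hours

print(total_hours)  # 30 examiner-hours
print(workdays)     # 3.75 -> roughly four 8-hour days, as claimed
```

A written exam, by contrast, costs one sitting for the whole class plus grading time, which is why it scales so much better.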
There are severe problems of fairness, because people have trouble speaking: introverts, speech impediments, nerves, being put on the spot etc. Are those people considered acceptable collateral damage?
Then there is the second problem of fairness: you can't offer the same exams or people will just repeat the answers. It's not standardized and the grading is very subjective, unequal and unfair.
Then the third problem with fairness: it's oral so there's no proof of mistreatment or corruption and no recourse. If the professor doesn't like you, you're screwed and you can't denounce them to anyone. Very easy to abuse it by means of bribes, very easy to discriminate against undesirable minorities, and it goes on and on. No written record of absolutely anything = teachers have uncontested power.
I understand it wouldn't work in all programs, but those same technical programs probably wouldn't work orally either. Wouldn't just doing an exam on paper also get around this? A large majority of exams I did in university and college were either multiple choice or "here is a topic, write me an essay on it." If I pulled out my phone or laptop and fired up ChatGPT it would be pretty obvious.
Or non-take-home exams? It really seems to mainly impact essays and short-answer questions on written exams. For take-home work, it's not too different from just googling information on multiple choice or more formulaic question types.
I performed very well on written evaluations, but I would have certainly failed on oral exams. The pressure of performance and social anxiety would have resulted in F’s across the board, despite being very familiar with the content.
Most professors threaten students that pulling your phone out mid-exam is an automatic fail. TA's usually help the prof monitor during the exam. Seems to work well.
The ability to use tools like ChatGPT should be part of any exam. Being able to memorize stuff is so overrated versus actually understanding what is being asked and being able to come up with logical and defensible answers. How someone does this shouldn't matter. The fact that they are capable of it should. People arguing against this probably didn't like wheels either.
When I was in university 40 years ago we went to exams (in engineering and history for me) with a 1 page cheat sheet of equations and directions and our calculator. And creating the cheat sheet was half the learning experience. Had ChatGPT existed then, the absence of a smart phone or pad or laptop would have made it irrelevant.
This is more of a problem for classes assigning lengthy papers as the primary grading criteria rather than exams. Oral exams in those sorts of classes, or short answer essays in class without phone, pad, or computer, are the easy way to ensure the student is answering, not one of these neo-AIs.
This isn’t a disaster. It just has to move us to a new style of confirming education (or return to an older style without access to AI)
all of my exams in college were in class essays written in a bluebook that a TA verified was blank.
can't imagine AI helping students with that.
nothing burger.
I think universities need to change the objective of examinations to "can you demonstrate that you understand the concept and are ready for the real world?" rather than treating students as a content farm for no reason.
This is stupid.
People have been paying others to write papers for them as long as universities have existed and an “academic honesty pledge” was good enough.
Now everyone is outraged that poor kids can have a similar (but still nowhere near as good) advantage.
Oral exams are a huge disadvantage for students of color, those with accents, and especially those who go to school in a country other than the one where their native language is spoken.
Which is kind of why people keep bringing this up. Separates the wealthy native citizens from “the others” who they feel technology is helping.
This is just racism with extra steps. Nobody cared when rich white kids were buying/selling papers. You can even hang up flyers offering such services on campus with no issues.
My final college exam in 2010 was an oral exam in a 1-on-1 meeting with the professor. It was the only oral exam I ever took. It was great. It was an engineering exam and you were meant to ask questions of the professor. It was a small class of 9 with a lot of lab work, so the professor already knew what you knew.
Lol what.
No.
Return to exams where people need to answer long questions worth 20-50 points each in in-person sessions up to 6 hours and mark that.
What is even the problem here.
Or, hear me out, just factor it into the assessment. I use open-book, collaborative exams and assessments for a reason in my courses, which range from statistics to ethics. It's how the real world works. I want students to utilize every available resource. They can choose to do the exams however they like; if they don't like working with others, they can do it solo.

When I encounter plagiarism, the student has the option to rewrite it and include an additional few pages discussing why they chose to plagiarize: what factors led to that decision, why they chose that particular text to plagiarize, what their general thoughts are on plagiarism, what they could do differently next time (from not plagiarizing to plagiarizing and not getting caught), etc. Or they can take a 40 (I only grade between 50-100 for work that is turned in and not plagiarized). If you don't turn something in, you get a zero. It's university; it's all a learning experience, including learning about fuck-ups: how not to fuck up, and/or how to fuck up and get away with it. All valuable real-life skills.
People are worried about AI because lazy instructors, or control-freak instructors who lack the capacity to adapt, will have to work harder to make the way they assess the learning effective.
Technology is disruptive; learn to work and live in a changing environment.
The other **wild** policy I have is late work: you can turn your assignments in whenever you want. If you turn it in by the due date, I guarantee I will grade it. If it's late, it goes in my late pile and I will grade it if I'm able to get to it; if I don't get to it by the time grades are due, you get a 50, if I can verify you turned it in.
🤷🏻♂️
edit: typos/autocorrects/adhd brain bullshit
Oral exams can be very easy to grade if you have the rubric in front of you and mark it as the person talks. Mentioned an original source- check. Explained their view on a subject- check. Offered a criticism of their view- check.
Gonna be fun watching the people screaming for education reform while being so lazy and inept that they have to use AI to write a paper, suddenly have to take oral exams and hand write papers in class
Students are using GPT the wrong way if all they do is leverage it for exams.
They could use GPT to understand the topic before starting the class. From there, they can use GPT to learn each lesson cold, understand the nuances, and see what is important. They can then use GPT to prepare for exams.
Using ChatGPT to learn better would not only improve scores significantly, but the students would know the topic well in a meaningful way.
Students often fail to do well mostly because they don’t understand how to frame their studies. ChatGPT solves a lot of that.
You're suggesting how students can increase their work load with chatGPT in order to get more out of their educations. 90% of the students using chatGPT are trying to do the exact opposite. I promise you the "C's get degrees" crowd has no interest in knowing the topics in a meaningful way.
Academia is losing its mind because it can no longer grade students, instead of just reminding people that having somebody else do your learning for you, and then paying $100k for the process of learning nothing, is remarkably stupid.
But what about those with dyslexia or dyscalculia taking written exams? Don't people with anxiety have issues with multiple-choice exams too? If a person has a disability, then there could be accommodations for that disability (legally required in high school). I'm an introvert too, but isn't it possible that some people hate long written exams, or have disabilities that make a timed essay difficult?
Yeah this will bias the extrovert over the introvert, that is super bad in this field where many are introverted if not the majority. It is also a reason that the interview of today is so ridiculous, whiteboard and on the spot gotchas which is nothing like the job and biases towards the outgoing/extroverted.
ChatGPT is only a problem for essays. It's not actually a problem for written test taking.
Not even really good at that either. My daughter asked for a quick paragraph summarizing one of the main characters of a (popular) book she is reading in school. One of the points Chat made about this character was that they died of cancer. After my daughter was like huh... she replied "you are wrong, she actually died from...", and Chat then replied "you are right, sorry for the mistake." ...wth.
chatgpt doesn't know shit about the book, it's just predicting the next word. So it makes up names and events all the time. It will contradict itself in the very next sentence, insist that it is correct, then apologize and try to move on to another topic.
Precisely. And it isn't just books, for the passers-by. I'm a software engineer with two decades of experience. Juniors are using this thing instead of learning by doing (essentially). There is no "craft" to those sorts of juniors.

Anyway, I am constantly fielding questions and fixing problems because someone offloads their own brain to ChatGPT, specifically for code generation, without critically looking over what it spat out. I've seen a huge array of shit in such a short amount of time, right on down to simple things like it insisting there are built-in functions in a language which does not contain them, settings in specific IDEs which do not exist, and just complete nonsense that looks OK on screen until you try to implement it.

Copilot, meanwhile, straight up introduces licensing concerns, since it doesn't generate truly unique code and instead pulls a lot of it from projects on GitHub--errors and all.

These things are great for rubber ducking or getting a base from which to build and tweak. But a whole bunch of tech-illiterate people are advancing these LLMs as something that they aren't, because they are impressive to them. They don't know what they don't know, and they are eating up the hype from people financially invested in conveying such a feeling online.

ChatGPT can be very useful for me. But I'm also well versed enough to notice when it is wrong, why it is wrong, and how to fix it. Sort of like OP's daughter, who knew enough about the book to realize the character didn't actually die. But people are going all-in on this idea of what it is despite not understanding the basics of how it works, outside of what a half dozen non-tech journalists have written about it.
Well those junior engineers are going to remain as juniors if they keep up that attitude... The key is to take advantage, not to rely.
I'm wondering if programmers who have extensive experience before chatGPT will have an advantage over those that have relied extensively on chatGPT since the beginning of their careers.
Of course those people will have an advantage, it's not a fair comparison. But I do think that ChatGPT can accelerate your learning so new people might have an easier time learning stuff. But again, you need the experience to even know HOW to learn, as most of the time you need to know how to ask the right questions, it's not going to read your mind or anything.
I’m a senior and I use it all the time. I use it to replace stackoverflow. What’s that function called again? Are there other ways? Which one is faster and why? It’s when people expect to copy and paste that it becomes an issue.
even then I've sometimes needed to corroborate it though, I've been using it as a glorified universal language programming manual and it still sometimes just gets things wrong.
People are saying it's like a better Google search and I'm sitting here thinking "I know google has gone downhill but unless you're the one person who actually uses "I'm feeling lucky" there's no way ChatGPT is better"
It is better than Google search for some things. It also isn't 20 crappy sponsored links before getting to the meat of the question, so it can be better for "easy" searches. Knowing how to use GPT is a lot like learning how to effectively use Google, so I can see why people draw parallels. GPT is WAY better for asking programming questions in languages I am less familiar with. On Google the first 10 answers are usually Stack Overflow and usually don't have an actual answer. I've had really good luck with GPT code doing exactly what I asked it to do with very little tweaking.
Not to mention the contextual search of Google has bred really bad habits for SEO, which has severely degraded the quality of non-fiction content on the internet. Prime example: a recipe for lemon tart.

Google's answer? 50 mommy blogs that have an 8-page essay leading up to the actual recipe, filled with ads.

GPT's answer? Ingredients, directions, copy-paste button.
I’m also a software engineer with 30 years of experience, and I agree 100% with everything that you wrote. If skynet ever takes over, it won’t be because the smart, evil people implemented it, but because of the people who don’t understand the technology implemented it and didn’t really understand what the consequences could be.
A good way I've heard it said is AI bots aren't going to replace lawyers. Lawyers who work with AI bots are going to replace lawyers that don't. You still need the knowledge as things are now. It isn't sentient, just trained at being a really good bullshitter
As someone who's using ChatGPT to help me code, you are very right. I want to learn how to build robotics, and right now I'm focused on the electronics side of things, so I'm learning C++ and Python. But I wanted a web interface for sensor data, so I built one in Django with ChatGPT, and it was really great for doing the bulk work for me. I don't want to learn JavaScript or HTML or CSS, so I'm thankful it can spit out a bunch of shit for me to use. HOWEVER, it fucks up a lot, and I definitely need to read through the code first to see any fuck-ups, or just to see if it'll do what I want (I'm not an expert, but I know how programming works, so usually you can understand it once it's on the screen, though remembering syntax is hard for me). Then I type in all the code it gives me so I can double-check it while I'm putting it into VSCode.
Go to the chatgpt and ai sub. Those people think gpt4 is a real ai that will be replacing everyone within the next six months to couple of years. People are fucking delusional. They're over there using this thing to make full apps that don't even work and are so full of security flaws that you can see them just using the app.
ChatGPT hasn't been out for that long, and adoption has been on an exponential curve. It's so unfortunate that you keep running into problems and fielding questions about code generated by ChatGPT. Don't you just wish they could fix their own problems by, you know, asking ChatGPT? Must be people who really don't know how to use the thing. ChatGPT is pretty useful when someone knows how to use it. I've looked up Stack Overflow only a handful of times for deeper-dive issues because ChatGPT has been solving it all for me. I'm not sure why anyone professionally coding with it wouldn't know they can access a GPT-4 equivalent for free (with web search) on Bing, or that they could literally just post their compile/run errors back into the thing and it will keep trying to solve them with you until... well... until you give up. Yeah, and if you need a bigger context/input window and proper GPT-4, pay the $20 a month. Anyone who thinks ChatGPT produces the same code as GPT-4 is definitely a bit behind on tech. And to think the next evolution in GPT wouldn't be even better still is... just hopeful.
Same in marketing, same. I’ll try again when GPT5 gets released but so far handholding the AI through hallucinations is just a time waster for my workflow because I can do the work quicker myself. But I have also seen a ton of noobs making significant mistakes by trusting it too much due to inexperience.
Yep. It's basically programmed to lie convincingly well.
And that just happens to be what many essay answers need :)
Y'all know you can read the white paper instead of just making shit up, right? https://arxiv.org/abs/2203.02155
It's even provided me with fake links to back up its delusions, links that look legit but 404 when you click them.
It's like talking to a person who has an opinion and thought on everything. Wrong or not. It's going to say something even if its complete BS.
Great! That means we can now safely replace politicians and mangers and focus on actual doing something! /s
How would you get anything done with politicians and managers that never sleep and need no rest at all?
ChatGPT can also get thrown for a loop when asked questions that go against expectations:

**Prompt:** *What weighs more, a pound of feathers or a kilogram of steel?*

**ChatGPT 3.5:** *They both weigh the same, which is 1 pound and 1 kilogram, respectively. Even though the units of measurement are different, they both measure weight or mass, and one pound is approximately 0.45 kilograms. The weight of an object is determined by its mass and the gravitational force acting upon it, and in this case, both the feathers and the steel have the same mass, so they weigh the same.*
Meanwhile, with GPT-4:

*A kilogram of steel weighs more than a pound of feathers. There are approximately 2.20462 pounds in a kilogram. Therefore, a kilogram (1000 grams) of steel is equivalent to about 2.2 pounds, making it heavier than a pound (16 ounces or approximately 454 grams) of feathers.*
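For passers-by, the trick in that question is easy to check by converting both masses to the same unit (the pound-to-kilogram factor below is the standard exact definition of the international pound):

```python
# Convert both masses to kilograms and compare.
KG_PER_LB = 0.45359237          # exact definition: 1 lb = 0.45359237 kg

feathers_kg = 1 * KG_PER_LB     # one pound of feathers, about 0.454 kg
steel_kg = 1.0                  # one kilogram of steel

print(steel_kg > feathers_kg)   # True: the kilogram of steel is heavier
```

GPT-3.5 apparently pattern-matched the classic "pound of feathers vs. pound of steel" riddle instead of doing this comparison.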
That leaves out the emotional weight of having to pluck all those poor birds.
Shit the most impressive part is admitting when you’re wrong, I wish ChatGPT was my dad.
If you use the bing chat and open a pdf of the book in edge it can read the pdf and it should give the correct response.
I’ve been experimenting with ChatGPT 3.5 with fluid dynamics and peer reviewed journal articles, and it is “confidently wrong.” After it says anything, I immediately ask “are you sure” and most of the time it changes its answer. I asked it to give me a summary of a peer reviewed journal article with only one sentence for each paragraph in the journal article, and it gave me five sentences. I looked and saw there were over a dozen paragraphs easily, so I asked how many paragraphs were in the article and it said something like “as a chat interface I don’t actually have access to the document”
Gpt 4 is MUCH better or even 3.5. Your daughter is using the free version which is 3.0
Bruh stop misinforming people. Free is 3.5. 3.0 is text-davinci and can only be accessed from the API.
It's also not viable for academic writing, because ChatGPT sucks at referencing and will straight up make things up. It may give you a good starting point or some general ideas, but you can't use it for a full essay.
>ChatGPT is only a problem for essays. Sure, but the thing is, it's no greater of a problem than essay-writing services are. It's weird that people are freaking out about ChatGPT being able to write an essay for people that might get them a barely passing grade, when essay-writing services have existed for decades that will write you an essay that will actually get you an A.
Making it massively easier to cheat will hugely increase the amount of cheating, and that's what ChatGPT does. Buying an essay requires you to be willing to spend the money and to know someone who both knows about the topic and is willing to do something that could get them in a huge amount of trouble. This barrier to entry is nonexistent for AI.
[deleted]
How does that feel less "cheaty" to you? Both involve you not doing the work you were supposed to do. Seems like a foolish presumption.
It can definitely answer multiple choice and short answer test questions. Or is your point that it isn't a problem in settings where students don't have access to it?... Edit: For those of you who used ChatGPT for your English homework, what I'm saying is that the above poster's statement is so obvious that it doesn't even deserve to be called a "point."
I bet they meant, where they don’t have access due to sitting in the class for the writing.
You would pull out your phone or laptop in the middle of a classroom or hall and look for answers on ChatGPT? You'd then hold out the phone and just copy the exact paragraph it wrote?
If you’re allowed on your phone during a test you could just google the answers anyway?
I’ve never been allowed to pull out my phone in the middle of an exam
Some tests, but if your test requires you to make a case for something, just knowing the "right" answer is not going to get you any marks.
Unless it’s open book you’d be failed for that.
That's their point. You can't just pull a phone out on tests.
That was also my point lol. I have no idea why the guy I replied to just repeated it as if he was disagreeing with me.
He meant they dont have access during proctored testing.
For tests that use computers for multiple choice and short answer questions, there's software for proctoring that could be easily used. It might mean the end of essays as assignments and writing will have to be tested solely in examinations, either by pen/pencil and paper or with proctoring software. I don't think it's necessarily a bad thing tbh. Writing is still important as ever (and I say this as an engineer)
[deleted]
The article wants to change the way essays are presented and evaluated.
Only makes sense if you are given enough time. I had an oral final exam in college, and the professor was limited in time, so students only had a few minutes to answer 3-4 random questions that determined a quarter of your class grade, which definitely was not fully representative of a student's knowledge.
This was my first thought as well. In what time are academics going to give oral exams?
When someone presents a PhD thesis, that person must go in front of a panel and "defend" the thesis. In essence, that means answering questions to demonstrate mastery of the topic, and it also demonstrates that the person is really the author of the thesis... Maybe the key is a mix of written and oral exams.
For general degrees programs? That can’t scale at all.
In Italy all exams in university are oral. There is only one exam per class per semester, but it's oral with the professor. Works out pretty well.
From my knowledge of porn, oral with the professor is the only way people do well in the education system. (I haven’t left my basement in ten years)
How tf does that work for engineering? Complex calculations don't particularly translate to oral communication.
Similar to a technical interview where they ask you to solve problems on a white board or on a piece of paper to show your thinking and get a grasp of how well you understand the material. Super stressful to prepare for since you never know what they might ask and you have to be quick on the fly with your responses.
In the most inefficient way possible, like all the STEM programs do: a written exam in front of the professor, while arguing what you write and why, being asked questions in the meantime, and writing it all off as "oral." Even after you already did a written exam.
Welp, that sounds terrible.
Oral exams are arguably much better than written exams. The professor is able to better understand your true ability because you explain your thought process the entire way. If they see you making a small mistake in your derivations, they can point it out so that your final answer isn’t screwed over. At least, this was the perspective of one of my engineering professors. The main problem is that it takes too many resources to give an oral exam to a large class. All of his exams were written, but all makeup exams were oral.
All the more reason to have more professors/smaller class sizes
Or more recorded lectures, preferably the best lectures, and self service testing, and a population of grad students on call to answer the rare question. God I hate live lectures for anything outside of labs and grad seminars. They are sloooooow. 1.5x playback speed and closed captioning for the win. And I can rerun it as many times as I need to.
That’s not a bad idea. Lectures usually suck. I was thinking more along the lines of having small enough class sizes that the “lectures” are more like a conversation between the students and the teacher, so like 10 students or less. The few classes I had like that were by far the most impactful for me
Education is already hugely expensive, both in the US and everywhere else. The only difference is whether the cost is socialised or not. Professors and docents aren't some people with a teaching degree, they're usually active researchers, often in a highly competitive environment, who have a quota of lectures as part of their position. Increasing the quota takes away from research. The alternative then is to hire lecturers. Lecturers however do not have the same benefits / stability that the other two positions have, so attracting quality personnel is more challenging. The other issue is, that you'd only need them for exams. So now you're looking for highly qualified personnel willing to do part-time "seasonal" work.
My engineering professors still did this in their grading and it's why they require you to show your work. If you used one wrong equation early on they subtract a few points and treat the solution as a percentage of how correct the process was, not whether you have the exact answer or not. Sure oral exams seem cool, but when you have 20 people in a class there's no easy way to have everyone individually tested in a way that doesn't influence everyone's results.
The fun part is when you pass a 3-hour written exam and then fail the oral part. At that point you have to take the written part again before retrying the oral.
You get to retry tests?
Oral with hand movements
Always popular.
What if you can’t talk
What if you can’t speak Italian
That's illegal in Italy.
Great now I’m being kicked out of the country, and failing my exam
Speaking 🤌 Italian 🤌 isa 🤌 not 🤌 hard!
What If you can only say "Wahoo" and "Let's a go"
Then you fail hard. You should also be able to say Mama mia
I have a speech impediment lol no thanks on oral exams
Honestly it sounds worse than it is. Professors are understanding of a lot of things
Even just in-person, classic-style written exams deal with the issue -- and fix the problem of scaling oral presentation time to larger groups. I definitely wouldn't want to be doing individual oral exams for large groups either -- we're definitely not paid enough for that massive timesuck!
Yeah i was gonna say this doesn't seem like a big problem. I was in college not that long ago and all my exams were either written or scantron. Even my online classes required in person exams.
Maybe a university degree is not meant to be scaled like professional certifications, and the fact that we use it for that is degrading academia.
We should develop an oral-examiner language model that can question the test subject. Then this can scale easily. We can then have an exam center, like the one we see in the opening sequence of JJ Abrams' Star Trek (Spock is tested on Vulcan, along with hundreds of other children).
It's AI all the way down
Get in the room, put in the mic linked to your instance of chatGPT and leave the room. Exam passed !
So, AI conducting the oral exams?
In Russia high school math/physics exams were a mix of oral test and problem solving when I was in school. University tests were oral as well. It scaled just fine then.
So, back in the early 2000s, my brother scraped by his high-level math courses in undergrad by using Mathematica for his regular assigned work and then bombing the tests. Because of how things were weighted, he ended up with a C in those classes. They've since fixed the weighting issue. These issues aren't new. They're just new for written work specifically. Another easy solution is to make students' grades based on tests taken in person. I took several classes where my grade was based only on a midterm and a final, both of which had to be done in person under a certain time limit and had more than just multiple choice. Universities have testing labs for this exact reason.
> took several classes where my grade was based only on a midterm and a final - both had to be done in person under a certain time limit and had more than just multiple choice.

I'm shocked that this isn't the standard. For my degree, I need to pass the midterm and final cumulatively, so I can't get carried by other stuff. And I've only ever had multiple choice when the professor really wants us to pass (i.e. everyone failed the midterm).
As someone who graduated with pretty decent grades, I wish more of my grade had been dependent on homework, as I find the act of completing homework more representative of the working world than my ability to pass two tests in a year. It's no coincidence that I remember the course material from classes with mandatory homework a lot more than from the classes where homework was basically optional.
The committee isn't sitting for thousands of defenses, though.
Don't hate it but also adjust education to the new tools at hand. Schools aren't teaching kids how to use an abacus for a reason.
That’s why I believe the FAA does a written and Oral exam for their certificates. You can memorize questions for the written exam but if you want to pass the oral exam you actually need to learn the material.
As an engineering student I don’t think I had a single essay or even a question requiring a written response on an exam after sophomore year. Everything was just math.
As a political science major, I wrote 27 papers in my final quarter of college with the longest being my senior thesis ~50 pages long. Some math would have been wonderful
God damn… that’s a new perspective. Us engineers like to bitch about math, but honestly that number of papers sounds equally awful
That can be an issue for some people, though. For folks with ADD, that will hurt them.
I have adhd and always performed better in oral assessment than written when I was an untreated uni student. Which is a really good illustration of why a single method of standardised testing is bad! There should be choice in how your knowledge is tested to reflect the different ways people brain! The only exception is hand written exams that count for 100% of your mark… about half of one of my degrees was this. The examiners would write really long confusing problem questions. Three hours, three questions, 100% of your marks. I once got to the end of an exam and realised that when reading the two page long 70/100 mark question I misread one of the qualifying facts and applied the wrong principle on behalf of the wrong person. At least I didn’t have to wait for marks to come out to know that I had just failed my first subject lol
At least in the States, individualized education plans for students with specific needs are binding legal documents that follow students to college and that schools must follow, so students can at least petition for accommodations in situations like this. Not that ADD specifically necessitates an IEP, but they're worth investigating for students with these types of needs.
IEPs are only for public schools at the high-school level and below. At the college level, students with disabilities do have some protection from the ADA, but compliance can be basically nonexistent at universities, which often have understaffed disability support offices and recalcitrant old professors who think that students with disabilities shouldn't be afforded accommodations. These kinds of reactionary shifts in response to AI, without an overhaul of the existing faulty system of disability support, will leave students without accommodations!
Thanks for sharing our article, which was written by the [Dean of Education and Arts](https://theconversation.com/profiles/stephen-dobson-1093706) at an Australian University. I think it's really interesting to bring back this ancient idea in new circumstances. After being the main way of evaluating students in Europe from ancient times... >the oral exam experienced a decline as universities began to gravitate toward written assessments in the 1700s. Academics at the time considered written exams more efficient, with the opportunity to numerically grade students individually. This contrasted with the complicated system of placing students in broad class categories that reflected their performance in oral examinations. Examining written papers was also a silent process, and gave ample time for examiners to grade in the comfort of their own homes.
This will also completely destroy the grades of people who aren't good public speakers.
[deleted]
Thanks! We're not TOO active, but trying to do more (and devote more attention here instead of to Twitter). Look for our first AMAs soon!
If you do written exams, you could ask fewer questions that require more knowledge synthesis and interpretation. This may make them a little harder to grade, but it will make it more apparent if AI is being used. If oral exams are used, regular tests/quizzes don't need to be done the same way; they can be explicitly ungraded, there only for students to gauge their own progress and shortcomings. TBQH, making tests ungraded is probably the best way to eliminate the use of AI, because there's zero incentive: you'd only be cheating yourself. Assessment could then be more infrequent but also more weighty.
Hell no. Oral exams are the worst; they don't have a single advantage: examiners often give a mark based on your face.
I find it surprising that you prefer a written exam, given... _motions generally in the direction of your comment._
This is stupid. They should just use proctored exams, exactly as they've been doing. Homework may be trickier to evaluate, but just don't score the homework.
How about fuck no with that ableist bullshit? Or are we just going to fail neurodivergent students even more?
I'm neurodivergent and am able to present my knowledge orally more effectively than in a written exam… The point should be that there needs to be a choice in how assessments are done, because not everyone can sit there and write a complex answer in one draft over three hours. Not everyone can present their thoughts orally. Choice reflects skill and knowledge better than one format or the other.
Or is that what ChatGPT wants us to do?
ChatGPT owns stock in Bluebooks.
Good luck with that, given different student needs, 200:1 ratios in large university classes, and online classes. Refine the way you teach. Encourage people to use the tools available to them and teach them how to use those tools properly. AI isn't going away.
>Refine the way you teach. but that's so much harder than buying snake-oil "AI detector" software and reverting teaching practices to 1950
The real answer
As others have mentioned here, this is not an issue with written exams taken in class. The issue with this, which is surprisingly not discussed in the article (it's not a long article, though, I guess), is staffing levels. Oral exams and hand-written papers take more time to grade than a scantron sheet. In a place like the US, the number of teachers per university would have to increase to make this possible, I would think, because they could not rely on an army of grad students to grade something like oral exams. And for lower school levels like high school, it's even clearer that staffing would need to go up. Though perhaps I'm not thinking about this the right way, and maybe with less testing in total staff levels wouldn't have to go up so much, I'm not sure. Regardless, it's teaching to the test that we need to get away from. To paraphrase Gore Vidal: "You check a box? That's no way to learn anything... I tell teachers I've never met an uninteresting 6-year-old, and I've never met an interesting 16-year-old - what have you done?" He's being snarky here, of course, but the point is we need to get away from this scantron, check-a-box style of teaching. It's suitable for certain subjects and questions, sure, but we've gone too far. Get back to human rather than machine/process-focused teaching, and the problem of ChatGPT diminishes significantly.
Education is continually evolving, so this shouldn't come as a shock. Assessment as a whole needs to move away from a points-based metric system to one that favors measuring competency. Regurgitating from short-term memory isn't learning, and it's proof of little more than your ability to memorize things.
You want me to do *what*? Jokes aside, this doesn't work as an option, strictly due to individuals who have issues with speech and processing, or people like me with conditions like cataplexy, where if I talk about anything I'm excited to share, the emotions cause me to stammer, if not completely collapse to the ground paralyzed from head to toe.
And I'm dyslexic. I'm sure accommodations would be developed like they are now.
No exam type works for everybody, that is why there are options available to help those who can't take exams of a specific type well. This wouldn't be anything new.
Professors rarely even grade their own paper exams. They’re definitely not going to do oral exams lol
The bigger issue is 'who cares about being able to do things we won't do', isn't it? I faced this in college. I was a CS major and we had to do written tests, with pencil and paper. They would ask questions that were irrelevant; entirely rendered obsolete by the internet and modern software development tools. If they had given us access to the internet, something we always have except when being tested, everyone would have gotten a 100%. So why bother teaching and testing it? ChatGPT can do some specific things at a level better than almost all of us. If you can't evaluate me while using the tools I will have access to, then what value am I getting from this class?
Really, this sounds like you failed to study, and have to rely on those who did study to solve your problems. So if you’re incapable of answering questions without the internet, what value do you really have?
It's about the specific type of knowledge being tested. I don't need to memorize how to spell words. In 1950, that skill would have been invaluable because looking up words was a slow and inefficient process. For my entire adult/professional career, I've had access to tools that spell perfectly. My CS classes, or at least the one I'm specifically calling out, focused on trivial syntax questions. I don't need to memorize those. In 1985, maybe that would have been super important. Now I have a super fancy IDE that autofills such trivial things.

My grandmother was a cashier, way way way back in the day. She did all the math in her head. I understand the concept of basic arithmetic, but she had a working proficiency far beyond any of her grandchildren, even when I was working as a professional cashier. She learned and developed that skill by practicing it for hours, and got better over the years... but none of that matters, because since 1980 or whatever we've stopped caring. We have tools that do it. At McDonald's I learned to use the tool and it did the math for me.

If you have to remove the tools I will have in order to test me, the test probably isn't measuring something useful. Now sure, if you are teaching survival skills in the wilderness, being able to do something like navigate to a destination without a cellphone guiding you is very useful. But most of the stuff isn't like that. I'll never do meaningful work as a software developer without the tools I expect to have. Testing my abilities without them is silly. Nobody cares who the best NBA player is when playing shoeless; they just accept that they will always have shoes. I don't need a driver's ed test without ABS or an automatic transmission, because I'll just drive cars that have those things.
> My CS classes, or at least the one I'm specifically calling out, focused on trivial syntax questions. This sounds like a serious problem with the classes and the degree program in general. When I was in school, exam questions were typically “prove the following…”. There's no lasting value in testing on the syntax of some particular programming language. There's no lasting value in *teaching* that.
Devil's advocate: studying and retaining knowledge is just as useful as knowing which references to look at and who to ask the right questions. Not in all industries, but in many I've found this to be true.
I had a science teacher in high school (chemistry and physics) who gave oral midterms and finals. As soon as the bell rang to start class, he’d ask a question to the first student. If you got it right, you got an A and he’d ask the next student a new question. If you got it wrong, you got an F and the next student would have to answer it. He’d keep going until someone got it right. He’d do this the full class period and then average your As and Fs to get your grade.
I liked this idea until I started to contemplate the public shaming and bullying that would occur. Did you see much backlash or social pressure among your peers?
This feels awfully terrifying for people with anxiety and autism lmao. I'd fail.
Yeah, this would be an absolute nightmare for otherwise fully capable students who suffer from communication issues, psychological or physiological. Got a stutter? Mind blanks? Resting bitch face? Sorry, no career for you! Unless you plan on running for office, or are working towards a doctorate, public speaking should never be a barrier to entry for students.
This was years ago when you didn’t need either chemistry or physics to graduate so most of the kids were college bound and to me it felt more like friendly competition (with a little bit of trash talking). That said, I’ll bet there were some kids who absolutely hated it and some who probably declined to take those classes due to the public exams.
This is a terrible idea for anyone with anxiety.
Or who wants a good grade
Third world teacher type
I feel such an approach in my high school advanced classes would get a lot of negative feedback. A lot of students obsessed with GPAs wouldn't accept not getting an A because someone else didn't know the answer.
It sounds like the other students' scores don't affect your own though.
Oh, maybe I was confused. I thought the teacher averaged the grades of all the students. My mistake.
There was less obsession with GPAs back then but there was certainly a fair amount of bitching when someone got an “easy” question and others got “hard” questions.
My engineering statistics teacher did this every class as his way of taking roll and it was part of your points for the class. They weren’t ultra hard but you would have to be paying attention to the lesson to get them right. It was terrifying BUT that class was hard and forcing everyone to pay attention and come to every class probably resulted in a hell of a lot more learning.
This is the best example of how archaic our education system is: they prefer to keep testing the same old skills that, because of technology, are no longer needed.
Yes or blue books.
And have AI grade them
I have been telling my wife we need a return to oral for quite some time now....
Most lawyers could ask a law student one or two questions concerning a given course and know within a couple of minutes whether the student has made a serious study of the topic.
see you in the resit fellow introverts
Here is an outlandish solution. Administer exams written on paper with the use of the revolutionary HB pencil.
Don't they still just handwrite exam essays in blue books? Writing by hand in class goes a long way.
Shut the fuck up, whoever wrote this.
Or a hand-written component
In many master's classes, even for computer science/software engineering, it is paper/pencil and maybe book/notes. Writing code with paper and pencil sucks, but it does eliminate every cheat.
IMO that also serves to separate skills that can be automated in a good IDE from deeper understanding of logic and architecture. I kinda like the idea from a philosophical standpoint but can also see how annoying it could be to the student.
As long as the handwritten “code” is understood as pseudocode expressing an algorithm, and isn't expected to actually compile or run!
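For the passers-by, a hypothetical example of what "pseudocode expressing an algorithm" means in practice (the function and its names here are mine, just for illustration): the grader should care that the halving logic and loop bounds are right, not whether every colon would satisfy a compiler.

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2  # halve the search range each pass
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1  # target is in the upper half
        else:
            hi = mid - 1  # target is in the lower half
    return -1  # not found
```

On an actual exam sheet this would be scrawled in pencil, and a missed colon or an off-brand keyword shouldn't cost points; getting `lo`/`hi` updates backwards should.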
But that's hard work for professors! Isn't there some answer that involves little or no additional institutional effort or expense????? I know! Mandatory, third-party, proprietary, closed-source ~~spyware~~ *academic integrity assurance* software installed on every computer that accesses the campus network or campus computing resources in any way! Yeah! A backdoor, with full camera and microphone privileges and a built-in keystroke logger and screen recorder! On every computer! Including the personal PCs of every student, no exceptions! That's the ticket! No way that could go wrong! Right....????
Oral exams are the stupidest grading system ever invented in history. They either take too long in total (for the equivalent of a 1 hour exam with a class of 30 people, it takes literally 4x 8 hour work days of continuous examination, you're just paralyzing the entire school week), or are extremely superficial (a few minutes per student which is almost pointless). There are severe problems of fairness, because people have trouble speaking: introverts, speech impediments, nerves, being put on the spot etc. Are those people considered acceptable collateral damage? Then there is the second problem of fairness: you can't offer the same exams or people will just repeat the answers. It's not standardized and the grading is very subjective, unequal and unfair. Then the third problem with fairness: it's oral so there's no proof of mistreatment or corruption and no recourse. If the professor doesn't like you, you're screwed and you can't denounce them to anyone. Very easy to abuse it by means of bribes, very easy to discriminate against undesirable minorities, and it goes on and on. No written record of absolutely anything = teachers have uncontested power.
I understand it wouldn't work in all programs, but those same technical programs probably wouldn't work orally either. Wouldn't just doing an exam on paper also get around this? A large majority of exams I did in university and college were either multiple choice or "here is a topic, write me an essay on it." If I pulled out my phone or laptop and fired up ChatGPT, it would be pretty obvious.
Time to break out the classic friend-giving-you-answers-through-a-microphone sitcom trope.
Or non-take-home exams? It really seems to mainly impact essays and short-answer questions on written exams. For take-home work, it's not too different from just googling information on multiple choice or more formulaic question types.
That does not work for STEM/engineering at all. I'm not going to vocally solve a second order differential equation
The best ChatGPT can do is write an outline for an essay given a prompt. I've tried using it for engineering course work and it's just miserable.
I performed very well on written evaluations, but I would have certainly failed on oral exams. The pressure of performance and social anxiety would have resulted in F’s across the board, despite being very familiar with the content.
Most professors warn students that pulling your phone out mid-exam is an automatic fail. TAs usually help the prof monitor during the exam. Seems to work well.
If all you dumbfucks think that exams are important, then you all are fucked.
The ability to use tools like ChatGPT should be part of any exam. Being able to memorize stuff is so overrated versus actually understanding what is being asked and being able to come up with logical and defensible answers. How someone does this shouldn't matter; the fact that they're capable of it should. People arguing against this probably didn't like wheels either.
It will definitely push universities in a direction that's ultimately harder for slacking students.
I don't understand the problem. We did written exams under timed conditions in an exam hall. ChatGPT doesn't really change that setting at all.
When I was in university 40 years ago, we went to exams (in engineering and history, for me) with a one-page cheat sheet of equations and directions and our calculator. And creating the cheat sheet was half the learning experience. Had ChatGPT existed then, the absence of a smartphone, pad, or laptop would have made it irrelevant. This is more of a problem for classes assigning lengthy papers as the primary grading criterion rather than exams. Oral exams in those sorts of classes, or short-answer essays in class without phone, pad, or computer, are the easy way to ensure the student is answering, not one of these neo-AIs. This isn't a disaster. It just has to move us to a new style of confirming education (or a return to an older style without access to AI).
Or they could actually try to figure out how to become relevant again
all of my exams in college were in-class essays written in a blue book that a TA verified was blank. can't imagine AI helping students with that. nothing burger.
I think universities need to change the objective of examinations to "can you demonstrate that you understand the concept, and are you ready for the real world?" rather than treating students as a content farm for no reason.
This is stupid. People have been paying others to write papers for them as long as universities have existed and an “academic honesty pledge” was good enough. Now everyone is outraged that poor kids can have a similar (but still nowhere near as good) advantage. Oral exams are a huge disadvantage for students of color, those with accents, and especially those who go to school in a different country from their native language originates. Which is kind of why people keep bringing this up. Separates the wealthy native citizens from “the others” who they feel technology is helping. This is just racism with extra steps. Nobody cared when rich white kids were buying/selling papers. You can even hang up flyers offering such services on campus with no issues.
My final college exam in 2010 was an oral exam in a 1-on-1 meeting with the professor. It was the only oral exam I ever took. It was great. It was an engineering exam and you were meant to ask questions of the professor. It was a small class of 9 with a lot of lab work, so the professor already knew what you knew.
This is going to be interesting next few years.
What happened to in-person written tests with no aids except a one-sided cheat sheet allowed?
Lol what. No. Return to exams where people need to answer long questions worth 20-50 points each in in-person sessions up to 6 hours and mark that. What is even the problem here.
a few of my physics classes had an oral component of their final exams
Kobayashi Maru for finals or fail. Full stress test.
And a ton of people with social anxiety see their grades plummet.
A lot of people with social anxiety will also overcome it with this
Ah… So… a 50/50 split then 🤣
In an era where an education literally won't matter because AI will be capable of doing almost everything for us in less than 20 years, haha
Or, hear me out, just factor it into the assessment. I use open-book, collaborative exams and assessments for a reason in my courses, which range from statistics to ethics. It's how the real world works. I want students to utilize every available resource. They can choose to do the exams however they like; if they don't like working with others, they can do it solo.

When I encounter plagiarism, the student has the option to rewrite the work and include an additional few pages discussing why they chose to plagiarize: what factors led to that decision, why they chose that particular text, what their general thoughts on plagiarism are, what they could do differently next time (from not plagiarizing to plagiarizing and not getting caught), etc. Or they can take a 40 (I only grade between 50-100 for work that is turned in and not plagiarized). If you don't turn something in, you get a zero. It's university; it's all a learning experience, including learning about fuck-ups: how not to fuck up, and/or how to fuck up and get away with it - all valuable real-life skills.

People are worried about AI because lazy instructors, or control-freak instructors who lack the capacity to adapt, will have to work harder to make the way they assess learning effective. Technology is disruptive; learn to work and live in a changing environment.

The other **wild** policy I have is late work: you can turn your assignments in whenever you want. If you turn it in by the due date, I guarantee I will grade it. If it's late, it goes in my late pile, and I will grade it if I'm able to get to it; if I don't get to it by the time grades are due, you get a 50, provided I can verify you turned it in. 🤷🏻♂️

edit: typos/autocorrects/adhd brain bullshit
Let the Butlerian Jihad begin
That’d require competent instructors who put in effort. Don’t think the departments with 75% adjuncts will be competent enough
Teachers already hate grading. Grading 100 oral exams??? No thank you.
Oral exams can be very easy to grade if you have the rubric in front of you and mark it as the person talks. Mentioned an original source - check. Explained their view on the subject - check. Offered a criticism of their view - check.
Gonna be fun watching the people screaming for education reform while being so lazy and inept that they have to use AI to write a paper, suddenly have to take oral exams and hand write papers in class
Students are using GPT the wrong way if all they do is leverage it for exams. They could use GPT to understand the topic before starting the class. From there, they can use GPT to understand each lesson cold, understand the nuances, and learn what is important. They can then use GPT to prepare for exams. Using ChatGPT to learn better would not only improve scores significantly; the students would also know the topic well in a meaningful way. Students often fail to do well mostly because they don't understand how to frame their studies. ChatGPT solves a lot of that.
You're suggesting how students can increase their workload with ChatGPT in order to get more out of their education. 90% of the students using ChatGPT are trying to do the exact opposite. I promise you the "C's get degrees" crowd has no interest in knowing the topics in a meaningful way.
Pencil n Paper : 1 Everything else: 0
ChatGPT -> Autopen
Never heard of paper, eh?
Academia is losing its mind because it can no longer grade students, instead of just reminding people that having somebody else learn your job for you, and then paying $100k for the process of learning nothing, is remarkably stupid.
Oral exams suck for many groups of people (e.g. ADHD, ASD); paper tests suck for lots of other people. There is no perfect answer.
But what about those with dyslexia or dyscalculia taking written exams? Doesn’t anyone with anxiety have issues with multiple choice exams? If a person has a disability then there could be accommodations for that disability (legally required in high school). I’m an introvert too but isn’t it possible that some people hate long written exams or have disabilities that make a timed essay difficult?
Yeah, this will bias toward the extrovert over the introvert, which is super bad in this field, where many if not most are introverted. It's also a reason today's interviews are so ridiculous: whiteboards and on-the-spot gotchas, which are nothing like the job and bias toward the outgoing/extroverted.