Of course teachers make mistakes, but the genre of mistakes that a teacher might make will be very different from the genre of mistakes produced by AI. When AI hallucinates, it usually produces something quite plausible, but totally fictional.
About textbooks, I disagree. Textbooks will not contain the sort of plausible hallucinations produced by AI. Textbooks will have typographical errors or editing mistakes, but comparatively few of them. With AI it’s really difficult to predict how much nonsense will be mixed in, because it varies depending on exactly what is being asked.
It’s a difficult question to answer, but the truth of the matter is that were it not for AI I would not be using this forum; I would have given up on Greek a long time ago and chalked it up as too hard.
Right, but no one claimed you should never use it. We just need to understand that the things it tells us are often quite false, and sometimes insidiously false: they have a ring of truth and look plausible, but they are simply wrong.
It is an appropriate tool for many tasks, but definitely not all tasks.
“Often false”? Absolutely not. I just asked it roughly 40 questions about Xenophon’s Anabasis, book 1. More than 90% of the time it cleared up my confusion about the text by reminding me of grammar rules I was already aware of. For the remaining 10% I of course cannot know whether it is right or wrong, but when I am able to verify the information, it is correct more than 95% of the time.

It just so happens that I cut and paste many of its answers into a text file, then go back later and condense the information so as to practice the grammar rules. I still have not condensed it, so I am sending you the file as a direct message. If the AI is often wrong, then more than 50% of its answers should contain false information on key points. I challenge you to look over this document and prove that more than 50% of the information in it is false.
Also bear in mind that I only copy and paste answers containing information I was previously unaware of, so for every answer in this document there are another 4 or 5 that you are not seeing, all of which contain obviously true information.
No, this is not a good criterion. As I mentioned above, with AI it’s difficult to predict how much nonsense will be mixed in, because it varies depending on exactly what is being asked. AI handles certain kinds of questions well. For certain other kinds of questions, it’s a disaster.
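And “often wrong” does not mean “wrong more than 50% of the time.” Even taking your own figure of 95% accuracy per answer at face value, the chance that at least one of 40 answers contains an error is about 87%. Here is a back-of-the-envelope sketch; the accuracy figures are the ones you quoted, and treating answers as independent is my simplifying assumption:

```python
# Probability that at least one of n answers contains an error,
# assuming each answer is independently correct with probability p.
def p_at_least_one_error(p_correct: float, n: int) -> float:
    return 1 - p_correct ** n

for p in (0.99, 0.95, 0.90):
    print(f"p_correct={p:.2f}: "
          f"P(at least 1 error in 40 answers) = {p_at_least_one_error(p, 40):.0%}")

# p_correct=0.99: P(at least 1 error in 40 answers) = 33%
# p_correct=0.95: P(at least 1 error in 40 answers) = 87%
# p_correct=0.90: P(at least 1 error in 40 answers) = 99%
```

And note that by your own account, the 10% of answers you cannot verify are exactly where errors would go undetected.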
I am not saying you should stop using it. If it’s helpful for you, then keep going. But this experience is not generalizable. Sometimes AI is helpful even when it is wrong, because it gets you thinking about the problem in a new way. But the user needs to stay wary.
AI’s results are often fictional.
The first item in the file is wrong: “τὸ θέρος is correctly in the nominative case in the sentence.” But τὸ θέρος is not nominative, it’s accusative. (With a neuter noun like θέρος the nominative and accusative forms are identical, so the case can only be determined from the syntax.)
The explanation about δόξαι looks suspicious. I’m not sure what’s going on there, but the AI’s account doesn’t make sense to me. I could be wrong.
The explanation of “πρὸς σέ, ὦ τᾶν, ὥσπερ τὸν Κῦρον ὄντα” seems to be wrong. ὄντα agrees with σέ, and σέ is the object of πρός. There is no infinitive εἶναι anywhere in this sentence, so this comment by the AI is not correct: “an implied phrase involving σὲ εἶναι (‘that you are’).”
AI is a black box, not only in the sense that you can’t inspect what the model’s intermediate layers are doing, but also in that you generally have no idea what the training set was. For our domain, a lot of the data has never been digitized, and thus has never been part of the training.
We’re not predicting protein folding here. There are abundant real-world resources that have been used for centuries to achieve fluency in Latin and Greek. Put the time in at the desk and it will come.