AI as a Greek Tutor

I have a subscription to Google’s Gemini Pro, but I believe this functionality is possible on other platforms.

When I get stuck on an exercise, or when I complete exercises without an answer key to check my translations, I can take a picture of the relevant text and ask the AI to translate it. At least with Gemini, I get a translation in the original word order, then a version in natural English, then some helpful notes on interesting features of the grammar (e.g. a participle of cause or attribute or whatever). So far I’ve not found much to quibble with. But my skills are humble at best.

As with other applications of AI, I suppose, there’s a risk of short-changing the learning process, but it seems useful as long as you stick to your own “on-board puzzler” before asking for an evaluation.

I’ve had good experiences using AI as a coding tutor in this same way, and I think it can be just as effective for Greek.

I wonder what others are experiencing along this vector.

1 Like

Hi,

My experience with AI is that it is unreliable for Ancient Greek. Almost every time I ask questions, it returns errors. I am mostly looking for material I can set as exercises, or tables of declensions, etc., and these often contain errors. I don’t ask about translations, as presumably they will only be as good as what the model found on Reddit. Analysis seems pretty hit and miss. I wouldn’t recommend AI as a Greek tutor; one day it might work, but not at the moment. What I find odd is that when I point out errors, it seems very ready to accept what I say. I suppose the maxim is that the customer is always right?

I used ChatGPT as a travel guide in Seville recently and it was very informative. But when I questioned it about the archaeological museum, it failed to tell me that the museum was closed for restoration for a couple more years.

1 Like

I concur with the advice above. AI is far from a flawless method: its ability to translate ancient languages is limited at best, and at worst it is quite capable of mixing up different Greek dialects in a confusing and chaotic way.

However, if you use it as a bit of a puzzle, I can see the applications. Talking to the bot and then “checking” its work for errors would have the double benefit of giving you grammar practice and improving the interface for future users.

1 Like

OpenL can be a help sometimes, but it’s certainly not 100% reliable.

For example, Copilot grafts the aorist passive infinitive ending onto the perfect stem instead of giving the perfect passive infinitive (βεβλάφθαι):

Copilot
“The ancient Greek perfect passive infinitive form of βλάπτω (meaning “to harm” or “to damage”) is βεβλαφθῆναι.”

I’ve found that Claude, at least, makes a fair number of mistakes. One I remember: it misidentified suppletive paradigms for several verbs, in both directions, identifying a form (when asked) as suppletive when it was not, and vice versa. When I pointed out its mistake, it said basically (I don’t have it in front of me): “oh yeah, I did make a mistake.”

I would say it writes with an assurance which it should not possess, which makes it dangerous to trust for somebody who is not an expert.

I use it too because I need help, but despite needing the help, I catch errors all the time. I use ChatGPT because that’s what I’ve found is free. I use it and then check other sources if I can’t tell whether it’s right or wrong. I tell ChatGPT all the time that whatever it’s telling me, usually Greek, is wrong. It tells me sorry and then produces something new that is also wrong, until finally I yell at it and figure I should probably go back to my books.

I’m using AI to generate original Ancient Greek. This is then transliterated into Mycenaean Linear B and the Cypriot syllabary, under programme control. The conversion is executed by human-crafted spreadsheet user-defined functions:

  • Where B1 is ‘Πηλέως καὶ Θέτιδος Γάμος καὶ Κρίσις Πάριδος’:
    B2 = ApplyOrthography(B1)
    B3 = SyllabifyGreek(B2)
    =ConvertSyllables("greek", "mycenaean", B$3)
    =ConvertSyllables("greek", "cypriot", B$3)
    =ConvertSyllables("greek", "linearb", B$3)
    =ConvertSyllables("greek", "roman", B$3)

For example:

The Tale of Troy
by Claude Sonnet 4
Marriage of Peleus and Thetis
function do_Get_PeliusThetis() {
  return [
    "Πηλέως καὶ Θέτιδος Γάμος καὶ Κρίσις Πάριδος",
    "Μοῦσα θεῶν βουλὰς ἄειδε καὶ ἀρχὴν πολέμοιο",
    "ὡς ἔρις ἀθανάτοισι μετ’ ἀνδράσι τε βροτοῖσι",
    "γάμου ἐκ Θέτιδος καὶ Πηλέος ἀντιθέοιο.",
    // etc. … for 35 lines
  ];
}
// Epic Fragment Marriage and Judgement
// Claude Sonnet 4.0 23 July 2025 - 22 Aug 2025
// Here’s an original composition in the style of the Epic Cycle
// covering the Marriage of Peleus and Thetis to the Judgement of Paris:

Mycenaean
𐀟:𐀩~𐀃:𐀮 𐀏𐀂 𐀳~𐀴𐀈𐀮 𐀏~𐀗𐀮 𐀏𐀂 𐀑𐀪𐀯𐀮 𐀞𐀪𐀈𐀮
𐀗𐀄𐀭 𐀳~𐀃:𐀚 𐀡~𐀄𐀨~𐀮 𐀀𐀁𐀂𐀆 𐀏𐀂 𐀀𐀩:~𐀐:𐀚 𐀡𐀩~𐀗𐀂𐀃
𐀃:𐀮 𐀁𐀪𐀮 𐀀𐀲~𐀙𐀵𐀂𐀯 𐀕𐀳 𐀀𐀙𐀅𐀨𐀯 𐀳 𐀡~𐀫𐀵𐀂𐀯
𐀏~𐀗𐀄 𐀁𐀐 𐀳~𐀴𐀈𐀮 𐀏𐀂 𐀟:𐀩~𐀃𐀮 𐀀𐀛𐀴𐀳~𐀃𐀂𐀃.

Cypriot
𐠩𐠰~𐠥𐠞 𐠩𐠪𐠥𐠌 𐠂𐠊 𐠩𐠗~𐠊 𐠩𐠰~𐠯~𐠮 𐠂𐠊 𐠩:𐠃𐠐:𐠟
𐠃𐠂𐠗𐠐𐠡 𐠚:𐠋~:𐠤𐠀 𐠂𐠊 𐠮~𐠂𐠁𐠀 𐠩𐠏𐠄~𐠡 𐠚:𐠃~𐠮 𐠨𐠄𐠗
𐠪𐠂𐠰𐠦~𐠡 𐠮 𐠪𐠣𐠭~𐠙𐠀 𐠮𐠕 𐠪𐠂𐠰𐠙~𐠭𐠀 𐠩𐠥𐠁 𐠩:𐠃
𐠃𐠂𐠃~𐠮𐠯𐠛𐠀 𐠩𐠃𐠐:𐠟 𐠂𐠊 𐠩𐠰~𐠯~𐠮 𐠋𐠁 𐠄𐠗~𐠊 |

The process is described in From Linear B to Mycenaean Epic
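To make the pipeline concrete, here is a toy sketch of the ConvertSyllables idea. The function name and shape are modelled on the formulas above, but the sign table is my own tiny illustrative subset (values taken from the Mycenaean sample), not the author's real spreadsheet functions:

```javascript
// Toy syllable-to-script converter, loosely modelled on
// ConvertSyllables(from, to, text) above. Only a handful of
// Linear B signs are included for illustration.
const SIGNS = {
  mycenaean: { ka: "𐀏", i: "𐀂", ki: "𐀑", ri: "𐀪", si: "𐀯", se: "𐀮" },
  // a real implementation would carry full syllabaries for each target script
};

function convertSyllables(target, syllables) {
  const table = SIGNS[target];
  if (!table) throw new Error("unknown target script: " + target);
  // unknown syllables are passed through unchanged rather than dropped
  return syllables.map((s) => table[s] ?? s).join("");
}
```

For instance, `convertSyllables("mycenaean", ["ka", "i"])` renders καὶ as its two Linear B signs; the real functions presumably also handle the orthographic normalisation and syllabification steps that ApplyOrthography and SyllabifyGreek perform first.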

2 Likes

AI tooling is only as good as the context it is provided. I built a platform (Koine Guide) using OpenAI to provide translation exercises/feedback for the whole NT. But to help make the feedback more specific, I built a grammar and context file for every verse of the New Testament so that when users submit their translations, I pass along the appropriate context file under the hood so that OpenAI can actually provide meaningful and context specific feedback.
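Under the hood, per-verse context injection like this usually amounts to folding the verse's grammar file into the system prompt before the student's translation is sent to the model. A minimal sketch (the function, field names, and prompt wording are hypothetical, not Koine Guide's actual implementation):

```javascript
// Hypothetical sketch: build the messages array for a chat-completion
// request, with the verse's grammar/context file embedded in the system
// prompt so the model's feedback is grounded in that verse.
function buildFeedbackMessages(verseRef, contextFile, userTranslation) {
  return [
    {
      role: "system",
      content:
        "You are a Koine Greek tutor. Evaluate the student's translation of " +
        verseRef +
        " against this grammatical context:\n" +
        contextFile,
    },
    { role: "user", content: userTranslation },
  ];
}
```

The returned array is what you would pass as the `messages` parameter to a chat-completions-style API; the user never sees the context file, but the feedback reflects it.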

I am working on cleaning up all those files now, which is a big reason I joined this community! As I study through the NT on my own, I update my verse files so that all the rest of my platform users can reap the benefit in their own translation efforts! 8)

1 Like

The way I have heard it is that AI tools are like having a really smart, confident, but fallible friend. They know a lot about a lot of things, but they make mistakes. 8)

A lot of those mistakes can be overcome though if given proper context when chatting.

1 Like

To err is human too. Like me and many of my friends, rather than spoil a good story because the facts have been forgotten, it tends to make something up just to keep the conversation flowing.

But prompt engineering, persona definitions, context grounded in NLP (keywords, named entities, summarisation, etc.) and web search, maintaining chat history, and providing the model with source texts as system instructions all help. API access to models offers much better control and returns more trustworthy results than UI chats.
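The chat-history point can be sketched concretely. Over an API you own the state: a persistent system instruction plus a history array that you resend each turn. This toy helper follows the common chat-completions message convention and is not tied to any particular vendor:

```javascript
// Toy sketch of API-side chat state: the caller keeps the system
// instruction and the full turn history, and resends them every call.
// In a UI chat this bookkeeping is hidden; over an API you control it.
function createChat(systemInstruction) {
  const history = [{ role: "system", content: systemInstruction }];
  return {
    // Record a user turn; the returned snapshot is what you would send
    // to the model as the `messages` parameter.
    ask(userText) {
      history.push({ role: "user", content: userText });
      return history.slice();
    },
    // Record the model's reply so later turns retain the context.
    record(replyText) {
      history.push({ role: "assistant", content: replyText });
    },
  };
}
```

Because the history is explicit, you can also trim, summarise, or ground it (e.g. re-injecting the source text each turn) in ways a web chat UI does not expose.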

1 Like

I don’t think I want a friend who is prone to just making stuff up all the time. LLMs can be a good tool, but they hallucinate at a crazy rate, especially when you veer off into technical analysis or information that isn’t easily found online.

That has not been my experience. 8) I have learned a ton using ChatGPT and other LLMs. Scholars hallucinate too. :upside_down_face: Context is king.

LLMs definitely do invent things regularly. If you haven’t observed that, you’re not paying attention.

LLMs can be helpful as long as you think of them as a “search engine” and possibly as a “suggester of new insights.” But the user has to fact check everything. Every single thing.

Just today I was asking an AI tool about one line of the Vulgate: 2 Maccabees 15:17, because I wanted to know what’s up with “exhortati”, which comes from the deponent verb “exhortor”. Is it passive in meaning, or active in meaning? What does the AI say?

2 Maccabees 15:17

“exhortati itaque Iudae sermonibus bonis . . . statuerunt dimicare . . .”

“after being exhorted by the good speeches . . . they resolved to fight”

The AI gave me this wonderful gem, a botched explanation:

‘exhortati: This is the perfect passive participle of the deponent verb exhortor (“to encourage, exhort”). As discussed earlier, exhortor is a deponent verb, so its perfect passive participle (exhortatus, -a, -um) has an active meaning, here translating to “having been exhorted” or “after being exhorted”.’

After I challenged it, it told me the following:

‘The participle exhortati is part of an ablative absolute construction (exhortati . . . sermonibus bonis), which sets the temporal or causal context for the main clause (statuerunt dimicare, “they resolved to fight”).’

That’s obviously wrong.

If you are a newish Latin student and you use AI naively as a teaching tool, you are making a huge mistake. You cannot trust anything it says.

2 Likes

This is silly. If you tell someone that Grayson p. 255 doesn’t say what he thinks he says, that scholar will go look it up and then you can hash it out. ChatGPT will simply continue to argue or make up something else. I know this because I just experienced it with the most recent ChatGPT (5).

Mistakes scholars make are categorically completely different from LLM hallucinations.

Bingo. Early on I asked it to provide some paradigm tables for Akkadian, and I could easily see that the forms were gibberish. I tried it for some other Akkadian translation that I was struggling with, and again, it simply lied.

I agree with you. Anyone not fact-checking the statements is just not paying attention or inept at it.

Totally agree with you. I was just meaning to imply that there is value in using LLMs, but they should definitely not be used naively or blindly. If a model hallucinates, it’s either a model quality, prompt structure, or context issue. Were you using Claude or ChatGPT for that 2 Maccabees 15:17 feedback? What model? Would you mind sharing your whole prompt? I am just curious. I asked ChatGPT 5 the following:

Here is the Vulgate translation of 2 Maccabees 15:17: 
Exhortati itaque Judae sermonibus bonis valde, de quibus extolli posset impetus, et animi juvenum confortari, statuerunt dimicare et confligere fortiter: ut virtus de negotiis judicaret, eo quod civitas sancta et templum periclitarentur.

I am trying to understand the syntactical function of "Exhortati". Is it passive in meaning, or active in meaning?

Think hard. Walk through your logic. And make sure to take important Latin grammatical context into consideration.

And this was its response:

Short answer: passive in meaning.

Why
	•	Form: exhortāti = nominative masculine plural perfect participle.
	•	Verb family: Latin has both exhortor (deponent) and exhortō (regular active). The participle exhortātus can therefore be either:
	•	deponent perfect participle with active sense (“having exhorted”), or
	•	regular perfect passive participle (“having been exhorted/encouraged”).
	•	Context decides. Exhortāti itaque Iūdae sermōnibus bonīs valdē… — the ablative sermōnibus (“by [Judas’] speeches/words”) is ablative of means/instrument, i.e., the thing by which they were encouraged. That fits a passive sense: “having been exhorted/encouraged by Judas’ very good words…”
	•	If it were active (“having exhorted”), we’d expect an accusative object (whom did they exhort?)—but there isn’t one. Instead we have the means by which they were exhorted.

Function in the sentence
	•	It’s a circumstantial nominative participle agreeing with the (understood) subject of statuerunt (“they resolved”), supplying background: “After being encouraged by Judas’ words… they resolved to fight…”
	•	You could paraphrase with a clause: cum Iūdae sermōnibus valdē bonīs exhortātī essent, statuerunt… or with a straightforward passive: exhortātī sunt… statuerunt…

Smooth rendering

“So, encouraged by Judas’s very good words—which could rouse their attack and strengthen the spirits of the young men—they resolved to fight bravely…”

Deponents love to mess with you; this isn’t one of those times. Here, the morphology is ambiguous, but the syntax ties it down: passive sense.

Now, I don’t know Latin… Do deponents always have an active sense? Or do they sometimes convey a passive sense in Latin based on the context?

Thanks for sharing your experience!

I accept the “silly” label. 8) And I completely agree: scholarly mistakes are different from LLM hallucinations. All I was trying to say is that you can learn things from LLMs, but it involves some critical thinking. With LLMs, if you are getting hallucinations, it is because the model quality, the prompt structure, or the context you provided was insufficient.

No, hallucinations are an inherent part of the model, as researchers broadly agree.

“AI hallucinations are a feature of LLM design, not a bug”

https://www.nature.com/articles/d41586-025-00662-7

Even OpenAI themselves have admitted this.

2 Likes

You missed the point. My LLM claimed that this particular deponent participle had an active sense… and then immediately translated it as a passive. That definitively shows the LLM has no idea what it is actually saying. It’s unable to distinguish truth from falsehood from outright nonsense.

If a model hallucinates, it’s either a model quality, prompt structure, or context issue

Nope, this is definitively wrong, and bad advice. Hallucinations are a severe problem and an intrinsic problem. You cannot eliminate them by tweaking prompts or customizing the input or switching to a new model. It’s not accurate to claim “you’re holding it wrong,” as Steve Jobs famously said about one old iPhone version. We are not holding it wrong. AIs hallucinate. They make things up.

AI in its current form categorically cannot detect what is “true” on even the most basic level. It cannot even be self-consistent from one word to another. Unfortunately, sometimes it seems to be able to do this, so even clever users are fooled. Users become complacent and too trusting with it.

The claim that “well, I’ve been using it a long time and it gave me good answers” is not evidence that AIs work well, it’s evidence that the user is careless.

AIs are useful, but only on the understanding that they frequently tell falsehoods that are sometimes overt, and sometimes subtle and difficult to see, but still very real.

2 Likes