Xenophon’s consultation of the Pythia
1. Statement of prayer-question – Xenophon begins by verbally addressing Apollo, asking “to which of the gods should I sacrifice and pray in order best and most successfully to perform the journey I have in mind and return home in safety?” Only once this plea is uttered does Apollo’s priesthood record the god’s reply.
2. Ritual hymn & payment – Like all individual consultants, he had to buy a pelanos (sacrificial cake) and burn it on the altar while reciting the short Delphic paean in Apollo’s honour; the spoken hymn and the offering together signalled respect and opened the way for prophecy.
3. Sacrificial plea – A goat was sprinkled with water; if it shuddered, Apollo was deemed willing to speak. The consultants (or an attendant priest) then voiced a brief prayer “Hear me, Lord Apollo…” over the animal before it was sacrificed. Only after this spoken plea did the Pythia mount the tripod and deliver the oracle.
That is an o3 answer in response to one of my queries, namely whether you had to make incantations to oracles before they would respond. You did! If you scroll down, you will see that the original answer is amended somewhat and improved in accuracy. For instance: “…drop the idea that each visitor had to intone a fixed hymn. At most, priests might intone a brief paean while the cake was burned…”
In any case, you could not do “one shot” with the oracle — you had to put a bit of effort into it. If you simply approached them and asked for a prophecy of the future (and did nothing else) you would get no meaningful response. In contemporary terminology, you needed a bit of prompting.
To return more explicitly to the current day, many people complain about the hallucinations of top LLMs, and indeed those hallucinations are still present. (o3 is much quicker than o1 pro, but probably has a higher hallucination rate.) If you ask them only once, you are more likely to get hallucinations. If you ask a follow-up, and request a correction of errors, the answer usually is better.
Almost everyone evaluates the LLMs and their hallucinations on a one-shot basis. But historically we evaluated oracles on a multi-shot basis. It would be easy for us to do that again with LLMs, and of course many users do. For the faster models the follow-up query really does not take so long.
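The multi-shot pattern described above is easy to automate: draft an answer, then send a follow-up asking the model to review and correct it. A minimal sketch is below; `ask` is a stand-in for whatever chat-completion call you use (an OpenAI or Anthropic client, say), and the follow-up wording is just an illustrative assumption.

```python
# Sketch of the multi-shot pattern: ask once, then ask the model to
# review its own draft for errors. `ask` is any function that takes a
# list of chat messages and returns the model's reply as a string.

def multi_shot(ask, question):
    """Run a draft pass, then a self-correction follow-up pass."""
    # First shot: the plain question, one-shot style.
    draft = ask([{"role": "user", "content": question}])

    # Second shot: feed the draft back and request a correction.
    followup = (
        "Please review your previous answer for factual errors or "
        "hallucinations, and return a corrected version."
    )
    revised = ask([
        {"role": "user", "content": question},
        {"role": "assistant", "content": draft},
        {"role": "user", "content": followup},
    ])
    return revised
```

The point is simply that the second call costs one more round trip, which for the faster models is cheap relative to the accuracy gained.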
Or just start off on the right foot. Marius recommends this prompt:
Ultra-deep thinking mode. Greater rigor, attention to detail, and multi-angle verification. Start by outlining the task and breaking down the problem into subtasks. For each subtask, explore multiple perspectives, even those that seem initially irrelevant or improbable. Purposefully attempt to disprove or challenge your own assumptions at every step. Triple-verify everything. Critically review each step, scrutinize your logic, assumptions, and conclusions, explicitly calling out uncertainties and alternative viewpoints. Independently verify your reasoning using alternative methodologies or tools, cross-checking every fact, inference, and conclusion against external data, calculation, or authoritative sources. Deliberately seek out and employ at least twice as many verification tools or methods as you typically would. Use mathematical validations, web searches, logic evaluation frameworks, and additional resources explicitly and liberally to cross-verify your claims. Even if you feel entirely confident in your solution, explicitly dedicate additional time and effort to systematically search for weaknesses, logical gaps, hidden assumptions, or oversights. Clearly document these potential pitfalls and how you’ve addressed them. Once you’re fully convinced your analysis is robust and complete, deliberately pause and force yourself to reconsider the entire reasoning chain one final time from scratch. Explicitly detail this last reflective step.
I haven’t tried it yet, but it doesn’t cost more than a simple “Control-C.” Perhaps some of you can do better yet, depending of course on what your purpose is.
There is no reason why you cannot ask for better, and get it. Beware those who dump on hallucinations without trying to do better — they are the Negative Nellies of LLM land.
And oh — o3 pro is coming soon.