Prompting Techniques That Make a Difference: A Follow-Up to The Spice is in the Prompt
- Laura Gavrilut
You might remember our previous blog post, The Spice is in the Prompt, based on Eliot Salant's Medium article on enhancing SQL query generation with large language models (LLMs). Good news: Eliot Salant has published a second article, packed with fresh techniques that push the boundaries even further.
In the first article, Salant explored how storing user question/SQL query pairs in a vector database — and retrieving them via similarity and keyword matching — could help construct multi-shot prompt examples for NL2SQL tasks. This approach significantly improved the LLM’s ability to generate complex SQL queries, especially for JOIN operations over the data. But even with these gains, some queries still required human intervention to correct or rewrite. That’s where the second article steps in.
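The retrieval-and-prompting loop described above can be sketched in a few lines. The article uses a vector database with embedding similarity plus keyword matching; the snippet below substitutes a simple word-overlap score purely for illustration, and all names (`EXAMPLE_STORE`, `build_prompt`, the sample schema) are hypothetical.

```python
# Simplified sketch of multi-shot prompt construction for NL2SQL.
# A real system would store embeddings in a vector database; here a
# Jaccard word-overlap score stands in for embedding similarity.

EXAMPLE_STORE = [
    {"question": "Which stations recorded PM10 above 50 yesterday?",
     "sql": "SELECT station FROM readings WHERE pollutant = 'PM10' AND value > 50;"},
    {"question": "What is the average NO2 level per station?",
     "sql": "SELECT station, AVG(value) FROM readings WHERE pollutant = 'NO2' GROUP BY station;"},
    {"question": "List all stations in Murska Sobota.",
     "sql": "SELECT name FROM stations WHERE city = 'Murska Sobota';"},
]

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of lowercase word sets -- a stand-in for vector similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def build_prompt(user_question: str, k: int = 2) -> str:
    """Retrieve the k most similar stored pairs and format them as multi-shot examples."""
    ranked = sorted(EXAMPLE_STORE,
                    key=lambda ex: similarity(user_question, ex["question"]),
                    reverse=True)
    shots = "\n\n".join(
        f"Question: {ex['question']}\nSQL: {ex['sql']}" for ex in ranked[:k]
    )
    return f"{shots}\n\nQuestion: {user_question}\nSQL:"

print(build_prompt("Which stations had PM10 readings above the limit?"))
```

The resulting prompt, ending in an open `SQL:` cue, is what gets sent to the LLM so that the retrieved question/query pairs steer its generation toward the local schema and query patterns.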
The new techniques focus on minimizing the need for manual corrections. As SQL queries grow more intricate, they often become too complex for non-expert users to construct manually — and too nuanced for LLMs to handle reliably without guidance. Salant’s second article benchmarks improvements in query accuracy and introduces strategies that help LLMs better navigate these challenges.
These advancements aren’t just theoretical. The article highlights improvements made to IBM’s RAG-enabled chatbot, developed under the DS2 Horizon EU project. In the Murska Sobota pilot, the chatbot is being used to support air quality monitoring and policy-making. By leveraging environmental data shared through the DIH AGRIFOOD Dataspace, the system provides citizens and decision-makers with deep insights into air quality in the city of Murska Sobota.
Prompting techniques are not only about clever syntax; they are about empowering systems to deliver real-world value. Whether it's helping users write better SQL or enabling smarter decisions about public health, these techniques are making a tangible difference.