Project description

Advancements in Artificial Intelligence (AI) and the rise of Large Language Models (LLMs) have led to the widespread adoption of AI tools across many aspects of daily life. However, concerns about the trustworthiness and ethical implications of these models are growing, driven by their black-box nature and limited ability to explain their decisions. This project aims to leverage argumentation techniques for in-context learning in LLMs in order to develop secure, reliable, and trustworthy AI systems. These techniques seek to ensure the correctness of LLM behavior and enhance user confidence by making the models' decision-making processes transparent and grounded in cause-and-effect relationships.
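
To give a concrete flavour of the idea, below is a minimal sketch of what argumentation-style in-context prompting might look like. This is an illustration only, not the project's method: the function names (build_argumentation_prompt, query_llm) are hypothetical, and the exact argumentation formalism used would be a design decision within the project.

```python
# A minimal sketch of argumentation-style in-context prompting.
# `query_llm` is a hypothetical stand-in for whichever LLM client
# the project ultimately uses.

def build_argumentation_prompt(claim: str, evidence: list[str]) -> str:
    """Scaffold a prompt that asks the model to argue for and against a
    claim before committing to a verdict, making its reasoning explicit."""
    lines = [
        "You are a careful reasoner. Evaluate the claim below.",
        f"Claim: {claim}",
        "Evidence:",
    ]
    lines += [f"- {item}" for item in evidence]
    lines.append(
        "First list arguments supporting the claim, then arguments "
        "attacking it, then state which arguments defeat which, and "
        "only then give a final verdict with a one-sentence justification."
    )
    return "\n".join(lines)


def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real client."""
    raise NotImplementedError


if __name__ == "__main__":
    prompt = build_argumentation_prompt(
        claim="The patient should be prescribed drug X.",
        evidence=[
            "Drug X reduced symptoms in two clinical trials.",
            "The patient reports an allergy to a related compound.",
        ],
    )
    print(prompt)  # inspect the scaffold; pass to query_llm once wired up
```

Because the model is prompted to lay out supporting and attacking arguments and the relations between them, its final answer can be audited against that explicit argument structure rather than taken on faith.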

Assumed knowledge

Basic understanding of prompt engineering


Note: You need to register interest in projects from different supervisors (not multiple projects with the same supervisor).
You must also contact each supervisor directly to discuss both the project details and your suitability to undertake the project.