Unmasking the Mystique of 'Prompt Engineering'

What's the problem with 'prompt engineering'? 

Prompt engineering is a term that’s been all the rage recently. While understanding how to formulate prompts correctly when interacting with LLMs is important, we must be careful not to take a myopic view that overemphasizes the significance of this practice. An outsized focus on prompt engineering distracts us from more important discussions about the purpose and future of artificial intelligence for legal professionals.

Prompt engineering is often treated as a kind of universal language. In reality, there is a multitude of large language models that vary enormously in size, capacity, and capability. Many of the prompt engineering courses out there focus on ChatGPT, which makes intuitive sense given the popularity of OpenAI’s models. However, different large language models have vastly different requirements for how prompts should be formatted, and many lawyers are inadvertently hamstringing themselves by learning only the conventions of ChatGPT. Prompts are not universally applicable, and they are less versatile than you might believe: the same prompt will produce different results depending on a model’s temperature and underlying architecture. So be wary when people claim there is a ‘best’ or ‘perfect’ prompt for eliciting information from an LLM.
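To make the point about temperature concrete, the sketch below sends the same prompt to the same model twice at different temperature settings and prints both answers; the outputs will typically differ, and a prompt tuned for one setting or one model family may behave quite differently on another. This is a minimal illustration using the OpenAI Python client (v1.x), assuming an API key is configured in the environment; the model name and prompt are purely illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = "Summarise the doctrine of adverse possession in two sentences."

for temperature in (0.0, 1.0):
    response = client.chat.completions.create(
        model="gpt-4o-mini",          # illustrative model name only
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,      # 0.0 = near-deterministic, higher = more varied
    )
    print(f"--- temperature={temperature} ---")
    print(response.choices[0].message.content)
```

The same exercise run against a different provider’s model would require its own formatting conventions and would likely produce different answers again, which is exactly why no single ‘perfect’ prompt exists.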

Further, the term ‘prompt engineering’ lends an air of magic and mystique to what is happening under the hood. In reality, large language models take text in and produce text out, and prompt engineering is ultimately about the effective formulation of language, a skill lawyers are already well versed in. Every model has limitations when it comes to knowledge retention and utility, and there isn’t a ‘secret prompt’ that will get you past them.

How significant is prompt engineering for developers creating legal software which leverages generative AI? 

When it comes to newly emerging legal software that leverages generative AI, lawyers should understand that prompt engineering constitutes an incredibly small part of the overall engineering work developers and product managers do to create new and meaningful applications of AI. In fact, much of the pain of ‘prompt engineering’ is increasingly taken care of for the user ‘under the hood’. As an example, every time a user submits a prompt to Habeas, the prompt is reformulated ‘behind the scenes’ to ensure factual and relevant output, as well as to account for any time period limitations the user may have specified in their query.
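As a purely hypothetical sketch of what that kind of ‘behind the scenes’ reformulation can look like in general (this is not Habeas’s actual pipeline, and every name here is invented for illustration), the snippet below wraps a user’s raw question in a system instruction that pins a jurisdiction and date range before anything is sent to the model.

```python
from datetime import date

def reformulate_prompt(user_question: str,
                       jurisdiction: str = "NSW",
                       date_from: date | None = None,
                       date_to: date | None = None) -> list[dict]:
    """Wrap a raw user question in instructions that constrain the model.

    Hypothetical example only: real products combine this kind of rewriting
    with retrieval, citation checking, and other safeguards.
    """
    constraints = [f"Answer only with reference to the law of {jurisdiction}."]
    if date_from or date_to:
        constraints.append(
            "Consider only authorities from between "
            f"{date_from or 'the earliest available date'} and {date_to or date.today()}."
        )
    constraints.append("If you are unsure, say so rather than guessing.")

    # The returned message list is what actually reaches the model,
    # not the user's raw text on its own.
    return [
        {"role": "system", "content": " ".join(constraints)},
        {"role": "user", "content": user_question},
    ]

messages = reformulate_prompt(
    "What's the law on eviction rights?",
    date_from=date(2015, 1, 1),
)
```

The point is simply that, in a well-built tool, this kind of constraint handling happens before the model is ever called, so the user does not have to encode it into a carefully crafted prompt themselves.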

Lawyers should remember that tools like ChatGPT are very much ‘general purpose’ and thus not intended for use in specialist domains with their own requisite discipline and knowledge. In comparison, legal software such as Habeas leverages AI for the specific problems and knowledge requirements of the legal field. Learning to interact with emerging (and existing) domain-specific AI software will become crucial to a future lawyer’s skillset.

So if not prompt engineering, what should a lawyer focus on to get to grips with AI?

Firstly, I encourage all lawyers who use Habeas to also learn about the intricacies and architecture of large language models (and not just ChatGPT). It’s incredibly important to understand how these models come into existence, because it will allow you to come to terms with their use cases and limitations in a more meaningful way. Liz Chase is an Australian thought leader who has written extensively about this in digestible form. I also recommend people start with the research paper “Attention Is All You Need” to understand what a ‘transformer model’ is, how such models are trained, and where they are most useful.

But more importantly, AI is a tool that becomes most helpful once you’ve identified the core problems you need to solve. Don Giannatti is spot on when he suggests that diagnosing, decomposing and reframing the core problem in need of a solution are going to be crucial skills for industry professionals engaging with AI.

For example, a lawyer engaging with Habeas should have identified the legal issues they’re looking to get more information on before they use the platform, as well as which aspects of the legal issue are in contention in their specific case. It’s always preferable to open the conversation with a focused question like “what’s the law on eviction rights in NSW?”, having already identified that this is the core legal issue you need clarity on.

Habeas isn’t built to be given a mass of information and be told ‘solve this for me’. Rather, it’s an incredibly knowledgeable ideation assistant that is best used in tandem with the creative and analytical skills lawyers are trained in from day one. This is why continuing to hone your problem-solving and analytical skills, in conjunction with LLMs as a ‘rocketship for the mind’, is the best thing you can be doing as a lawyer moving into the future.