LLM As A Function

LLMs have seen a huge surge of popularity with ChatGPT, turning a prompt into text for all sorts of use cases. But what's really exciting is that they are also extremely useful as a way to implement normal functions within a program. This is what I call LLM As A Function.

For example, if you want to build a website builder that will only use components that you have in your design system, you can write the following:

const prompt = 'Build a chat app UI';
 
const components = llm<Array<string>>(
  'You only have the following components: ' +
  designSystem.getAllExistingComponents().join(', ') + '\n' +
  'What components do you need to do the following:\n' +
  prompt
);
// ['List', 'Card', 'ProfilePicture', 'TextInput']
 
const result = llm<{javascript: string, css: string}>(
  'You only have the following components: ' +
  components.join(',') + '\n' +
  'Here are examples of how to use them:\n' + 
  components.map(component => 
    designSystem.getExamplesForComponent(component).join('\n')
  ).join('\n') + '\n' +
  'Write code for making the following:\n' +
  prompt
);
// { javascript: '...', css: '...' }

What's pretty magical here is that the llm calls take arbitrary strings as input but output real values, not just strings. In this case it's using JavaScript and TypeScript for the type definition, but it can be any language you want (Python, Java, Hack...).

How does it work?

The function that we are using is llm<Type>(prompt: string): Type. It takes an explicit type parameter describing the value that will be returned.

The first step is to have introspection / code generation in your language, so that you can take the type you wrote and manipulate it at runtime.
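
TypeScript erases types at compile time, so to make this concrete, here is a minimal sketch of one workaround: passing a runtime description of the type alongside the call. The Schema type and resultSchema value below are illustrative, not the actual implementation.

type Schema =
  | {kind: 'string'}
  | {kind: 'array', items: Schema}
  | {kind: 'object', properties: {[key: string]: Schema}};

// Runtime description mirroring {javascript: string, css: string}.
const resultSchema: Schema = {
  kind: 'object',
  properties: {
    javascript: {kind: 'string'},
    css: {kind: 'string'},
  },
};

With this type description in hand, we are going to do two things: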

We will convert it to a JSON example and augment the prompt with it. For example, in the second invocation the type was {javascript: string, css: string}, so we are going to generate: 'You need to respond using JSON that looks like {"javascript": "...", "css": "..."}'. We are using prompt engineering to nudge the LLM into responding in the format we want.
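Here is a sketch of that conversion, building on the Schema type above (the toExample helper is hypothetical):

function toExample(schema: Schema): unknown {
  switch (schema.kind) {
    case 'string':
      return '...';
    case 'array':
      return [toExample(schema.items)];
    case 'object':
      return Object.fromEntries(
        Object.entries(schema.properties).map(
          ([key, value]): [string, unknown] => [key, toExample(value)]
        )
      );
  }
}

const augmentedPrompt =
  'You need to respond using JSON that looks like ' +
  JSON.stringify(toExample(resultSchema)) + '\n' +
  prompt;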

We also convert it to a JSON Schema that looks something like this:

{
  "type": "object",
  "properties": {
    "javascript": {"type": "string"},
    "css": {"type": "string"}
  }
}
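
The conversion can follow the same recursive pattern; again, toJsonSchema is a hypothetical helper built on the Schema type above:

function toJsonSchema(schema: Schema): object {
  switch (schema.kind) {
    case 'string':
      return {type: 'string'};
    case 'array':
      return {type: 'array', items: toJsonSchema(schema.items)};
    case 'object':
      return {
        type: 'object',
        properties: Object.fromEntries(
          Object.entries(schema.properties).map(
            ([key, value]): [string, object] => [key, toJsonSchema(value)]
          )
        ),
      };
  }
}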

This is fed to JSONFormer, which restricts what the LLM can output so that it follows the schema 100% of the time. The way LLMs generate the next token is by computing a probability for every single token in the vocabulary and then picking one of the most likely ones. JSONFormer restricts that choice to only the tokens that match the schema.
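
To make the idea concrete, here is a conceptual sketch of the restriction step. This illustrates the general constrained-decoding technique rather than JSONFormer's actual API (JSONFormer is a Python library), and the Model interface is hypothetical.

interface Model {
  // Scores for every token id in the vocabulary, given the tokens so far.
  logits(context: number[]): number[];
}

// Greedily pick the best token among only those the schema currently allows.
function nextAllowedToken(
  model: Model,
  context: number[],
  allowed: Set<number>
): number {
  const scores = model.logits(context);
  let best = -1;
  for (const tokenId of allowed) {
    if (best === -1 || scores[tokenId] > scores[best]) {
      best = tokenId;
    }
  }
  return best;
}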

In this case, the first generated tokens can only be {"javascript": ", then the LLM is left to fill in the blanks until the next ", at which point it is forced to insert ", "css": ", left on its own again, and finally forced to end with "}.
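
Spelled out as fixed and free segments, the generation plan for {javascript: string, css: string} looks like this (illustrative):

const generationPlan = [
  {forced: '{"javascript": "'},
  {free: 'model generates until the next unescaped "'},
  {forced: '", "css": "'},
  {free: 'model generates until the next unescaped "'},
  {forced: '"}'},
];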

A great property of LLMs generating new tokens based on the previous ones is that even without the added prompt engineering, once the model sees {"javascript": " it will automatically continue generating JSON and is unlikely to prepend intros like Sure, here is the response.

At this point we are guaranteed to get valid JSON that follows our structure, so we can call JSON.parse() on it and get back the JavaScript object we requested.
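
Putting it all together, here is a minimal end-to-end sketch, assuming a complete() function that runs the schema-constrained generation described above. Both complete() and the extra runtime schema argument (standing in for the erased type parameter) are assumptions of this sketch, not the actual implementation.

declare function complete(prompt: string, jsonSchema: object): string;

function llm<T>(promptText: string, schema: Schema): T {
  const fullPrompt =
    'You need to respond using JSON that looks like ' +
    JSON.stringify(toExample(schema)) + '\n' +
    promptText;
  // complete() only ever emits tokens allowed by the JSON Schema,
  // so the raw output is guaranteed to parse.
  const raw = complete(fullPrompt, toJsonSchema(schema));
  return JSON.parse(raw) as T;
}

const ui = llm<{javascript: string, css: string}>(
  'Write code for making a chat app UI',
  resultSchema
);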

Conclusion

Before we implemented this magic llm<Type>() function, we'd see people adding a lot of brittle logic to try to get the LLM to output things in the correct format: lots of prompt engineering, fuzzy parsing, retry logic... This was both unreliable and added latency to the system.

This is not only a reliability improvement, it unlocks a whole new world of possibilities. You can now leverage LLMs within your codebase to implement functions that return values just like any other function would, but instead of writing code, you tell them what to do using text.

If you liked this article, you might be interested in my Twitter feed as well.
 
 
