
Generative AI & LLMs

Introduction

This introduction covers large language models and their use cases, how the models work, prompt engineering, how to generate creative text outputs, and a project lifecycle for generative AI projects.

Generative AI is a subset of traditional machine learning, and the machine learning models that underpin generative AI have learned these abilities by finding statistical patterns in massive datasets of content originally generated by humans.

Foundation models are sometimes called base models. Examples are GPT, BERT, LLaMA, BLOOM, FLAN-T5 and PaLM.

The more parameters a model has, the more memory it requires, and, as it turns out, the more sophisticated the tasks it can perform.
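A rough back-of-the-envelope sketch of the memory side of that claim, assuming the weights are stored as 32-bit floats (real deployments often use 16-bit or quantized weights):

```python
# Illustrative only: weight memory for a model stored in 32-bit floats.
params = 1_000_000_000          # 1B parameters (example size)
bytes_per_param = 4             # FP32 = 4 bytes per parameter
weight_memory_gb = params * bytes_per_param / 1e9
print(f"~{weight_memory_gb:.0f} GB of weight memory")   # ~4 GB
```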

Prompt and completion

The text that you pass to an LLM is known as a prompt. The space or memory that is available to the prompt is called the context window, and this is typically large enough for a few thousand words, but differs from model to model. The output of the model is called a completion, and the act of using the model to generate text is known as inference.
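A minimal sketch of prompt → completion inference using the Hugging Face transformers pipeline. The model name "gpt2" and the generation settings are illustrative choices, not something the notes prescribe:

```python
from transformers import pipeline

# Load a small text-generation model; "gpt2" is only an illustrative choice.
generator = pipeline("text-generation", model="gpt2")

prompt = "Explain what a context window is in one sentence:"
# Inference: the model reads the prompt (within its context window)
# and returns a completion.
completion = generator(prompt, max_new_tokens=40)[0]["generated_text"]
print(completion)
```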

Capabilities of LLMs

  • next word prediction

  • translation tasks

  • program code generation

  • information retrieval: ask the model to identify all of the people and places in a news article => named entity recognition, a word classification task.

Transformer architecture

This novel approach unlocked the progress in generative AI that we see today. It can be scaled efficiently to use multi-core GPUs, it can process input data in parallel, making use of much larger training datasets, and, crucially, it is able to learn to pay attention to the meaning of the words it's processing.

Paper: Attention Is All You Need (Vaswani et al., 2017).

The power of the transformer architecture lies in its ability to learn the relevance and context of all of the words in a sentence: it applies attention weights to those relationships so that the model learns the relevance of each word to every other word, no matter where they appear in the input.

An attention map can be used to illustrate the attention weights between each word and every other word.

When words are strongly connected to other words (the orange lines in the attention map), this is called self-attention, and the ability to learn attention in this way across the whole input significantly improves the model's ability to encode language.
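A minimal NumPy sketch of how self-attention weights can be computed (scaled dot-product attention as in the paper; the toy dimensions and random projections are assumptions for illustration):

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention for one sequence.

    x: (seq_len, d_model) token embeddings (+ positional encodings)
    w_q, w_k, w_v: projection matrices of shape (d_model, d_k)
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                   # relevance of each token to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax: each attention-map row sums to 1
    return weights @ v, weights                       # weighted values and the attention map

# Toy example: 4 tokens, d_model = d_k = 8
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w = [rng.normal(size=(8, 8)) for _ in range(3)]
out, attn_map = self_attention(x, *w)
print(attn_map.round(2))   # each row shows how strongly a token attends to every other token
```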

The transformer architecture is split into two distinct parts, the encoder and the decoder. These components work in conjunction with each other and they share a number of similarities.

Before passing text into the model, you first tokenize the words.

Multiple tokenization methods exist, for example:

  • token IDs matching complete words,

  • token IDs representing parts of words.

Importantly, once you've selected a tokenizer to train the model, you must use the same tokenizer when you generate text.
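A small sketch of sub-word tokenization with a Hugging Face tokenizer. The checkpoint name is only an example; the key point is that the same tokenizer turns text into IDs and IDs back into text:

```python
from transformers import AutoTokenizer

# "bert-base-uncased" is just an illustrative checkpoint.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

text = "Tokenization splits words into sub-word pieces"
token_ids = tokenizer.encode(text)
print(token_ids)                                    # integer token IDs
print(tokenizer.convert_ids_to_tokens(token_ids))   # some words map to one token, others to several pieces
print(tokenizer.decode(token_ids))                  # the same tokenizer must decode IDs back into text
```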

Embedding layer

This layer is a trainable vector embedding space, a high-dimensional space where each token is represented as a vector and occupies a unique location within that space. Each token ID in the vocabulary is matched to a multi-dimensional vector, and the intuition is that these vectors learn to encode the meaning and context of individual tokens in the input sequence. Word2vec uses this concept.

Each word has been matched to a token ID, and each token is mapped into a vector.
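A minimal PyTorch sketch of a trainable embedding layer mapping token IDs to vectors; the vocabulary size, embedding dimension, and token IDs are arbitrary illustrative values:

```python
import torch
import torch.nn as nn

vocab_size, d_model = 10_000, 512               # arbitrary illustrative sizes
embedding = nn.Embedding(vocab_size, d_model)   # trainable lookup table: one vector per token ID

token_ids = torch.tensor([[15, 874, 9021]])     # a batch with one 3-token sequence
vectors = embedding(token_ids)                  # shape: (1, 3, 512), one vector per token
print(vectors.shape)
```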

As you add the token vectors into the base of the encoder or the decoder, you also add positional encoding. The model processes each of the input tokens in parallel, so by adding the positional encoding you preserve the information about word order and don't lose the relevance of the position of the word in the sentence. Once you've summed the input tokens and the positional encodings, you pass the resulting vectors to the self-attention layer.
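A sketch of the sinusoidal positional encoding used in the original paper, simply summed with the token embeddings so that word order is preserved (the sequence length and model dimension below are illustrative assumptions):

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding from 'Attention Is All You Need'."""
    pos = np.arange(seq_len)[:, None]                 # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]              # (1, d_model/2)
    angles = pos / np.power(10000, 2 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                      # even dimensions
    pe[:, 1::2] = np.cos(angles)                      # odd dimensions
    return pe

# The encoding is summed with the token embeddings before self-attention.
embeddings = np.random.default_rng(0).normal(size=(4, 8))   # 4 tokens, d_model = 8
x = embeddings + positional_encoding(4, 8)
```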

The transformer architecture actually has multi-headed self-attention. This means that multiple sets of self-attention weights or heads are learned in parallel independently of each other. The number of attention heads included in the attention layer varies from model to model, but numbers in the range of 12-100 are common. The intuition here is that each self-attention head will learn a different aspect of language.

It's important to note that you don't dictate ahead of time what aspects of language the attention heads will learn. The weights of each head are randomly initialized and given sufficient training data and time, each will learn different aspects of language.
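A sketch of multi-headed self-attention using PyTorch's built-in module, just to show the shapes: the model dimension is split across heads, each head learns its own randomly initialized weights, and you get one attention map per head. The sizes are assumptions for illustration:

```python
import torch
import torch.nn as nn

d_model, num_heads, seq_len = 512, 8, 10    # illustrative sizes; real models use 12-100 heads
mha = nn.MultiheadAttention(embed_dim=d_model, num_heads=num_heads, batch_first=True)

x = torch.randn(1, seq_len, d_model)        # embeddings + positional encodings
# Each head works on d_model / num_heads = 64 dimensions and learns its own
# attention weights, independently of the other heads.
out, attn_weights = mha(x, x, x, average_attn_weights=False)
print(out.shape)            # (1, 10, 512)
print(attn_weights.shape)   # (1, 8, 10, 10): one attention map per head
```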

Now that all of the attention weights have been applied to your input data, the output is processed through a fully-connected feed-forward network. The output of this layer is a vector of logits proportional to the probability score for each and every token in the tokenizer dictionary. You can then pass these logits to a final softmax layer, where they are normalized into a probability score for each word.

One single token will have a score higher than the rest, but there are a number of methods that you can use to vary the final selection from this vector of probabilities.
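A small sketch of turning logits into probabilities with a softmax and then choosing the next token, contrasting greedy selection with random sampling; the toy vocabulary and logit values are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["cake", "banana", "donut", "apple"]    # toy vocabulary
logits = np.array([2.0, 0.5, 1.0, 0.1])         # made-up scores from the feed-forward layer

probs = np.exp(logits - logits.max())
probs /= probs.sum()                            # softmax: probability score per token

greedy = vocab[int(np.argmax(probs))]           # always pick the single highest-probability token
sampled = rng.choice(vocab, p=probs)            # sample to vary the final selection
print(probs.round(2), greedy, sampled)
```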

[Video Transformer Architecture](Transformers architecture.mp4)