What is Artificial Intelligence (AI)?
Artificial intelligence is a branch of computer science that deals with the creation of intelligent agents: systems that can reason, learn, and act autonomously. It involves machines processing information and making decisions based on provided data, mimicking human intelligence.
What is Machine Learning?
Machine learning (ML) is a program or system that trains a model from input data. The trained model can make useful predictions from new, never-before-seen data drawn from the same distribution as the data used to train it. Machine learning gives computers the ability to learn without explicit programming. It's a subset of AI.
ML model types
Supervised models - these models learn from labeled data to produce an output. Labeled data is data that comes with a tag, such as a name, a type, or a number. In supervised learning, test data values are input into the model. The model outputs a prediction and compares that prediction to the actual labels in the training data.
Note: If the predicted values and the actual values are far apart, that difference is called the error. The model tries to reduce this error until the predicted and actual values are closer together. This is a classic optimization problem.
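The optimization loop described in the note can be sketched in a few lines. This is a minimal, purely illustrative example (the data and learning rate are made up): the model predicts, measures its error against the actual labels, and adjusts a single parameter to shrink that error.

```python
# Tiny hypothetical dataset: y is roughly 2 * x
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.0)]

w = 0.0    # the model's single parameter, starts uninformed
lr = 0.01  # learning rate: how big each correction step is

for _ in range(500):
    # Gradient of the mean squared error with respect to w:
    # how much the error grows if w grows
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step w in the direction that reduces the error

print(round(w, 1))  # converges near 2.0, the slope that best fits the data
```

Each pass shrinks the gap between predicted and actual values, which is exactly the "reduce this error" behavior the note describes.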
Unsupervised models - these models train on unlabeled data to produce an output. Unlabeled data is data that comes with no tag. Unsupervised problems are all about discovery: looking at the raw data and seeing if it naturally falls into groups.
What is Deep Learning?
Deep learning is a type of machine learning that uses artificial neural networks, allowing it to process more complex patterns than traditional machine learning. Neural networks can use both labeled and unlabeled data, which is known as semi-supervised learning (a small amount of labeled data and a large amount of unlabeled data).
Deep Learning Models
A discriminative model is used to classify or predict labels for data points. This model is typically trained on a data set of labeled data points, and it learns the relationship between the features of the data points and their labels.
Note: A trained discriminative model can be used to predict the label for new data points.
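The discriminative idea can be sketched with a nearest-centroid classifier, which stands in here purely for illustration (the animal data is invented): it learns the feature-to-label relationship from labeled points, then predicts labels for new points.

```python
from collections import defaultdict

# Labeled data points: (features, label)
labeled = [([1.0, 1.0], "cat"), ([1.2, 0.9], "cat"),
           ([5.0, 5.2], "dog"), ([4.8, 5.1], "dog")]

# "Training": average the feature vectors for each label
totals = defaultdict(lambda: [0.0, 0.0, 0])
for (x, y), label in labeled:
    t = totals[label]
    t[0] += x; t[1] += y; t[2] += 1
centroids = {lbl: (t[0] / t[2], t[1] / t[2]) for lbl, t in totals.items()}

def predict(point):
    # Prediction: the label whose centroid is closest to the new point
    return min(centroids, key=lambda lbl: (point[0] - centroids[lbl][0]) ** 2
                                        + (point[1] - centroids[lbl][1]) ** 2)

print(predict([1.1, 1.0]))  # prints "cat"
```

Note the division of labor: training learns boundaries between existing labels; prediction only ever assigns one of those labels, never generating new content.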
A generative model generates new content. The gen AI process can take training code and labeled and unlabeled data of all types and build a foundation model.
A foundation model is a large AI model pre-trained on a vast quantity of data and designed to be adapted (or fine-tuned) for a wide range of downstream tasks, such as sentiment analysis, image captioning, and object recognition. Foundation models have the potential to revolutionize many industries, including healthcare, finance, and customer service. They can be used to detect fraud and provide personalized customer support.
- Vertex AI offers a model garden that includes foundation models. The language foundation models include PaLM API for chat and text.
- The vision foundation models include Stable Diffusion, which has been shown to be effective at generating high-quality images from text descriptions.
Note: Generative models can generate new data instances, while discriminative models discriminate between different kinds of data instances.
Gen AI is a type of artificial intelligence that creates new content based on what it has learned from existing content.
Gen AI is a subset of deep learning, which means it uses artificial neural networks to process both labeled and unlabeled data using supervised, unsupervised, and semi-supervised methods.
A prompt is a short piece of text that is given to a large language model as input, and it can be used to control the output of the model in a variety of ways. Prompt design is the process of creating a prompt that will generate the desired output from a large language model.
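Prompt design can be sketched without any real model at all: only the prompt-building step is shown below, and `build_prompt` is an invented helper, not part of any actual API. The point is that two different instructions steer the same hypothetical model toward two different outputs.

```python
def build_prompt(instruction, text):
    # The instruction line is what controls the model's behavior
    return f"{instruction}\n\nText: {text}\nAnswer:"

summarize = build_prompt("Summarize the text in one sentence.",
                         "Large language models are trained on huge corpora.")
classify = build_prompt("Classify the sentiment as positive or negative.",
                        "I loved this film!")

print(classify.splitlines()[0])  # prints the controlling instruction
```

Everything except the instruction is held fixed, which is why prompt design is usually an iterative process of adjusting that instruction until the output looks right.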
What are LLMs?
A large language model (LLM) is a type of artificial intelligence (AI) algorithm that uses deep learning techniques and large data sets to understand, generate, and predict new content.
LLMs can perform a variety of natural language processing (NLP) tasks, such as generating and classifying text, answering questions, and translating text from one language to another.
Examples of LLMs include:
LaMDA (Language Model for Dialogue Applications)
LaMDA is built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. This architecture produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another and then predict what words it thinks will come next.
Unlike most other language models, LaMDA is trained on dialogue. During its training, it picked up on several of the nuances that distinguish open-ended conversation from other forms of language. One of those nuances is sensibleness. LaMDA can be fine-tuned to significantly improve the sensibleness and specificity of its responses.
For example, deciding on the most likely meaning and appropriate representation of the word “bank” in the sentence “I arrived at the bank after crossing the…” requires knowing if the sentence ends in “… road.” or “… river.”
Processing the example above, a recurrent neural network (RNN), which reads text one word at a time, could only determine that “bank” is likely to refer to the bank of a river after reading each word between “bank” and “river” step by step.
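The Transformer's advantage can be sketched with a toy attention computation. The word vectors below are invented purely for illustration: every word is scored against every other word at once, so "bank" can weigh "river" directly instead of reading word by word as an RNN must.

```python
import math

words = ["bank", "after", "crossing", "river"]
# Invented 2-D word vectors; real models learn hundreds of dimensions
vecs = {"bank": [1.0, 0.2], "after": [0.1, 0.0],
        "crossing": [0.2, 0.1], "river": [0.9, 0.3]}

def attention_weights(query):
    # Dot-product scores, then a softmax turning them into
    # weights that sum to 1
    scores = [sum(a * b for a, b in zip(vecs[query], vecs[w]))
              for w in words]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return {w: e / total for w, e in zip(words, exps)}

weights = attention_weights("bank")
# "bank" attends to "river" far more than to the filler words,
# which is what lets the model pick the riverbank meaning.
```

The key property is that the score between "bank" and "river" is computed in one step, regardless of how many words sit between them.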
PaLM (Pathways Language Model)
PaLM is a machine-learning technique created by Google. PaLM 2 follows up on the original PaLM, which Google announced in April 2022. PaLM 2 supports over 100 languages and can perform “reasoning,” code generation, and multilingual translation. It is a single model that can generalize across domains and tasks while being highly efficient.
- The PaLM API lets you test and experiment with Google’s large language models and gen AI tools. To make prototyping quick and more accessible, developers can integrate the PaLM API with MakerSuite and use it to access the API through a graphical user interface.
- The suite includes a number of different tools such as a model training tool, a model deployment tool, and a model monitoring tool.
- PaLM API for text is fine-tuned for language tasks such as classification, summarization, and entity extraction.
- PaLM API for chat is fine-tuned for multi-turn chat, where the model keeps track of previous messages in the chat and uses them as context for generating new responses.
- Text Embedding API generates vector embeddings for input text. You can use embeddings for tasks like semantic search, recommendation, classification, and outlier detection.
- Codey APIs generate code. The Codey APIs include three models that generate code, suggest code for code completion, and let developers chat to get help with code-related questions.
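The semantic-search use of embeddings mentioned in the Text Embedding API bullet can be sketched as follows. The vectors below are made up for illustration; a real system would obtain them from an embedding API. Documents are ranked by cosine similarity between their embedding and the query's embedding.

```python
import math

# Hypothetical documents with invented embedding vectors
docs = {
    "How to reset a password": [0.9, 0.1, 0.0],
    "Best hiking trails":      [0.0, 0.2, 0.9],
    "Recovering your account": [0.8, 0.2, 0.1],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means unrelated
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec):
    # Return the document whose embedding is closest to the query's
    return max(docs, key=lambda d: cosine(docs[d], query_vec))

print(search([0.9, 0.1, 0.0]))  # the password-reset doc ranks highest
```

Because similarity is measured in embedding space rather than by keyword overlap, a query like "forgot my login" could still land near the password and account documents.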
GPT-3 & GPT-4(OpenAI)
GPT-3 (Generative Pre-trained Transformer 3) is a machine learning model that generates any type of text using internet data. It was developed by OpenAI and released in 2020. It requires only a small amount of input text to generate large volumes of relevant and sophisticated machine-generated text. GPT-3 can comprehend text and write like a human, which makes its applications almost endless.
GPT-4 (Generative Pre-trained Transformer 4) is a large language model created by OpenAI. It was released on March 14, 2023. GPT-4 is a multimodal model that accepts image and text inputs and emits text outputs. It can produce more natural-sounding text and solve problems more accurately than its predecessor, GPT-3.
It’s important to acknowledge that models created through AI training can, to some extent, perpetuate negative behaviors such as internalizing biases, replicating harmful language, or even spreading inaccurate information. It’s also worth noting that even a model trained on carefully vetted data can still be misused.
Have you watched the drama series NEXT? It offers a thrilling exploration of how a rogue AI can wreak havoc on society. The show follows Silicon Valley pioneer Paul LeBlanc as he teams up with Special Agent Shea Salazar to stop the AI he created from destroying humanity. NEXT is a gripping tale that combines action with a thoughtful examination of how technology is changing our world in ways we may not fully comprehend.