What is generative AI? Everything you need to know
Generative AI is a type of artificial intelligence technology that can produce various types of content, including text, imagery, audio and synthetic data. The recent buzz around generative AI has been driven by the simplicity of new user interfaces for creating high-quality text, graphics and videos in a matter of seconds.
The technology, it should be noted, is not brand-new. Generative AI was introduced in the 1960s in chatbots. But it was not until 2014, with the introduction of generative adversarial networks, or GANs -- a type of machine learning algorithm -- that generative AI could create convincingly authentic images, videos and audio of real people.
On the one hand, this newfound capability has opened up opportunities that include better movie dubbing and rich educational content. On the other hand, it has raised concerns about deepfakes -- digitally forged images or videos -- and harmful cybersecurity attacks on businesses, including nefarious requests that realistically mimic an employee's boss.
Two additional recent advances that will be discussed in more detail below have played a critical part in generative AI going mainstream: transformers and the breakthrough language models they enabled. Transformers are a type of neural network architecture that made it possible for researchers to train ever-larger models without having to label all of the data in advance. New models could thus be trained on billions of pages of text, resulting in answers with more depth. In addition, transformers rely on a mechanism called attention that enables models to track the connections between words across pages, chapters and books rather than just in individual sentences. And not just words: Transformers could also use their ability to track connections to analyze code, proteins, chemicals and DNA.
The rapid advances in so-called large language models (LLMs) -- i.e., models with billions or even trillions of parameters -- have opened a new era in which generative AI models can write engaging text, paint photorealistic images and even create somewhat entertaining sitcoms on the fly. Moreover, innovations in multimodal AI enable teams to generate content across multiple types of media, including text, graphics and video. This is the basis for tools like Dall-E that automatically create images from a text description or generate text captions from images.
These breakthroughs notwithstanding, we are still in the early days of using generative AI to create readable text and photorealistic stylized graphics. Early implementations have had issues with accuracy and bias, as well as being prone to hallucinations and spitting back weird answers. Still, progress thus far indicates that the inherent capabilities of this type of AI could fundamentally change business. Going forward, this technology could help write code, design new drugs, develop products, redesign business processes and transform supply chains.
How does generative AI work?
Generative AI starts with a prompt, which could be text, an image, a video, a design, musical notes or any other input the AI system can process. Various AI algorithms then return new content in response to the prompt. Content can include essays, solutions to problems, or realistic fakes created from pictures or audio of a person.
Early versions of generative AI required submitting data via an API or an otherwise complicated process. Developers had to familiarize themselves with special tools and write applications using languages such as Python.
Now, pioneers in generative AI are developing better user experiences that let you describe a request in plain language. After an initial response, you can also customize the results with feedback about the style, tone and other elements you want the generated content to reflect.
Generative AI models
Generative AI models combine various AI algorithms to represent and process content. For example, to generate text, various natural language processing techniques transform raw characters (e.g., letters, punctuation and words) into sentences, parts of speech, entities and actions, which are represented as vectors using multiple encoding techniques. Similarly, images are transformed into various visual elements, also expressed as vectors. One caution is that these techniques can also encode the biases, racism, deception and puffery contained in the training data.
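As a simplified illustration of that encoding step, the sketch below maps word tokens to one-hot vectors. This is a toy stand-in for the real thing: production models use learned embeddings with hundreds of dimensions, and the corpus here is invented for the example.

```python
# Toy encoding step: turn raw text into vectors.
# Real systems use learned embeddings; one-hot vectors are the simplest
# possible stand-in for the idea of "words as vectors."

def tokenize(text):
    """Lowercase the text and split it into word tokens, stripping punctuation."""
    return [word.strip(".,!?") for word in text.lower().split()]

def build_vocab(corpus):
    """Assign each unique token an integer index."""
    vocab = {}
    for sentence in corpus:
        for token in tokenize(sentence):
            vocab.setdefault(token, len(vocab))
    return vocab

def one_hot(token, vocab):
    """Represent a token as a vector with a 1.0 at its vocabulary index."""
    vector = [0.0] * len(vocab)
    vector[vocab[token]] = 1.0
    return vector

corpus = ["The cat sat.", "The dog ran."]
vocab = build_vocab(corpus)          # {'the': 0, 'cat': 1, 'sat': 2, 'dog': 3, 'ran': 4}
dog_vector = one_hot("dog", vocab)   # [0.0, 0.0, 0.0, 1.0, 0.0]
```

Note that this representation carries no meaning: "cat" and "dog" are no closer to each other than to "the." Learned embeddings fix exactly that shortcoming.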
Once developers settle on a way to represent the world, they apply a particular neural network to generate new content in response to a query or prompt. Techniques such as GANs and variational autoencoders (VAEs) -- neural networks with a decoder and encoder -- are suitable for generating realistic human faces, synthetic data for AI training or even facsimiles of particular humans.
Recent progress in transformer-based models such as Google's Bidirectional Encoder Representations from Transformers (BERT), OpenAI's GPT and Google DeepMind's AlphaFold has also resulted in neural networks that can not only encode language, images and proteins but also generate new content.
How neural networks are transforming generative AI
Researchers have been creating AI and other tools for programmatically generating content since the early days of AI. The earliest approaches, known as rules-based systems and later as "expert systems," used explicitly crafted rules for generating responses or data sets.
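A rules-based responder in the style of those early systems can be sketched in a few lines. The keywords and canned replies below are invented for illustration; the sketch also shows why such systems broke easily: any input outside the hand-written rules falls through to a generic fallback.

```python
# Minimal sketch of a rules-based ("expert system") responder, in the
# spirit of early programs like Eliza: hand-crafted patterns map
# keywords in the input to canned replies.

RULES = [
    ("hello", "Hello. How are you feeling today?"),
    ("mother", "Tell me more about your family."),
    ("sad", "Why do you think you feel sad?"),
]

def respond(user_input):
    """Return the reply for the first rule whose keyword appears in the input."""
    text = user_input.lower()
    for keyword, reply in RULES:
        if keyword in text:
            return reply
    return "Please, go on."  # fallback when no hand-written rule matches

reply = respond("I feel sad about my mother")  # matches the "mother" rule first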
Neural networks, which underpin many of today's AI and machine learning applications, flipped the problem around. Designed to mimic how the human brain works, neural networks "learn" the rules by finding patterns in existing data sets. Developed in the 1950s and 1960s, the first neural networks were limited by a lack of computational power and small data sets. It was not until the advent of big data in the mid-2000s and improvements in computer hardware that neural networks became practical for generating content.
The field accelerated when researchers found a way to get neural networks to run in parallel across the graphics processing units (GPUs) that were being used in the computer gaming industry to render video games. New machine learning techniques developed in the past decade, including the aforementioned generative adversarial networks and transformers, have set the stage for the recent remarkable advances in AI-generated content.
What are Dall-E, ChatGPT and Bard?
ChatGPT, Dall-E and Bard are popular generative AI interfaces.
Dall-E. Trained on a large data set of images and their associated text descriptions, Dall-E is an example of a multimodal AI application that identifies connections across multiple media, such as vision, text and audio. In this case, it connects the meaning of words to visual elements. It was built using OpenAI's GPT implementation in 2021. Dall-E 2, a second, more capable version, was released in 2022. It enables users to generate imagery in multiple styles driven by user prompts.
ChatGPT. The AI-powered chatbot that took the world by storm in November 2022 was built on OpenAI's GPT-3.5 implementation. OpenAI has provided a way to interact and fine-tune text responses via a chat interface with interactive feedback. Earlier versions of GPT were only accessible via an API. GPT-4 was released March 14, 2023. ChatGPT incorporates the history of its conversation with a user into its results, simulating a real conversation. After the incredible popularity of the new GPT interface, Microsoft announced a significant new investment into OpenAI and integrated a version of GPT into its Bing search engine.
Bard. Google was another early leader in pioneering transformer AI techniques for processing language, proteins and other types of content. It open sourced some of these models for researchers. However, it never released a public interface for these models. Microsoft's decision to implement GPT into Bing drove Google to rush to market a public-facing chatbot, Google Bard, built on a lightweight version of its LaMDA family of large language models. Google suffered a significant loss in stock price following Bard's rushed debut after the language model incorrectly said the Webb telescope was the first to discover a planet in a foreign solar system. Meanwhile, Microsoft's Bing implementation and ChatGPT also stumbled in their early outings due to inaccurate results and erratic behavior. Google has since unveiled a new version of Bard built on its most advanced LLM, PaLM 2, which allows Bard to be more efficient and visual in its response to user queries.
What are use cases for generative AI?
Generative AI can be applied in various use cases to generate virtually any kind of content. The technology is becoming more accessible to users of all kinds thanks to cutting-edge breakthroughs like GPT that can be tuned for different applications. Some of the use cases for generative AI include the following:
- Implementing chatbots for customer service and technical support.
- Deploying deepfakes for mimicking people or even specific individuals.
- Improving dubbing for movies and educational content in different languages.
- Writing email responses, dating profiles, resumes and term papers.
- Creating photorealistic art in a particular style.
- Improving product demonstration videos.
- Suggesting new drug compounds to test.
- Designing physical products and buildings.
- Optimizing new chip designs.
- Writing music in a specific style or tone.
What are the benefits of generative AI?
Generative AI can be applied extensively across many areas of the business. It can make it easier to interpret and understand existing content and automatically create new content. Developers are exploring ways that generative AI can improve existing workflows, with an eye to adapting workflows entirely to take advantage of the technology. Some of the potential benefits of implementing generative AI include the following:
- Automating the manual process of writing content.
- Reducing the effort of responding to emails.
- Improving the response to specific technical queries.
- Creating realistic representations of people.
- Summarizing complex information into a coherent narrative.
- Simplifying the process of creating content in a particular style.
What are the limitations of generative AI?
Early implementations of generative AI vividly illustrate its many limitations. Some of the challenges generative AI presents result from the specific approaches used to implement particular use cases. For example, a summary of a complex topic is easier to read than an explanation that includes various sources supporting key points. The readability of the summary, however, comes at the expense of a user being able to vet where the information comes from.
Here are some of the limitations to consider when implementing or using a generative AI app:
- It does not always identify the source of content.
- It can be challenging to assess the bias of original sources.
- Realistic-sounding content makes it harder to identify inaccurate information.
- It can be difficult to understand how to tune for new circumstances.
- Results can gloss over bias, prejudice and hatred.
Attention is all you need: Transformers bring new capability
In 2017, Google reported on a new type of neural network architecture that brought significant improvements in efficiency and accuracy to tasks like natural language processing. The breakthrough approach, called transformers, was based on the concept of attention.
At a high level, attention refers to the mathematical description of how things (e.g., words) relate to, complement and modify each other. The researchers described the architecture in their seminal paper, "Attention is all you need," showing how a transformer neural network was able to translate between English and French with more accuracy than other neural networks and in only a quarter of the training time. The breakthrough technique could also discover relationships, or hidden orders, between other things buried in the data that humans might have been unaware of because they were too complicated to express or discern.
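As a minimal sketch of the idea, the code below implements scaled dot-product attention for a single query over toy 2-D vectors. Real transformers add learned projection matrices, many attention heads and hundreds of dimensions; the vectors here are invented for illustration.

```python
import math

def softmax(scores):
    """Convert raw scores into weights that sum to 1 (numerically stable)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(query, keys, values):
    """Scaled dot-product attention for a single query.

    Scores each key against the query, turns the scores into weights
    with softmax and returns the weighted sum of the values.
    """
    d_k = len(query)
    scores = [dot(query, k) / math.sqrt(d_k) for k in keys]
    weights = softmax(scores)
    output = [0.0] * len(values[0])
    for w, v in zip(weights, values):
        for i, x in enumerate(v):
            output[i] += w * x
    return output, weights

# Three "words," each with a 2-D key and value vector. The query points
# in the same direction as the first key, so the first and third words
# (whose keys both align with the query) receive the largest weights.
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
output, weights = attention([1.0, 0.0], keys, values)
```

The weights make the mechanism's behavior inspectable: they show how much each position contributed to the result, which is exactly the "connection tracking" described above.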
Transformer architecture has evolved rapidly since it was introduced, giving rise to LLMs such as GPT-3 and better pre-training techniques, such as Google's BERT.
What are the concerns surrounding generative AI?
The rise of generative AI is also fueling various concerns. These relate to the quality of results, potential for misuse and abuse, and the potential to disrupt existing business models. Here are some of the specific types of problematic issues posed by the current state of generative AI:
- It can provide inaccurate and misleading information.
- It is more difficult to trust without knowing the source and provenance of information.
- It can promote new kinds of plagiarism that ignore the rights of content creators and artists of original content.
- It might disrupt existing business models built around search engine optimization and advertising.
- It makes it easier to generate fake news.
- It makes it easier to claim that real photographic evidence of wrongdoing was just an AI-generated fake.
- It could impersonate people for more effective social engineering cyber attacks.
What are some examples of generative AI tools?
Generative AI tools exist for various modalities, such as text, imagery, music, code and voices. Some popular AI content generators to explore include the following:
- Text generation tools include GPT, Jasper, AI-Writer and Lex.
- Image generation tools include Dall-E 2, Midjourney and Stable Diffusion.
- Music generation tools include Amper, Dadabots and MuseNet.
- Code generation tools include CodeStarter, Codex, GitHub Copilot and Tabnine.
- Voice synthesis tools include Descript, Listnr and Podcast.ai.
- AI chip design tool companies include Synopsys, Cadence, Google and Nvidia.
Use cases for generative AI, by industry
New generative AI technologies have sometimes been described as general-purpose technologies akin to steam power, electricity and computing because they can profoundly affect many industries and use cases. It's essential to keep in mind that, as with previous general-purpose technologies, it often took decades for people to find the best ways to organize workflows to take advantage of the new approach, rather than simply speeding up small portions of existing workflows. Here are some ways generative AI applications could impact different industries:
- Finance can watch transactions in the context of an individual's history to build better fraud detection systems.
- Legal firms can use generative AI to design and interpret contracts, analyze evidence and suggest arguments.
- Manufacturers can use generative AI to combine data from cameras, X-ray and other metrics to identify defective parts and the root causes more accurately and economically.
- Film and media companies can use generative AI to produce content more economically and translate it into other languages with the actors' own voices.
- The medical industry can use generative AI to identify promising drug candidates more efficiently.
- Architectural firms can use generative AI to design and adapt prototypes more quickly.
- Gaming companies can use generative AI to design game content and levels.
GPT joins the pantheon of general-purpose technologies
OpenAI, an AI research and deployment company, took the core ideas behind transformers to train its version, dubbed Generative Pre-trained Transformer, or GPT. Observers have noted that GPT is the same acronym used to describe general-purpose technologies such as the steam engine, electricity and computing. Most would agree that GPT and other transformer implementations are already living up to their name as researchers discover ways to apply them to industry, science, commerce, construction and medicine.
Ethics and bias in generative AI
Despite their promise, the new generative AI tools open a can of worms regarding accuracy, trustworthiness, bias, hallucination and plagiarism -- ethical issues that likely will take years to sort out. None of the issues are particularly new to AI. Microsoft's first foray into chatbots in 2016, called Tay, for example, had to be turned off after it started spewing inflammatory rhetoric on Twitter.
What is new is that the latest crop of generative AI apps sounds more coherent on the surface. But this combination of humanlike language and coherence is not synonymous with human intelligence, and there currently is great debate about whether generative AI models can be trained to have reasoning ability. One Google engineer was even fired after publicly declaring the company's generative AI app, Language Model for Dialogue Applications (LaMDA), was sentient.
The convincing realism of generative AI content introduces a new set of AI risks. It makes it harder to detect AI-generated content and, more importantly, makes it more difficult to detect when things are wrong. This can be a big problem when we rely on generative AI results to write code or provide medical advice. Many results of generative AI are not transparent, so it is hard to determine if, for example, they infringe on copyrights or if there is a problem with the original sources from which they draw results. If you don't know how the AI came to a conclusion, you cannot reason about why it might be wrong.
Generative AI vs. AI
Generative AI produces new content, such as chat responses, designs, synthetic data or deepfakes. Traditional AI, on the other hand, has focused on detecting patterns, making decisions, honing analytics, classifying data and detecting fraud.
Generative AI, as noted above, often uses neural network techniques such as transformers, GANs and VAEs. Other kinds of AI, by contrast, use techniques including convolutional neural networks, recurrent neural networks and reinforcement learning.
Generative AI often starts with a prompt that lets a user or data source submit a starting query or data set to guide content generation. This can be an iterative process to explore content variations. Traditional AI algorithms process new data to return a simple result.
Generative AI history
The Eliza chatbot created by Joseph Weizenbaum in the 1960s was one of the earliest examples of generative AI. These early implementations used a rules-based approach that broke easily due to a limited vocabulary, lack of context and overreliance on patterns, among other shortcomings. Early chatbots were also difficult to customize and extend.
The field saw a resurgence in the wake of advances in neural networks and deep learning around 2010 that enabled the technology to automatically learn to parse existing text, classify image elements and transcribe audio.
Ian Goodfellow introduced GANs in 2014. This deep learning technique provided a novel approach for organizing competing neural networks to generate and then rate content variations. These could generate realistic people, voices, music and text. This inspired interest in -- and fear of -- how generative AI could be used to create realistic deepfakes that impersonate voices and people in videos.
Since then, progress in other neural network techniques and architectures has helped expand generative AI capabilities. Techniques include VAEs, long short-term memory, transformers, diffusion models and neural radiance fields.
Best practices for using generative AI
The best practices for using generative AI will vary depending on the modalities, workflow and desired goals. That said, it is important to consider essential factors such as accuracy, transparency and ease of use in working with generative AI. The following practices help achieve these factors:
- Clearly label all generative AI content for users and consumers.
- Vet the accuracy of generated content using primary sources where applicable.
- Consider how bias might get woven into generated AI results.
- Double-check the quality of AI-generated code and content using other tools.
- Learn the strengths and limitations of each generative AI tool.
- Familiarize yourself with common failure modes in results and work around these.
The future of generative AI
The depth and ease of use of ChatGPT have shown tremendous promise for the widespread adoption of generative AI. To be sure, it has also demonstrated some of the difficulties in rolling out this technology safely and responsibly. But these early implementation issues have inspired research into better tools for detecting AI-generated text, images and video. Industry and society will also build better tools for tracking the provenance of information to create more trustworthy AI.
Furthermore, improvements in AI development platforms will help accelerate research and development of better generative AI capabilities in the future for text, images, video, 3D content, drugs, supply chains, logistics and business processes. As good as these new one-off tools are, the most significant impact of generative AI will come from embedding these capabilities directly into versions of the tools we already use.
Grammar checkers are going to get better. Design tools will seamlessly embed more useful recommendations directly into workflows. Training tools will be able to automatically identify best practices in one part of the organization to help train others more efficiently. And these are just a fraction of the ways generative AI will change how we work.
Generative AI FAQs
Below are some frequently asked questions people have about generative AI.
Who created generative AI?
Joseph Weizenbaum created the first generative AI in the 1960s as part of the Eliza chatbot.
Ian Goodfellow demonstrated generative adversarial networks for generating realistic-looking and -sounding people in 2014.
Subsequent research into LLMs from OpenAI and Google ignited the recent enthusiasm that has evolved into tools like ChatGPT, Google Bard and Dall-E.
How could generative AI replace jobs?
Generative AI has the potential to replace a variety of jobs, including the following:
- Writing product descriptions.
- Creating marketing copy.
- Generating basic web content.
- Initiating interactive sales outreach.
- Answering customer questions.
- Making graphics for webpages.
Some companies will look for opportunities to replace humans where possible, while others will use generative AI to augment and enhance their existing workforce.
How do you build a generative AI model?
A generative AI model starts by efficiently encoding a representation of what you want to generate. For example, a generative AI model for text might begin by finding a way to represent the words as vectors that characterize the similarity between words often used in the same sentence or that mean similar things.
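A toy example of this kind of representation: the hypothetical 3-D vectors below place related words close together, as measured by cosine similarity. Real embeddings are learned from data and have hundreds of dimensions; the numbers here are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-D embeddings; a trained model would learn these
# coordinates so that related words end up near each other.
embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.15],
    "apple": [0.10, 0.20, 0.90],
}

sim_royal = cosine_similarity(embeddings["king"], embeddings["queen"])  # near 1.0
sim_fruit = cosine_similarity(embeddings["king"], embeddings["apple"])  # much lower
```

Because similarity becomes a simple geometric measurement, the same machinery extends to any content that can be embedded as vectors, which is how the approach carries over to images, proteins and the other modalities mentioned above.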
Recent progress in LLM research has helped the industry implement the same process to represent patterns found in images, sounds, proteins, DNA, drugs and 3D designs. This approach provides an efficient way of representing the desired type of content and iterating on useful variations.
How do you train a generative AI model?
The generative AI model needs to be trained for a particular use case. The recent progress in LLMs provides an ideal starting point for customizing applications for different use cases. For example, the popular GPT model developed by OpenAI has been used to write text, generate code and create imagery based on written descriptions.
Training involves tuning the model's parameters for different use cases and then fine-tuning results on a given set of training data. For example, a call center might train a chatbot against the kinds of questions service agents get from various customer types and the responses that service agents give in return. By contrast, an image-generating app might start with labels that describe the content and style of images to train the model to generate new images.
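The fine-tuning step can be sketched with a deliberately tiny model: start from a "pretrained" parameter and continue gradient descent on a small domain-specific data set. The one-parameter model and data below are invented for illustration; real fine-tuning adjusts billions of parameters through the same basic loop.

```python
# Toy sketch of fine-tuning: continue gradient descent from an existing
# ("pretrained") parameter on new, domain-specific examples.
# The model is deliberately tiny: y = w * x with a single weight w.

def fine_tune(w, data, lr=0.05, epochs=200):
    """Continue training weight w on (x, y) pairs with squared-error loss."""
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad             # gradient descent step
    return w

pretrained_w = 1.0                       # stand-in for general-purpose weights
domain_data = [(1.0, 3.0), (2.0, 6.0)]   # domain examples follow y = 3x
tuned_w = fine_tune(pretrained_w, domain_data)  # converges toward 3.0
```

Starting from an already-trained parameter rather than a random one is the whole point: far fewer domain examples and iterations are needed than training from scratch would require.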
How is generative AI changing creative work?
Generative AI promises to help creative workers explore variations of ideas. Artists might start with a basic design concept and then explore variations. Industrial designers could explore product variations. Architects could explore different building layouts and visualize them as a starting point for further refinement.
It could also help democratize some aspects of creative work. For example, business users could explore product marketing imagery using text descriptions. They could further refine these results using simple commands or suggestions.
What's next for generative AI?
ChatGPT's ability to generate humanlike text has sparked widespread curiosity about generative AI's potential. It also shined a light on the many problems and challenges ahead.
In the short term, work will focus on improving the user experience and workflows using generative AI tools. It will also be essential to build trust in generative AI results.
Many companies will also customize generative AI on their own data to help improve branding and communication. Programming teams will use generative AI to enforce company-specific best practices for writing and formatting more readable and consistent code.
Vendors will integrate generative AI capabilities into their additional tools to streamline content generation workflows. This will drive innovation in how these new capabilities can increase productivity.
Generative AI could also play a role in various aspects of data processing, transformation, labeling and vetting as part of augmented analytics workflows. Semantic web applications could use generative AI to automatically map internal taxonomies describing job skills to different taxonomies on skills training and recruitment sites. Similarly, business teams will use these models to transform and label third-party data for more sophisticated risk assessments and opportunity analysis capabilities.
In the future, generative AI models will be extended to support 3D modeling, product design, drug development, digital twins, supply chains and business processes. This will make it easier to generate new product ideas, experiment with different organizational models and explore various business ideas.
Latest Generative AI technology defined
AI art (artificial intelligence art)
AI art is any form of digital art created or enhanced with AI tools. Read more
Auto-GPT
Auto-GPT is an experimental, open source autonomous AI agent based on the GPT-4 language model that autonomously chains together tasks to achieve a big-picture goal set by the user. Read more
Google Search Generative Experience
Google Search Generative Experience (SGE) is a set of search and interface capabilities that integrates generative AI-powered results into Google search engine query responses. Read more
Q-learning
Q-learning is a reinforcement learning approach that enables a model to iteratively learn, through trial and error, which actions maximize a cumulative reward over time. Read more
Google Search Labs
Google Search Labs is an initiative from Alphabet's Google division to provide new capabilities and experiments for Google Search in a preview format before they become publicly available. Read more
Inception score
The inception score (IS) is a mathematical algorithm used to measure or determine the quality of images created by generative AI through a generative adversarial network (GAN). The name comes from the Inception image classification network, which the algorithm uses to evaluate the generated images. Read more
Reinforcement learning from human feedback (RLHF)
RLHF is a machine learning approach that combines reinforcement learning techniques, such as rewards and comparisons, with human guidance to train an AI agent. Read more
Variational autoencoder (VAE)
A variational autoencoder is a generative AI algorithm that uses deep learning to generate new content, detect anomalies and remove noise. Read more
What are some generative models for natural language processing?
Some generative models for natural language processing include the following:
- Carnegie Mellon University's XLNet
- OpenAI's GPT (Generative Pre-trained Transformer)
- Google's ALBERT ("A Lite" BERT)
- Google BERT
- Google LaMDA
Will AI ever gain consciousness?
Some AI proponents believe that generative AI is an essential step toward general-purpose AI and even consciousness. One early tester of Google's LaMDA chatbot even created a stir when he publicly declared it was sentient. Then he was let go from the company.
In 1993, the American science fiction writer and computer scientist Vernor Vinge posited that in 30 years, we would have the technological ability to create a "superhuman intelligence" -- an AI that is more intelligent than humans -- after which the human era would end. AI pioneer Ray Kurzweil predicted such a "singularity" by 2045.
Many other AI experts think it could be much further off. Robot pioneer Rodney Brooks predicted that AI will not gain the sentience of a 6-year-old in his lifetime but could seem as intelligent and attentive as a dog by 2048.
Latest generative AI news and trends
Generative AI – the next biggest cyber security threat?
Following the launch of ChatGPT in November 2022, several reports have emerged that seek to determine the impact of generative AI in cybersecurity. Undeniably, generative AI in cybersecurity is a double-edged sword, but will the paradigm shift in favor of opportunity or risk? Read more
SHRM CEO addresses AI 'nightmare' in HR
SHRM CEO Johnny Taylor is encouraging optimism about AI in HR, arguing it will augment jobs rather than automate them. Read more
AWS invests $100 million in new generative AI program
The Generative AI Innovation Center will connect AWS machine learning experts and data scientists with enterprises. Enterprises can also use AWS services to train and scale models. Read more
Monitor generative AI in customer experiences -- or else
As marketers and customer service leaders deploy generative AI tools that tech vendors are rapidly commercializing, they should monitor U.S. FTC guidance. Read more
Mastercard, Moderna expect AI to improve jobs, productivity
Mastercard and Moderna believe AI's future impact on jobs is a positive one, but both stress the need to upskill employees. Read more