
What the Heck is AI, Anyway?

Updated: Jul 6

I am so excited to share this series of blog posts with you! Before I dive in to talk about what AI is, I first have to nerd out for a moment. I have to say that we are living in one of the most exciting times in history. Yes, we've got our issues, but I think, thanks to advances in computer science and artificial intelligence, we are going to see things our ancestors never could have imagined! I feel truly blessed to be alive during this time to see these amazing innovations, and what makes me so freaking hopeful is the impact that this technology could have on teachers, students, and the education field in general.


Classroom of diverse students using technology.
Classroom of the Future

I have to admit that I was disappointed in how easily things went back to the status quo once we all returned from Covid. As trying and devastating as that time was, there was an underlying hope, at least in the #edtech world, that we were finally going to see true change in the way school was structured and how we taught our students. That hope wasn't realized, though, as we've seen a return to the same schedules and the same instructional practices without ever really dealing with the trauma and lasting effects that time had on all of us. I get it. We're tired in body and spirit, and a lot of us are just plain tired of computers and devices.


Then comes November 2022, and we are introduced to a new tool called ChatGPT. With a few well-worded prompts, we could generate new ideas and new possibilities. Since then, this technology has advanced at exponential rates, and it only continues to do so. My call to action to you, dear teacher, is to read each of these posts that I'll share with you over the next several weeks. Have an open mind and take some time to "plearn" (play and learn) with the tools that I'm going to share with you. With all that said, let's answer the question, "What the heck is AI, anyway?"


 

TL;DR

Artificial Intelligence (AI) has evolved significantly since the 1950s. Starting with basic concepts in the 1960s, it has grown to include Machine Learning, Deep Learning, and now Generative AI. Key advancements include machine learning allowing computers to learn from data, deep learning utilizing neural networks for complex pattern recognition, and generative AI creating new content like text, images, and music. Large Language Models (LLMs) such as GPT-3 use extensive datasets to understand and generate human-like text, aiding in personalized education, automated grading, and AI tutoring. AI tools like ChatGPT and DALL-E are transforming education by making learning more personalized and efficient. Understanding these tools and their applications can help educators enhance teaching and learning experiences.

 

The rest of this post was written in collaboration with ChatGPT 4o.

In preparation for this blog post, I was having a conversation with ChatGPT about what I should include in an introduction to AI. I specifically told ChatGPT, "I want this introduction to be a very easy-to-understand explanation of what generative AI is. I want to start at the beginning with the origins of AI, moving from AI into machine learning, then into deep learning, and now into Generative AI. Can you explain all of this in a way that a third grader could understand it?" Within seconds of hitting enter, I had this information:


From ChatGPT

Sure, Lauren! Here's an easy-to-understand explanation of the evolution of AI, suitable for a third grader:

The Journey of Artificial Intelligence (AI)

1. What is AI?

AI, or Artificial Intelligence, is like having a really smart robot friend. Imagine if your favorite toy could talk to you, understand what you say, and even help you with your homework! AI is a computer or machine that can think and learn like a person.

2. The Beginning of AI

A long time ago, scientists wanted to create machines that could think and learn. They started making simple programs that could follow instructions and solve problems. This was the very beginning of AI.

3. Machine Learning

As time went on, scientists wanted these machines to get even smarter. They came up with something called Machine Learning. Think of it like teaching your robot friend new tricks. Instead of just following instructions, the robot can learn from experience, just like how you get better at riding a bike the more you practice.

4. Deep Learning

But scientists didn’t stop there. They wanted the robot to be even more like a human brain. So they created Deep Learning. This is like giving the robot a super brain with lots of layers, where each layer helps it understand and learn more complicated things. Imagine if your robot friend could now recognize faces, understand different languages, and even play games with you!

5. Generative AI

Now we have the coolest part – Generative AI. This is like giving your robot friend a super creative mind. Not only can it learn, but it can also create new things. It can write stories, draw pictures, and even make music. Generative AI is like having a robot artist or writer who can come up with amazing new ideas all by itself!

How It Works in School

Teachers are using these smart robots to help in the classroom. For example, they can:

  • Help students understand their lessons better.

  • Make learning fun with cool games and activities.

  • Help teachers with their work, like grading papers and planning lessons.

Why It’s Important

AI is making learning more fun and interesting. It helps teachers and students do things more easily and quickly. But it's also important to use it wisely and make sure everyone can use it safely and fairly.

I hope this helps explain the basics of AI and how it has evolved into something really cool and helpful for everyone!

Split image: on the left are men working with antiquated technology, on the right are men working with technology tools of the future.
Technology: Past vs. Future

Again...within seconds! I knew that I wanted to include specific dates and facts about the history of AI and how everything got started. So, I prompted ChatGPT to rewrite it, but to include specific dates and an explanation of what the first iterations of AI were like in the 1950s and 1960s. Here's what ChatGPT had to say:


"The Early Days of AI in the 1960s and 1970s

1960s: The Dawn of AI

In the 1960s, AI was in its infancy. Here are some notable early AI systems and their characteristics:

  1. ELIZA (1966):

  • Developer: Joseph Weizenbaum at MIT.

  • Description: ELIZA was one of the first chatbots. It could simulate a conversation by using pattern matching and substitution methodology. One of its most famous scripts, DOCTOR, mimicked a Rogerian psychotherapist, giving the illusion of understanding by rephrasing user inputs as questions.

  • Impact: Although primitive, ELIZA demonstrated how computers could mimic human-like conversation, even if it didn’t truly understand the content.

  2. SHRDLU (1968-1970):

  • Developer: Terry Winograd at MIT.

  • Description: SHRDLU was a program that could understand and respond to commands in plain English to manipulate objects in a virtual “blocks world.” It combined natural language understanding with simple logical reasoning and problem-solving.

  • Impact: SHRDLU showed that computers could process and act on natural language instructions in a limited context, representing an early success in natural language processing and AI reasoning.

1970s: Expansion and Experimentation

The 1970s saw further experimentation and development in AI, with a focus on knowledge representation and problem-solving.

  1. MYCIN (1972-1977):

  • Developer: Edward Shortliffe at Stanford University.

  • Description: MYCIN was an expert system designed to diagnose bacterial infections and recommend antibiotics. It used a rule-based system to emulate the decision-making process of human experts.

  • Impact: MYCIN was a pioneering project in medical AI and expert systems, demonstrating the potential for AI in specialized domains. It was one of the earliest systems to show that AI could perform tasks typically done by experts.

  2. DENDRAL (1965-1970s):

  • Developers: Edward Feigenbaum, Bruce Buchanan, and Joshua Lederberg at Stanford University.

  • Description: DENDRAL was an expert system designed to analyze chemical compounds. It used heuristics and rules to infer molecular structures from mass spectrometry data.

  • Impact: DENDRAL was one of the first successful expert systems, showing that AI could assist in complex scientific research and hypothesis formation.

Key Characteristics of Early AI Systems:

  • Rule-Based Reasoning: Early AI systems often relied on predefined rules and logic to simulate decision-making and problem-solving.

  • Natural Language Processing: Systems like ELIZA and SHRDLU began to explore how computers could understand and generate human language, albeit in limited contexts.

  • Expert Systems: Programs like MYCIN and DENDRAL demonstrated how AI could encapsulate expert knowledge to assist in specific domains.

Challenges and Limitations:

  • Limited Understanding: Early AI systems lacked true understanding and were heavily dependent on pre-programmed rules.

  • Computational Power: The computational resources available during the 1960s and 1970s were limited, restricting the complexity and scope of AI applications.

  • Narrow Domains: These systems were generally confined to specific tasks or domains and could not generalize beyond their programmed capabilities.

These early iterations of AI laid the groundwork for future advancements, showing the potential and sparking interest in developing more sophisticated AI systems.

Sources:


Notice that ChatGPT has given me the sources it used to gather this data. This is one way that these tools are already becoming better, especially for use in K-12 education. When they were first developed, we had to "trust but verify" that what they were "spitting out" was actually true. In fact, a lot of the time it wasn't. You see, LLMs were not (some still aren't) connected to the internet, so each one had a cutoff date for its knowledge. When ChatGPT first came on the scene, it was only trained on data up until September 2021. If you asked it what time a movie was playing, it would tell you something completely made up and then insist that it was correct. We refer to these as hallucinations. Now several tools, including ChatGPT, have access to the internet, so they are much more reliable. (I still recommend verifying your facts!)


Futuristic view of a person's desk set up. Contains a monitor, keyboard, mouse, tablet, and holographic screens.

Okay, back to my conversation with ChatGPT about this post. I feel very strongly that teachers need to be up-to-date on the vocabulary associated with artificial intelligence, so I asked it to define the words I know are associated with AI, like machine learning, deep learning, generative AI, and Large Language Models. I asked it to include any other terms teachers should know. This is the response I got, again within seconds.



Sure, Lauren! Here are some key AI terms defined in a way that teachers can understand and use in conversations:

Key AI Vocabulary

Artificial Intelligence (AI)

Definition: AI is the science of creating machines that can think, learn, and perform tasks that typically require human intelligence. This includes problem-solving, understanding natural language, recognizing patterns, and making decisions.

Example: Virtual assistants like Siri and Alexa are examples of AI because they can understand and respond to spoken requests.

Machine Learning (ML)

Definition: A subset of AI, machine learning is a method where computers learn from data to make decisions or predictions. Instead of being explicitly programmed to perform a task, the machine uses algorithms to find patterns in data and improve over time.

Example: Email spam filters use machine learning to identify and move unwanted emails to the spam folder by learning from examples of spam and non-spam emails.
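If you're curious what "learning from examples" looks like under the hood, here's a tiny, purely illustrative Python sketch of a spam filter. The sample emails and the simple word-counting approach are invented for demonstration; real spam filters use far more sophisticated statistics, but the core idea of learning patterns from labeled examples is the same:

```python
from collections import Counter

def train(examples):
    """Count how often each word appears in spam vs. non-spam emails."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Label new text by which category its words appeared in more often."""
    words = text.lower().split()
    spam_score = sum(counts["spam"][w] for w in words)
    ham_score = sum(counts["ham"][w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

# Made-up training examples: the "experience" the filter learns from.
training_data = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting notes for tomorrow", "ham"),
    ("lunch plans this week", "ham"),
]

model = train(training_data)
print(classify(model, "free prize waiting"))  # spammy words dominate here
```

Notice that we never wrote a rule saying "free prize" is spam; the filter picked that up from the examples, which is exactly the shift machine learning introduced.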

Deep Learning

Definition: Deep learning is a specialized form of machine learning that uses neural networks with many layers (hence "deep") to analyze various factors of data. This approach is particularly good at handling large amounts of data and complex patterns, such as image and speech recognition.

Example: Self-driving cars use deep learning to interpret data from sensors and cameras to navigate and make driving decisions.

Generative AI

Definition: Generative AI refers to systems that can create new content, such as text, images, or music, based on the data they have been trained on. These systems generate outputs that are similar to the examples they learned from.

Example: OpenAI's GPT-3 is a generative AI model that can write essays, generate programming code, and create poetry.

Large Language Models (LLMs)

Definition: LLMs are advanced AI models trained on vast amounts of text data to understand, generate, and manipulate human language. They are capable of tasks such as translation, summarization, and conversation.

Example: ChatGPT is an example of a large language model that can understand and respond to text input in a conversational manner.

Neural Networks

Definition: Neural networks are a set of algorithms designed to recognize patterns, inspired by the way the human brain operates. They consist of layers of interconnected nodes (neurons) that process data and can learn to make decisions.

Example: Image recognition software uses neural networks to identify objects within photos.
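For readers who want to peek behind the curtain, here's a minimal Python sketch of "layers of interconnected nodes." The weights and inputs are made-up numbers chosen purely for illustration; a real network has thousands or millions of weights and learns them from data rather than having them typed in:

```python
def relu(x):
    """A simple activation: pass positive signals through, silence negatives."""
    return max(0.0, x)

def dense_layer(inputs, weights, biases):
    """One layer: every neuron weights all the inputs, adds a bias, activates."""
    return [
        relu(sum(w * x for w, x in zip(neuron_weights, inputs)) + b)
        for neuron_weights, b in zip(weights, biases)
    ]

# Two stacked layers: the output of one becomes the input of the next.
# Stacking many such layers is what the "deep" in deep learning refers to.
hidden = dense_layer([1.0, 2.0],
                     weights=[[0.5, -0.2], [0.3, 0.8]],
                     biases=[0.1, -0.5])
output = dense_layer(hidden,
                     weights=[[1.0, 1.0]],
                     biases=[0.0])
print(output)
```

Training a network means nudging those weight numbers, over and over, until the outputs match the desired answers.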

Natural Language Processing (NLP)

Definition: NLP is a branch of AI that focuses on the interaction between computers and humans through natural language. It involves enabling computers to understand, interpret, and respond to human language in a useful way.

Example: Translation services like Google Translate use NLP to convert text from one language to another.

Reinforcement Learning

Definition: Reinforcement learning is a type of machine learning where an agent learns to make decisions by performing actions in an environment to maximize some notion of cumulative reward.

Example: Training a robot to navigate a maze by rewarding it for successful moves and penalizing it for mistakes.
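The reward-and-penalty loop can be sketched in a few lines of Python. The corridor "maze," the reward values, and the learning settings below are all invented for illustration; real reinforcement learning systems are vastly larger, but the trial-and-error idea is the same:

```python
import random

random.seed(0)  # fixed seed so this toy run is reproducible

N_STATES = 5            # squares 0..4 in a corridor; square 4 is the goal
ACTIONS = [-1, +1]      # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}  # learned values

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Mostly act on what's been learned so far; occasionally explore.
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        new_state = min(max(state + action, 0), N_STATES - 1)
        # Reward reaching the goal; penalize every extra step slightly.
        reward = 1.0 if new_state == N_STATES - 1 else -0.1
        best_next = max(q[(new_state, a)] for a in ACTIONS)
        # Nudge the value of (state, action) toward reward + future value.
        q[(state, action)] += 0.5 * (reward + 0.9 * best_next - q[(state, action)])
        state = new_state

# After training, the agent prefers stepping right in every square.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

Nobody tells the agent "go right"; it discovers that on its own because rightward steps eventually lead to the reward.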

Computer Vision

Definition: Computer vision is a field of AI that trains computers to interpret and understand the visual world. By using digital images from cameras and videos, and deep learning models, machines can accurately identify and classify objects.

Example: Facial recognition technology used in security systems and smartphones.

Sources:

These definitions and examples should help teachers grasp the fundamental concepts of AI and engage in informed discussions about its applications and implications in education.


Representation of how generative AI works, shown through a multi-leveled structure.

Next I wanted to ensure that I was providing enough information so that you could have at least a surface-level understanding of how all of this actually works, so I prompted ChatGPT, "Can you explain how generative AI works? Tools like ChatGPT and other LLMs, text to image generators, text to music and video generators, things like that?" Here's what it had to say:

Sure, Lauren! Here's a simplified explanation of how generative AI works, focusing on tools like ChatGPT, text-to-image generators, and other creative AI systems.


How Generative AI Works

1. Foundations of Generative AI:

Generative AI models are designed to create new content based on patterns learned from vast amounts of data. They use a type of deep learning called neural networks, specifically designed to handle large and complex datasets.

Large Language Models (LLMs) like ChatGPT

How They Work:

  • Training Phase: LLMs like ChatGPT are trained on massive datasets containing text from books, websites, and other written materials. During training, the model learns to predict the next word in a sentence, which helps it understand grammar, facts, and some level of reasoning.

  • Transformer Architecture: These models use a neural network architecture called transformers, which excel at handling sequential data like text. Transformers have attention mechanisms that allow the model to focus on different parts of the text, making it more effective at understanding context and generating coherent responses.

  • Generation Phase: When you interact with ChatGPT, the model generates responses based on the input it receives. It predicts the most likely sequence of words that should follow, ensuring the response is contextually relevant and grammatically correct.

Example: When you ask ChatGPT a question, it uses its training to generate a response that is relevant and informative, based on patterns and information it learned from the vast amounts of text it was trained on.
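The "predict the next word" idea above can be demonstrated with a toy model that does nothing but count word pairs. This is a drastic simplification invented for illustration, and the tiny training text is made up; real LLMs learn far richer patterns from billions of words, but the core task is recognizably the same:

```python
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat the cat chased the mouse "
    "the dog sat on the rug"
)

# Count word pairs: follows["the"] tallies every word seen after "the".
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in the toy text
```

A model like ChatGPT is doing something conceptually similar, except it considers the entire preceding conversation (not just one word) and weighs billions of learned patterns before choosing each next word.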

Text-to-Image Generators (e.g., DALL-E)

How They Work:

  • Training Phase: Text-to-image models like DALL-E are trained on datasets that pair images with descriptive text. The model learns to associate specific visual features with corresponding words and phrases.

  • Transformer and Convolutional Networks: These models use a combination of transformer architecture (for understanding text) and convolutional neural networks (CNNs) (for handling image data). This combination allows the model to understand textual descriptions and generate corresponding images.

  • Generation Phase: When given a text prompt, the model generates an image that matches the description by piecing together visual elements it has learned during training.

Example: If you input "a cat wearing a space helmet," the text-to-image generator will create an image of a cat with a space helmet, combining its understanding of "cat," "wearing," and "space helmet" from the training data.

Text-to-Music and Text-to-Video Generators

How They Work:

  • Training Phase: These models are trained on large datasets of music or video paired with descriptive text or other relevant metadata. They learn to recognize patterns in how certain words or phrases correspond to musical notes, video frames, and scenes.

  • Specialized Neural Networks: These systems often use specialized architectures suitable for handling temporal data, like recurrent neural networks (RNNs) or long short-term memory networks (LSTMs), which are good at processing sequences over time.

  • Generation Phase: When given a text prompt, the model generates a piece of music or a video clip by predicting sequences of notes, sounds, or video frames that match the description.

Example: For a text prompt like "upbeat electronic music for a party," a text-to-music generator creates a corresponding music track that captures the upbeat, electronic, and party-like qualities described.

Challenges and Considerations:

  • Data Quality: The quality of the generated content heavily depends on the quality and diversity of the training data. Biases or gaps in the training data can lead to biased or inaccurate outputs.

  • Computational Resources: Training these models requires significant computational power and resources, making it accessible primarily to large organizations or research institutions.

  • Ethical Considerations: There are ethical concerns related to the use of generative AI, such as the potential for creating misleading or harmful content, intellectual property issues, and the need for transparency about AI-generated materials.

Sources for Further Reading:

These explanations and examples should help teachers understand the basics of how generative AI works and its various applications in education and beyond.


People working in a collaborative space of the future. There are robots present, and people are all working on computers.

At this point, I felt like I had a good foundation for a blog post that explained what AI is in terms that we can understand it. So, as a last prompt, I asked it, "Okay, so can you take all of this information and generate a blog post that is written at an early college reading level that would take someone 5 to 7 minutes to read. I want to ensure that all the important vocabulary is defined and explained, the history of AI with some key details is included, and how we have moved from the 1960s to where we are today. I want to ensure readers have more than a surface level understanding of the generative AI tools that are widely available today and how they can be used to streamline workflows and help students learn and achieve."


Here was the final result:


Understanding AI: From the 1960s to Today

What is AI?

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines designed to think and learn like humans. AI enables computers to perform tasks that typically require human intelligence, such as understanding natural language, recognizing images, solving problems, and making decisions.

The Evolution of AI

The Beginning of AI

The journey of AI began in the 1950s. In 1956, a group of scientists held the Dartmouth Conference, where they coined the term "Artificial Intelligence." They aimed to create machines capable of performing tasks that would require intelligence if done by humans.

Machine Learning

In the 1980s, AI research made significant strides with the development of Machine Learning. This approach allows computers to learn from data rather than being explicitly programmed. A famous AI milestone from the 1990s is IBM's Deep Blue, which defeated world chess champion Garry Kasparov in 1997 (though Deep Blue relied mostly on brute-force search and handcrafted chess knowledge rather than learning from data).

Deep Learning

Around 2006, researchers introduced Deep Learning, a more advanced form of machine learning. Deep learning utilizes neural networks with multiple layers, enabling the computer to analyze complex patterns and large datasets. This technology powers many modern AI applications, such as image and speech recognition.

Generative AI

Generative AI, a cutting-edge technology, can create new content, including text, images, music, and videos. A prominent example is OpenAI's GPT-3, released in 2020. GPT stands for "Generative Pre-trained Transformer," and it can generate human-like text based on its extensive training data.

Large Language Models (LLMs)

LLMs are advanced AI models trained on vast amounts of text data. These models are capable of understanding and generating human language. Examples of LLMs include:

  • GPT-3 by OpenAI: Known for its impressive text generation capabilities.

  • BERT by Google: Used primarily for understanding the context of words in search queries.

  • T5 by Google: A text-to-text transformer model that can translate, summarize, and answer questions.

How LLMs are Created:

  1. Data Collection: LLMs are trained on extensive datasets containing text from books, websites, articles, and other written materials.

  2. Pre-training: The model learns to predict the next word in a sentence, which helps it understand grammar, context, and facts.

  3. Fine-tuning: The model is further trained on specific tasks to refine its performance and make it suitable for particular applications.

How They Work:

  • Training Phase: During this phase, the model learns from the data by recognizing patterns and relationships between words and phrases.

  • Transformer Architecture: LLMs use transformers, a type of neural network architecture that excels at processing sequential data like text. Transformers have attention mechanisms that allow the model to focus on different parts of the text, enhancing its understanding and generation capabilities.

  • Generation Phase: When prompted with a question or task, LLMs generate responses based on their training, ensuring the output is contextually relevant and coherent.
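The "attention" mechanism mentioned above can be sketched in a few lines of Python. This is a bare-bones illustration with made-up two-number "word vectors," not how a real transformer is implemented (real models use learned, high-dimensional vectors and many attention heads in parallel):

```python
import math

def softmax(scores):
    """Turn raw scores into weights that are positive and sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    """Weight each value by how well its key matches the query, then mix."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    size = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(size)]

# Invented toy "word vectors" for the sentence ["the", "cat", "sat"].
vectors = [[0.1, 0.0], [0.9, 0.2], [0.3, 0.8]]

# The word "sat" attends over the whole sentence, drawing most heavily
# on whichever word's vector best matches its own.
mixed = attend(vectors[2], vectors, vectors)
print(mixed)
```

That mixing step is what lets the model "focus on different parts of the text": words that matter more for the current prediction get larger weights.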

Text-to-Image Generators (e.g., DALL-E)

Text-to-image generators like DALL-E create images from textual descriptions. These models are trained on datasets that pair images with descriptive text, allowing them to generate visual content that matches a given prompt.

Text-to-Music and Text-to-Video Generators

These AI tools can create music or videos based on textual input. By understanding the relationships between text descriptions and corresponding audio or visual elements, these models produce creative works that align with the given prompts.

How AI Enhances Education

AI is revolutionizing education by offering personalized learning experiences and automating administrative tasks. Here are some applications:

  • Personalized Learning: AI tailors educational content to individual students, addressing their strengths and weaknesses.

  • Automated Grading: AI can quickly grade assignments and tests, giving teachers more time to focus on teaching.

  • Tutoring: AI-powered tutors provide additional support to students, helping them understand challenging subjects.

Key Vocabulary:

  1. Artificial Intelligence (AI): Simulation of human intelligence in machines.

  2. Machine Learning: Technique allowing computers to learn from data.

  3. Deep Learning: Advanced machine learning using neural networks with many layers.

  4. Generative AI: AI that creates new content.

  5. Large Language Models (LLMs): Advanced AI models for understanding and generating text.

  6. Metadata: Data providing information about other data, helping in organization and management.

Conclusion

AI has evolved remarkably since its inception in the 1950s. Today, generative AI tools like ChatGPT and DALL-E are transforming various fields, including education. Understanding these tools and their applications can help educators leverage AI to enhance teaching and learning experiences.

Sources for Further Reading:


 
Futuristic classroom of diverse students using simulations to learn.

Your Turn to Play!


Now it's your turn. Choose a Generative AI tool like ChatGPT, Perplexity, Claude, or Google Gemini (I actually pull all of these up in separate tabs and copy and paste my prompts into all of them and pick and choose the best responses). Use the following series of prompts to plan your meals for next week.


Prompt: I have a family of {#}, and I need help making a meal plan. Can you generate a 7 day meal plan with breakfast, lunch, and dinner for my family?


Let the AI do its magic. What you'll probably notice is that it gives you some foods that you and your family don't like or can't eat. So let's refine things a bit.


Prompt: In my family we don't eat ___________________________, and we really like _____________. Our food allergies include ________________________. I only have around 20 minutes in the morning to prepare and eat breakfast, lunches need to be able to be packed quickly, and dinners need to take no longer than 45 minutes to cook. Please regenerate the meal plan with this new information. (You may also want to include if you have any gadgets like a slow cooker, Instant Pot, or air fryer).
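If you find yourself pasting the same prompt into several tools (as I do with my multiple-tabs trick), a tiny script can fill in the blanks for you. This helper is just a convenience sketch I made up for illustration, and the wording loosely mirrors the prompts above; adjust it to taste:

```python
def meal_plan_prompt(family_size, dislikes, likes, allergies, gadgets=""):
    """Fill in the blanks of the meal-plan prompt with your family's details."""
    prompt = (
        f"I have a family of {family_size}, and I need help making a meal plan. "
        "Can you generate a 7 day meal plan with breakfast, lunch, and dinner? "
        f"In my family we don't eat {dislikes}, and we really like {likes}. "
        f"Our food allergies include {allergies}. "
        "I only have around 20 minutes in the morning to prepare and eat "
        "breakfast, lunches need to pack quickly, and dinners need to take "
        "no longer than 45 minutes to cook."
    )
    if gadgets:
        prompt += f" We also have these gadgets: {gadgets}."
    return prompt

# Fill in your own details, then paste the result into each AI tool.
print(meal_plan_prompt(4, "mushrooms", "pasta", "peanuts", "an Instant Pot"))
```

This is also a sneak preview of prompt engineering: a good prompt is really a template with the important details filled in.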


You may have to prompt it a few more times, refining each time, to get a meal plan that perfectly meets your needs. Next use this prompt:


Prompt: Can you please put this meal plan into a chart form and at the bottom include any recipes that I'll need to make these meals.


Prompt: Now I need a grocery list for the week.


And voila! Within 10 minutes you have a 7-day meal plan customized to your family's needs, the accompanying recipes, and a shopping list. Now I want you to think about the implications for you as a teacher. What are some tasks that you could partner with AI to streamline? Keep thinking about that and exploring more AI tools until the next post, where I'll go into more detail about how to be an amazing prompt engineer and get to your desired results quicker!


Now I'm going to copy and paste this whole blog post back into ChatGPT and have it generate a TL;DR (Too Long; Didn't Read) synopsis to post back up at the top. 😁 I'm then going to use an AI image generator, probably Ideogram.ai or Adobe Firefly, to generate the images for the blog post. The last step before I publish is to copy and paste the whole thing into Claude 3 Haiku and have it proofread for misspelled words, grammatical errors, usage errors, etc., and make sure everything flows well and makes sense. I'd love to hear your initial thoughts about AI, so tell me one potential impact you see AI having on education in the comment box below.


