Artificial Intelligence for Simple Games

Learn how to use powerful Deep Reinforcement Learning and Artificial Intelligence tools through examples built on simple games!
File Size: 3.53 GB
Total Length: 12h 22m

Instructor: Jan Warchocki
Language: English
Last Update: 2/2021
Ratings: 4.7/5


What you’ll learn

SOLVE THE TRAVELLING SALESMAN PROBLEM
Understand and implement Genetic Algorithms
Get the general AI framework
Understand how to apply this tool to your own projects
SOLVE A COMPLEX MAZE
Understand and implement Q-Learning
Get the right Q-Learning intuition
Understand how to apply this tool to your own projects
SOLVE MOUNTAIN CAR FROM OPENAI GYM
Understand and implement Deep Q-Learning
Build Artificial Neural Networks with Keras
Use the environments provided in OpenAI Gym
Understand how to apply this tool to your own projects
SOLVE SNAKE
Understand and implement Deep Convolutional Q-Learning
Build Convolutional Neural Networks with Keras
Understand how to apply this tool to your own projects


Requirements

High school maths
Basic knowledge of programming, such as “if” conditions, “for” and “while” loops, etc.

Description

Ever wish you could harness the power of Deep Learning and Machine Learning to craft intelligent bots built for gaming? If you’re looking for a creative way to dive into Artificial Intelligence, then ‘Artificial Intelligence for Simple Games’ is your key to building lasting knowledge.

Learn and test your AI knowledge of fundamental DL and ML algorithms using the fun and flexible environment of simple games such as Snake, the Travelling Salesman Problem, mazes and more.

1. Whether you’re an absolute beginner or a seasoned Machine Learning expert, this course provides a solid foundation of the basic and advanced concepts you need to build AI within a gaming environment and beyond.
2. Key algorithms and concepts covered in this course include Genetic Algorithms, Q-Learning, and Deep Q-Learning with both Artificial Neural Networks and Convolutional Neural Networks.
3. Dive into SuperDataScience’s much-loved, interactive learning environment, designed to build knowledge and intuition gradually through practical yet challenging case studies.
4. Code flexibility means that students will be able to experiment with different game scenarios and easily apply their learning to business problems outside of the gaming industry.

‘AI for Simple Games’ Curriculum

Section #1 – Dive into Genetic Algorithms by applying the famous Travelling Salesman Problem to an intergalactic game. The challenge will be to build a spaceship that travels across all planets in the shortest time possible!

Section #2 – Learn the foundations of the model-free reinforcement learning algorithm, Q-Learning. Develop intuition and visualization skills, then try your hand at building a custom maze and designing an AI able to find its way out.

Section #3 – Go deep with Deep Q-Learning. Explore the fantastic world of Neural Networks using the OpenAI Gym development environment and learn how to build AIs for many other simple games!

Section #4 – Finish off the course by building your very own version of the classic game, Snake!
Here you’ll utilize Convolutional Neural Networks by building an AI that mimics the same behavior we see when playing Snake.

Overview

Section 1: Installation

Lecture 1 Installing Anaconda

Section 2: Get the materials

Lecture 2 Get the materials

Lecture 3 BONUS: Learning Path

Section 3: Genetic Algorithms Intuition

Lecture 4 Plan of Attack

Lecture 5 The DNA

Lecture 6 The Fitness Function

Lecture 7 The Population

Lecture 8 The Selection

Lecture 9 The Crossover

Lecture 10 The Mutation

Section 4: Genetic Algorithms Practical

Lecture 11 Step 1 – The Introduction

Lecture 12 Step 2 – Importing the libraries

Lecture 13 Step 3 – Creating the bots

Lecture 14 Step 4 – Initializing the random DNA

Lecture 15 Step 5 – Building the Crossover method

Lecture 16 Step 6 – Random Partial Mutations 1

Lecture 17 Step 7 – Random Partial Mutations 2

Lecture 18 Step 8 – Initializing the main code

Lecture 19 Step 9 – Creating the first population

Lecture 20 Step 10 – Starting the main loop

Lecture 21 Step 11 – Evaluating the population

Lecture 22 Step 12 – Sorting the population

Lecture 23 Step 13 – Adding best previous bots to the population

Lecture 24 Step 14 – Filling in the rest of the population

Lecture 25 Step 15 – Displaying the results

Lecture 26 Step 16 – Running the code
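The sixteen steps above outline a classic genetic-algorithm loop: create a random population, evaluate and sort it, carry the best bots over, then refill the rest through crossover and random partial mutations. A minimal sketch of that loop, assuming a toy fitness function (sum of genes) in place of the course's intergalactic travel score, with hypothetical parameter values:

```python
import random

DNA_LENGTH = 8        # genes per bot (hypothetical size)
POPULATION_SIZE = 20
N_ELITE = 4           # best bots carried into the next generation
MUTATION_RATE = 0.1

def random_dna():
    return [random.random() for _ in range(DNA_LENGTH)]

def fitness(dna):
    # Toy fitness: sum of genes; a real project would score a full game run
    return sum(dna)

def crossover(parent1, parent2):
    # One-point crossover: head from one parent, tail from the other
    point = random.randint(1, DNA_LENGTH - 1)
    return parent1[:point] + parent2[point:]

def mutate(dna):
    # Random partial mutation: each gene may be replaced with a fresh value
    return [random.random() if random.random() < MUTATION_RATE else g
            for g in dna]

population = [random_dna() for _ in range(POPULATION_SIZE)]
for generation in range(50):
    # Evaluate and sort the population, best first
    population.sort(key=fitness, reverse=True)
    # Keep the elite, then fill in the rest with mutated offspring
    next_generation = population[:N_ELITE]
    while len(next_generation) < POPULATION_SIZE:
        p1, p2 = random.sample(population[:N_ELITE], 2)
        next_generation.append(mutate(crossover(p1, p2)))
    population = next_generation

best = max(population, key=fitness)
print(f"best fitness after 50 generations: {fitness(best):.3f}")
```

Because the elite survive unchanged, the best fitness never decreases from one generation to the next, which is the property the sorting and "adding best previous bots" steps are there to guarantee.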

Section 5: Q-Learning

Lecture 27 Q-Learning Intuition: Plan of Attack

Lecture 28 Q-Learning Intuition: What is Reinforcement Learning?

Lecture 29 Q-Learning Intuition: The Bellman Equation

Lecture 30 Q-Learning Intuition: The Plan

Lecture 31 Q-Learning Intuition: Markov Decision Process

Lecture 32 Q-Learning Intuition: Policy vs Plan

Lecture 33 Q-Learning Intuition: Living Penalty

Lecture 34 Q-Learning Intuition: Q-Learning Intuition

Lecture 35 Q-Learning Intuition: Temporal Difference

Lecture 36 Q-Learning Intuition: Q-Learning Visualization

Section 6: Q-Learning Practical

Lecture 37 Step 1 – Introduction

Lecture 38 Step 2 – Importing the libraries

Lecture 39 Step 3 – Defining the parameters

Lecture 40 Step 4 – Environment and Q-Table initialization

Lecture 41 Step 5 – Preparing the Q-Learning process 1

Lecture 42 Step 6 – Preparing the Q-Learning process 2

Lecture 43 Step 7 – Starting the Q-Learning process

Lecture 44 Step 8 – Getting all playable actions

Lecture 45 Step 9 – Playing a random action

Lecture 46 Step 10 – Updating the Q-Value

Lecture 47 Step 11 – Displaying the results

Lecture 48 Step 12 – Running the code
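The steps above assemble tabular Q-Learning: a Q-table over (state, action) pairs updated with the temporal-difference rule Q(s, a) += alpha * (R + gamma * max Q(s', a') - Q(s, a)). A minimal sketch on a hypothetical 1-D corridor maze (states 0 to 4, with a reward only for reaching the exit), trained exactly as in the course's practical, by playing random actions:

```python
import random

import numpy as np

N_STATES = 5          # cells of a hypothetical 1-D maze; state 4 is the exit
ACTIONS = [-1, +1]    # move left or move right
GAMMA = 0.9           # discount factor
ALPHA = 0.5           # learning rate

Q = np.zeros((N_STATES, len(ACTIONS)))

for _ in range(1000):
    # Pick a random non-terminal state and play a random action
    state = random.randrange(N_STATES - 1)
    action_index = random.randrange(len(ACTIONS))
    next_state = min(max(state + ACTIONS[action_index], 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    # Temporal-difference update of the Q-value
    td_target = reward + GAMMA * Q[next_state].max()
    Q[state, action_index] += ALPHA * (td_target - Q[state, action_index])

# The learned policy should always move right, towards the exit
policy = [ACTIONS[int(np.argmax(Q[s]))] for s in range(N_STATES - 1)]
print(policy)
```

Even though training only ever plays random actions, reading the argmax of each row of the Q-table recovers the optimal policy, which is the key intuition behind off-policy Q-Learning.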

Section 7: Deep Q-Learning with ANNs

Lecture 49 Deep Q-Learning Intuition: Plan of Attack

Lecture 50 Deep Q-Learning Intuition: Step 1

Lecture 51 Deep Q-Learning Intuition: Step 2

Lecture 52 Deep Q-Learning Intuition: Experience Replay

Lecture 53 Deep Q-Learning Intuition: Action Selection Policies
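Of the action selection policies discussed in the last lecture, the one used throughout the practical sections is epsilon-greedy: with probability epsilon the AI explores a random action, otherwise it exploits the action with the highest predicted Q-value. A minimal sketch (the epsilon values below are illustrative; the training steps later also lower epsilon over time):

```python
import random

def select_action(q_values, epsilon):
    """Epsilon-greedy: explore with probability epsilon, else exploit."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    # Exploit: return the index of the highest Q-value
    return max(range(len(q_values)), key=lambda i: q_values[i])

# With epsilon = 0 the choice is purely greedy
print(select_action([0.1, 0.7, 0.2], epsilon=0.0))  # → 1
```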

Section 8: Deep Q-Learning Practical

Lecture 54 Step 1 – Introduction

Lecture 55 Step 2 – Brain – Importing the libraries

Lecture 56 Step 3 – Brain – Building the Brain class

Lecture 57 Step 4 – Brain – Creating the Neural Network

Lecture 58 Step 5 – DQN Memory – Initializing the Experience Replay Memory

Lecture 59 Step 6 – DQN Memory – Remembering new experience

Lecture 60 Step 7 – DQN Memory – Getting the batches of inputs and targets

Lecture 61 Step 8 – DQN Memory – Initializing the inputs and the targets

Lecture 62 Step 9 – DQN Memory – Extracting transitions from random experiences

Lecture 63 Step 10 – DQN Memory – Updating the inputs and the targets

Lecture 64 Step 11 – Training – Importing the libraries

Lecture 65 Step 12 – Training – Setting the parameters

Lecture 66 Step 13 – Training – Initializing the Environment, the Brain and the DQN

Lecture 67 Step 14 – Training – Starting the main loop

Lecture 68 Step 15 – Training – Starting to play the game

Lecture 69 Step 16 – Training – Taking an action

Lecture 70 Step 17 – Training – Updating the Environment

Lecture 71 Step 18 – Training – Adding new experience, training the AI, updating the current state

Lecture 72 Step 19 – Training – Lowering epsilon and displaying the results

Lecture 73 Step 20 – Running the code
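Steps 5–10 above build the heart of Deep Q-Learning: an Experience Replay Memory that stores (state, action, reward, next_state) transitions and turns random samples of them into (inputs, targets) batches for the network. A minimal numpy sketch, with a stand-in linear model in place of the Keras Brain built in steps 2–4 (all names and sizes here are illustrative):

```python
import random
from collections import deque

import numpy as np

GAMMA = 0.9
MEMORY_CAPACITY = 1000
STATE_SIZE, N_ACTIONS = 4, 2

# Stand-in for the Keras model: Q(s) = s @ W (a real Brain would be trained)
W = np.random.randn(STATE_SIZE, N_ACTIONS)
predict = lambda states: states @ W

memory = deque(maxlen=MEMORY_CAPACITY)  # oldest experiences drop out automatically

def remember(state, action, reward, next_state, done):
    memory.append((state, action, reward, next_state, done))

def get_batch(batch_size):
    """Sample random experiences and build (inputs, targets) for training."""
    batch = random.sample(memory, min(batch_size, len(memory)))
    inputs = np.array([state for state, _, _, _, _ in batch])
    targets = predict(inputs).copy()
    for i, (state, action, reward, next_state, done) in enumerate(batch):
        # Bellman target: r + gamma * max Q(s', a'), or just r at episode end
        q_next = 0.0 if done else predict(next_state[None, :]).max()
        targets[i, action] = reward + GAMMA * q_next
    return inputs, targets

# Fill the memory with random transitions and draw one training batch
for _ in range(50):
    s, s2 = np.random.randn(STATE_SIZE), np.random.randn(STATE_SIZE)
    remember(s, random.randrange(N_ACTIONS), random.random(), s2, False)
inputs, targets = get_batch(32)
print(inputs.shape, targets.shape)  # (32, 4) (32, 2)
```

Sampling transitions at random, rather than training on consecutive game frames, breaks the correlation between successive experiences; that is the motivation given in the Experience Replay intuition lecture.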

Section 9: Deep Convolutional Q-Learning

Lecture 74 Deep Convolutional Q-Learning Intuition: Plan of Attack

Lecture 75 Deep Convolutional Q-Learning Intuition: Deep Convolutional Q-Learning Intuition

Lecture 76 Deep Convolutional Q-Learning Intuition: Eligibility Trace

Section 10: Deep Convolutional Q-Learning Practical

Lecture 77 Step 1 – Introduction

Lecture 78 Step 2 – Brain – Importing the libraries

Lecture 79 Step 3 – Brain – Starting to build the Brain class

Lecture 80 Step 4 – Brain – Creating the neural network

Lecture 81 Step 5 – Brain – Building a method that will load a model

Lecture 82 Step 6 – DQN – Building the Experience Replay Memory

Lecture 83 Step 7 – Training – Importing the libraries

Lecture 84 Step 8 – Training – Defining the parameters

Lecture 85 Step 9 – Training – Initializing the Environment, the Brain and the DQN

Lecture 86 Step 10 – Training – Building a function to reset the current state

Lecture 87 Step 11 – Training – Starting the main loop

Lecture 88 Step 12 – Training – Resetting the Environment and starting to play the game

Lecture 89 Step 13 – Training – Selecting an action to play

Lecture 90 Step 14 – Training – Updating the environment

Lecture 91 Step 15 – Training – Remembering new experience and training the AI

Lecture 92 Step 16 – Training – Updating the score and current state

Lecture 93 Step 17 – Training – Updating the epsilon and saving the model

Lecture 94 Step 18 – Training – Displaying the results

Lecture 95 Step 19 – Testing – Importing the libraries

Lecture 96 Step 20 – Testing – Defining the parameters

Lecture 97 Step 21 – Testing – Initializing the Environment and the Brain

Lecture 98 Step 22 – Testing – Resetting current and next state and starting the main loop

Lecture 99 Step 23 – Testing – Resetting the game and starting to play the game

Lecture 100 Step 24 – Testing – Selecting an action to play

Lecture 101 Step 25 – Updating the environment and current state

Lecture 102 Step 26 – Running the code
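The convolutional variant differs from Section 8 mainly in its input: instead of a feature vector, the Brain sees a stack of the last few game frames, so the network can infer motion, which is what the "reset the current state" step above prepares. A minimal numpy sketch of that frame stacking (the board size and stack depth are illustrative, not the course's exact values):

```python
import numpy as np

FRAME_SHAPE = (10, 10)   # illustrative Snake board, one grayscale channel
N_LAST_FRAMES = 4        # how many past frames the CNN sees at once

def reset_states():
    # The current state starts as a stack of blank frames
    return np.zeros((N_LAST_FRAMES,) + FRAME_SHAPE)

def push_frame(state, frame):
    """Drop the oldest frame and append the newest one."""
    return np.concatenate([state[1:], frame[None, ...]], axis=0)

state = reset_states()
for step in range(6):
    frame = np.full(FRAME_SHAPE, float(step))  # stand-in for a rendered frame
    state = push_frame(state, frame)

print(state.shape)     # (4, 10, 10)
print(state[:, 0, 0])  # → [2. 3. 4. 5.] : the four most recent frames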

Section 11: ANNEX 1: Artificial Neural Networks

Lecture 103 Plan Of Attack

Lecture 104 The Neuron

Lecture 105 The Activation Function

Lecture 106 How do Neural Networks work?

Lecture 107 How do Neural Networks learn?

Lecture 108 Gradient Descent

Lecture 109 Stochastic Gradient Descent

Lecture 110 Back-Propagation

Section 12: ANNEX 2: Convolutional Neural Networks

Lecture 111 Plan Of Attack

Lecture 112 What are convolutional neural networks?

Lecture 113 Step 1 – Convolution Operation

Lecture 114 Step 1(b) – ReLU Layer

Lecture 115 Step 2 – Pooling

Lecture 116 Step 3 – Flattening

Lecture 117 Step 4 – Full Connection

Lecture 118 Summary

Lecture 119 Softmax & Cross-Entropy
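The two formulas from the final lecture can be checked numerically: softmax turns the network's raw outputs into probabilities, and cross-entropy measures how far those probabilities are from the true label. A short numpy sketch:

```python
import numpy as np

def softmax(z):
    # Subtracting the max keeps the exponentials numerically stable
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cross_entropy(probs, true_index):
    # Negative log-probability assigned to the correct class
    return -np.log(probs[true_index])

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs.round(3))                     # probabilities summing to 1
print(round(cross_entropy(probs, 0), 3))  # small loss: class 0 is likely
```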

Who this course is for:

Anyone interested in beginning their AI journey
Anyone interested in creating an AI for games
Anyone looking for flexible tools to solve many kinds of Artificial Intelligence problems
Data science enthusiasts looking to expand their knowledge of AI

Course Information:

Udemy | English | 12h 22m | 3.53 GB
Created by: Jan Warchocki
