Principles of Artificial Intelligence
The material to be covered each week and the assigned readings (along with online lecture notes, if available) are listed on this page. The study guide (including slides, notes, and readings) will be updated each week. The assigned readings are divided into required and optional (recommended) readings; notes from recitations are included where available. You will be responsible for the material covered in the lectures and in the assigned required readings. The optional readings are provided as a guide for those interested in exploring specific topics in greater depth.
Week 1
Overview of the course; Overview of artificial intelligence: What
is intelligence? What is artificial intelligence (AI)? History of AI;
Working hypothesis of AI. Introduction to intelligent
agents. Intelligent agents defined. Taxonomy of agents. Simple reflex
agents (memoryless agents); agents with limited memory; rational
agents; agents with goals; utility-driven agents.
Required Readings -- Artificial Intelligence
Week 2
Goal-Based Agents. Problem-solving as state space search. Formulation of state-space search
problems. Representing states and actions. Basic search algorithms and their properties: completeness,
optimality, space and time complexity. Breadth-first search,
depth-first search, backtracking search, depth-limited and iterative deepening search.
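To make the uninformed search algorithms above concrete, here is a minimal Python sketch of depth-limited and iterative deepening search; the toy graph, goal test, and function names are illustrative assumptions, not material from the assigned readings.

```python
# A minimal sketch of depth-limited and iterative deepening search over an
# explicit successor function; the example graph and goal test are hypothetical.

def depth_limited_search(state, goal_test, successors, limit, path=None):
    """Return a path to a goal state, or None if none is found within `limit`."""
    if path is None:
        path = [state]
    if goal_test(state):
        return path
    if limit == 0:
        return None
    for nxt in successors(state):
        if nxt in path:          # avoid cycles along the current path
            continue
        result = depth_limited_search(nxt, goal_test, successors, limit - 1, path + [nxt])
        if result is not None:
            return result
    return None

def iterative_deepening_search(start, goal_test, successors, max_depth=50):
    """Run depth-limited search with increasing depth limits."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, goal_test, successors, limit)
        if result is not None:
            return result
    return None

if __name__ == "__main__":
    # Hypothetical toy graph used only for illustration.
    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": []}
    print(iterative_deepening_search("A", lambda s: s == "E", lambda s: graph[s]))
```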
Heuristic search. Finding optimal solutions. Best-first search. A* Search: Adding Heuristics to Branch-and-Bound Search.
Completeness, Admissibility, and Optimality of the A*
algorithm. Design of admissible heuristic functions. Comparison of
heuristic functions ("informedness" of heuristics).
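As a concrete illustration of A* as heuristic-guided branch-and-bound, here is a minimal Python sketch; the example graph and the zero heuristic are illustrative assumptions only (any admissible heuristic can be supplied as h).

```python
# A minimal sketch of A* search on a weighted graph, assuming an admissible
# heuristic; the example graph below is hypothetical.
import heapq

def a_star(start, goal, neighbors, h):
    """neighbors(s) yields (successor, step_cost); h(s) estimates cost to goal."""
    frontier = [(h(start), 0, start, [start])]   # entries are (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

if __name__ == "__main__":
    # Hypothetical weighted graph; h is a (trivially admissible) zero heuristic,
    # which reduces A* to uniform-cost (branch-and-bound) search.
    graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)], "G": []}
    print(a_star("S", "G", lambda s: graph[s], lambda s: 0))
```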
Problem Solving through Problem Reduction. Searching AND-OR graphs.
A*-like admissible algorithm for searching AND-OR graphs.
Required readings
Recommended Materials
- G. Polya, How to Solve It, Princeton University Press, 1957.
- Pearl, J. (1988). Heuristics: Intelligent Search Strategies for
Computer Problem Solving.
Week 3
Pathless search problems. Hill climbing. Stochastic search: Metropolis Algorithm, Simulated Annealing, Genetic Algorithms.
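For concreteness, a minimal Python sketch of Metropolis-style simulated annealing for a pathless (local) search problem; the objective function, neighborhood, and cooling schedule are hypothetical choices used only for illustration.

```python
# A minimal sketch of simulated annealing for minimizing a real-valued
# objective; the objective and neighborhood below are hypothetical.
import math
import random

def simulated_annealing(objective, neighbor, x0, t0=1.0, cooling=0.995, steps=10_000):
    """Metropolis-style acceptance with a geometrically decreasing temperature."""
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        fy = objective(y)
        # Accept improvements always; accept worse moves with probability exp(-delta/t).
        if fy <= fx or random.random() < math.exp((fx - fy) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

if __name__ == "__main__":
    # Hypothetical 1-D objective with many local minima.
    f = lambda x: (x - 7) ** 2 + 10 * math.sin(x)
    step = lambda x: x + random.uniform(-1, 1)
    print(simulated_annealing(f, step, x0=0.0))
```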
Required readings
Recommended Materials
Week 4
Problem solving as
Constraint Satisfaction. Properties of constraint satisfaction
problems. Examples of constraint satisfaction problems.
Iterative
instantiation method for solving CSPs. Scene interpretation as
constraint propagation (Waltz's line labeling algorithm).
Node consistency, arc consistency, and related algorithms.
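As an illustration of arc consistency, here is a minimal Python sketch in the style of the AC-3 algorithm; the map-coloring instance at the end is a hypothetical example.

```python
# A minimal sketch of AC-3-style arc consistency for binary CSPs;
# the map-coloring instance at the bottom is a hypothetical example.
from collections import deque

def revise(domains, constraints, x, y):
    """Remove values of x that have no supporting value in y's domain."""
    removed = False
    for vx in set(domains[x]):
        if not any(constraints[(x, y)](vx, vy) for vy in domains[y]):
            domains[x].discard(vx)
            removed = True
    return removed

def ac3(domains, constraints):
    """domains: var -> set of values; constraints: (x, y) -> predicate(vx, vy).
    Prunes domains in place; returns False if some domain becomes empty."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False
            # Re-examine all arcs pointing into x (other than the one from y).
            for (u, v) in constraints:
                if v == x and u != y:
                    queue.append((u, v))
    return True

if __name__ == "__main__":
    # Hypothetical 3-variable map-coloring CSP with inequality constraints.
    doms = {"A": {"r", "g"}, "B": {"r"}, "C": {"r", "g", "b"}}
    neq = lambda a, b: a != b
    cons = {(x, y): neq for x in doms for y in doms if x != y}
    print(ac3(doms, cons), doms)
```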
Required readings
Recommended Materials
Weeks 5-6
Introduction to Knowledge Representation.
Logical Agents with explicit knowledge representation.
Knowledge representation using propositional logic; Review of
Propositional Logic: Propositional logic as a knowledge
representation language: Syntax and Semantics; Possible worlds
interpretation; Models and Logical notions of Truth and Falsehood;
Logical Entailment; Inference rules; Modus ponens; Soundness and
Completeness properties of inference. Modus Ponens is a sound
inference rule for Propositional logic, but is not complete. Extending
modus ponens - the resolution principle.
Logical Agents without explicit representation.
Comparison of logical agents with and without explicit representations.
FOPL (First-Order Predicate Logic). Ontological and epistemological commitments; syntax and semantics of FOPL.
Examples. Theorem proving in FOPL. Unification, instantiation, and entailment.
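To illustrate unification concretely, here is a minimal Python sketch of syntactic unification of first-order terms (the occurs check is omitted for brevity); the tuple-based term encoding and the Knows/Mother example are illustrative assumptions.

```python
# A minimal sketch of syntactic unification for first-order terms, with
# variables written as strings starting with '?'; terms use a hypothetical
# tuple encoding, e.g. ('Knows', '?x', 'John') for Knows(x, John).
# Note: the occurs check is omitted for brevity.

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    """Follow variable bindings until reaching a non-variable or unbound variable."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a most general unifier extending `subst`, or None on failure."""
    subst = {} if subst is None else dict(subst)
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        subst[a] = b
        return subst
    if is_var(b):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

if __name__ == "__main__":
    # Unify Knows(John, ?x) with Knows(?y, Mother(?y)):
    # expect ?y bound to John and ?x bound to Mother(?y), i.e. Mother(John).
    print(unify(("Knows", "John", "?x"), ("Knows", "?y", ("Mother", "?y"))))
```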
Required readings
Optional readings
- Concepts of Logical AI, by John McCarthy.
- DPLL Algorithm
- A New Method for Solving Hard Satisfiability Problems, Bart Selman, Hector Levesque, and David Mitchell. Proceedings of AAAI, 1992.
- Hard and Easy Distributions of SAT Problems, David Mitchell, Bart Selman, and Hector Levesque. Proceedings of AAAI, 1992.
- Using CSP Look-Back Techniques to Solve Real-World SAT Instances, Bayardo, R. and Schrag, R. In: Proceedings of AAAI, 1997.
- Vehicles: Experiments in Synthetic Psychology, Braitenberg, V. MIT Press, 1986.
- The Synthesis of Digital Machines With Provable Epistemic Properties, S. Rosenschein and L. Kaelbling (1986). In: Proceedings of the Conference on Theoretical Aspects of Reasoning about Knowledge (TARK).
Additional Information
- Genesereth, M. R., & Nilsson, N. J., Logical Foundations of Artificial Intelligence. Palo Alto, CA: Morgan Kaufmann (1987).
Weeks 7-8
Transformation of FOPL sentences into Clause Normal Form. Resolution by
refutation for First Order Predicate Logic. Examples. Automated
Theorem Proving. Search Control Strategies for
Theorem Proving. Unit Preference, Set of Support and related
approaches. Soundness and Completeness of Proof Procedures.
Semidecidability of FOPL and its implications. Brief discussion of Datalog (for deductive databases)
and Prolog (for logic programming).
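As a concrete, simplified illustration of resolution by refutation, here is a minimal Python sketch for the propositional case (the first-order procedure discussed above additionally requires clause-form conversion, unification, and standardizing variables apart); the clause encoding and the tiny knowledge base are illustrative assumptions.

```python
# A minimal sketch of resolution by refutation for propositional clauses;
# clauses are frozensets of literals, and a literal is a pair (symbol, polarity).
from itertools import combinations

def resolve(c1, c2):
    """Yield all resolvents of two clauses."""
    for (p, sign) in c1:
        if (p, not sign) in c2:
            yield frozenset((c1 - {(p, sign)}) | (c2 - {(p, not sign)}))

def resolution_refutation(kb_clauses, negated_query_clauses):
    """Return True if the KB entails the query, shown by deriving the empty
    clause from the KB together with the negated query."""
    clauses = set(kb_clauses) | set(negated_query_clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:                 # empty clause: contradiction found
                    return True
                new.add(r)
        if new <= clauses:                # no new clauses: query not entailed
            return False
        clauses |= new

if __name__ == "__main__":
    # KB: P and (P -> Q), i.e. clauses {P} and {~P, Q}; query Q, so add {~Q}.
    kb = [frozenset({("P", True)}), frozenset({("P", False), ("Q", True)})]
    negated_query = [frozenset({("Q", False)})]
    print(resolution_refutation(kb, negated_query))   # True
```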
Emerging Applications of Knowledge Representation. Semantics-Driven Applications. Ontologies. Information Integration. Service-Oriented Computing. Semantic Web. Brief overview of Ontology Languages: RDF, OWL. Description Logics: Syntax, Semantics, and Inference.
Required readings
Optional readings
Additional Information
- Knowledge Representation - Logical, Philosophical, and Computational Foundations, by John Sowa.
- Problems for Automated Theorem Provers, Geoff Sutcliffe
- Genesereth, M. R., & Nilsson, N. J., Logical Foundations of Artificial Intelligence. Palo Alto, CA: Morgan Kaufmann (1987).
- Davis, E. Representations of Commonsense Knowledge. Palo Alto, CA: Morgan Kaufmann (1990).
- Glossary of First Order Logic, P. Suber.
- Duffy, D. Principles of Automated Theorem Proving. New York: John Wiley and Sons (1991).
- Wos, L., Overbeek, R., Lusk, E., & Boyle, J., Automated Reasoning. New York, NY: McGraw-Hill (1992).
- Logic-Based Knowledge Representation Systems and Theorem Provers
Weeks 9-10
Representing and Reasoning Under Uncertainty.
Review of elements of probability. Probability spaces. Bayesian
(subjective) view of probability. Probabilities as measures of belief
conditioned on the agent's knowledge. Axioms of
probability. Conditional probability. Bayes theorem. Random Variables.
Independence. Probability Theory as a generalization of propositional logic.
Syntax and Semantics of a Knowledge Representation based on probability theory.
Sound inference procedure for probabilistic reasoning.
Independence and Conditional Independence. Exploiting independence relations for compact representation of probability distributions.
Introduction to Bayesian
Networks. Semantics of Bayesian Networks. D-separation. D-separation examples. Answering Independence Queries
Using D-Separation tests.
Probabilistic Inference Using Bayesian Networks.
Exact Inference Algorithms - Variable Elimination Algorithm; Message Passing Algorithm;
Junction Tree Algorithm. Complexity of Exact Bayesian Network Inference.
Approximate inference using stochastic simulation (sampling, rejection sampling, and likelihood-weighted sampling).
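To illustrate approximate inference by stochastic simulation, here is a minimal Python sketch of rejection sampling in a tiny, hypothetical three-variable Bayesian network; the network structure and CPT numbers are made up for illustration.

```python
# A minimal sketch of rejection sampling in a hypothetical chain-structured
# network Cloudy -> Rain -> WetGrass, with made-up CPT numbers.
import random

def sample_bernoulli(p):
    return random.random() < p

def prior_sample():
    """Sample all variables in topological order from their CPTs."""
    cloudy = sample_bernoulli(0.5)
    rain = sample_bernoulli(0.8 if cloudy else 0.2)
    wet = sample_bernoulli(0.9 if rain else 0.1)
    return {"Cloudy": cloudy, "Rain": rain, "WetGrass": wet}

def rejection_sample(query_var, evidence, n=100_000):
    """Estimate P(query_var = True | evidence) by discarding inconsistent samples."""
    consistent = accepted = 0
    for _ in range(n):
        s = prior_sample()
        if all(s[var] == val for var, val in evidence.items()):
            consistent += 1
            accepted += s[query_var]
    return accepted / consistent if consistent else float("nan")

if __name__ == "__main__":
    # Estimate P(Rain | WetGrass = True); the exact value for these made-up CPTs is 0.9.
    print(rejection_sample("Rain", {"WetGrass": True}))
```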
Required readings
- Chapters 13 and 14 from the Artificial Intelligence: A Modern
Approach textbook by Russell and Norvig.
- Lecture Slides, Vasant Honavar
- Tutorial on Bayesian Network Inference by S. Davies and A. Moore.
- Inference in Bayesian Networks: A Procedural Guide, Huang, C. and Darwiche, A. International Journal of Approximate Reasoning, Vol. 15, pp. 225-263.
- Causal Independence for Probability Assessment and Inference Using Bayesian Networks by D. Heckerman and J. Breese.
- Analysis of Three Bayesian Network Inference Algorithms: Variable Elimination, Likelihood Weighting, and Gibbs Sampling, Rose F. Liu and Rusmin Soetjipto, 2004.
- Probabilistic Relational Models, Getoor, L., Friedman, N., Koller, D., Pfeffer, A., and Taskar, B. (2007).
- Open Universe Probability Models, Milch, B. and Russell, S. (2010).
Optional readings
Additional Information
- Probabilistic Reasoning in Intelligent Systems: Networks of
Plausible Inference. Judea Pearl (1997).
- Castillo, E., Gutierrez, J. M., and Hadi, A. S. Expert Systems and Probabilistic Network Models. Berlin: Springer (1996).
- Cowell, R. G., Lauritzen, S. L., and Spiegelhalter, D. J. Probabilistic Networks and Expert Systems. Berlin: Springer (2005).
- Korb, K. B., and Nicholson, A. E., Bayesian Artificial Intelligence, Chapman and Hall (2004).
Week 11
Making Simple Decisions under Uncertainty. Elements of utility theory. Constraints on rational preferences. Utility functions. Utility elicitation. Multi-attribute utility functions and utility independence. Decision networks. Value of information.
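As a concrete illustration of choosing actions by maximum expected utility, here is a minimal Python sketch; the umbrella decision, its probabilities, and its utilities are hypothetical numbers used only for illustration.

```python
# A minimal sketch of maximum-expected-utility decision making for a single
# decision with one chance variable; the numbers below are hypothetical.

def expected_utility(action, outcome_probs, utility):
    """EU(a) = sum over outcomes o of P(o | a) * U(a, o)."""
    return sum(p * utility(action, o) for o, p in outcome_probs[action].items())

def best_action(actions, outcome_probs, utility):
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utility))

if __name__ == "__main__":
    # Decision: take an umbrella or not; chance variable: whether it rains.
    probs = {"umbrella": {"rain": 0.3, "dry": 0.7},
             "no_umbrella": {"rain": 0.3, "dry": 0.7}}
    util = lambda a, o: {("umbrella", "rain"): 70, ("umbrella", "dry"): 80,
                         ("no_umbrella", "rain"): 0, ("no_umbrella", "dry"): 100}[(a, o)]
    print(best_action(["umbrella", "no_umbrella"], probs, util))   # 'umbrella'
```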
Required readings
- Chapter 16 from the Artificial Intelligence: A Modern
Approach textbook by Russell and Norvig.
- Lecture Slides
Optional Readings
Additional Information
- French, S. Decision Theory: An Introduction to the Mathematics of Rationality. Ellis Horwood, 1986.
- Luce, R. D., and Raiffa, H. (1957). Games and Decisions. New York: John Wiley and Sons. Dover republication 1989.
- Korb, K. B., and Nicholson, A. E., Bayesian Artificial Intelligence, Chapman and Hall (2004).
- Ralph L. Keeney and Howard Raiffa, Decisions with Multiple Objectives: Preferences and Value Trade Offs, Wiley, New York, 1993.
Week 12
Planning. Representation Language for Planning Problems. Representing actions and their effects. STRIPS and its derivatives. Examples.
Planning with State-Space Search. Forward (Progression) and Backward (Regression) search. Heuristic State-space search.
Partial Order Planning. Planning Graphs. Graph-Plan Algorithm. Planning using SAT solvers.
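To make forward (progression) state-space planning concrete, here is a minimal Python sketch using STRIPS-style actions over sets of ground atoms; the two toy operators form a hypothetical domain, not a PDDL implementation.

```python
# A minimal sketch of forward (progression) planning with STRIPS-style actions;
# states are frozensets of atoms, and the toy operators below are hypothetical.
from collections import deque

# An action is (name, preconditions, add_list, delete_list).
ACTIONS = [
    ("pick_A", frozenset({"on_table_A", "hand_empty"}), frozenset({"holding_A"}),
     frozenset({"on_table_A", "hand_empty"})),
    ("stack_A_on_B", frozenset({"holding_A", "clear_B"}), frozenset({"on_A_B", "hand_empty"}),
     frozenset({"holding_A", "clear_B"})),
]

def applicable(state, action):
    return action[1] <= state

def apply_action(state, action):
    return (state - action[3]) | action[2]

def forward_plan(initial, goal):
    """Breadth-first progression search to any state satisfying the goal."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for a in ACTIONS:
            if applicable(state, a):
                nxt = apply_action(state, a)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [a[0]]))
    return None

if __name__ == "__main__":
    init = frozenset({"on_table_A", "clear_B", "hand_empty"})
    print(forward_plan(init, frozenset({"on_A_B"})))   # ['pick_A', 'stack_A_on_B']
```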
Required readings
- Chapter 11 from the Artificial Intelligence: A Modern Approach textbook by Russell and Norvig.
- Lecture Slides.
Optional readings
- Weld, D. Recent Advances in Planning, AI Magazine, 1999.
- A Concise Introduction to Models and Methods for Automated Planning, Hector Geffner and Blai Bonet, Morgan & Claypool, 2013.
- An Introduction to the Planning Domain Definition Language, Patrik Haslum, Nir Lipovetzky, Daniele Magazzeni, and Christian Muise, Morgan & Claypool, 2019.
Additional Information
Week 13
Markov Decision Processes and Sequential Decision Problems.
Reinforcement Learning. Agents that learn by exploring and interacting with environments. Examples of reinforcement learning scenarios. Markov decision processes. Types of environments (e.g., deterministic versus stochastic state transition functions and reward functions, stationary versus non-stationary environments, etc.).
The credit assignment problem. The exploration vs. exploitation dilemma. Value Iteration algorithm. Policy Iteration algorithm. Q-learning algorithm. Convergence of Q-learning. Temporal Difference Learning algorithms.
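As a concrete illustration of solving a Markov decision process, here is a minimal Python sketch of value iteration; the two-state MDP, its transition probabilities, and its rewards are hypothetical.

```python
# A minimal sketch of value iteration for a small, fully specified MDP;
# the two-state example at the bottom is hypothetical.

def value_iteration(states, actions, transition, reward, gamma=0.9, eps=1e-6):
    """transition(s, a) -> list of (prob, next_state); reward(s, a, s2) -> float.
    Returns the (approximately) optimal value function and a greedy policy."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            q = [sum(p * (reward(s, a, s2) + gamma * V[s2]) for p, s2 in transition(s, a))
                 for a in actions(s)]
            new_v = max(q) if q else 0.0
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < eps:
            break
    policy = {s: max(actions(s), key=lambda a: sum(p * (reward(s, a, s2) + gamma * V[s2])
                                                   for p, s2 in transition(s, a)))
              for s in states if actions(s)}
    return V, policy

if __name__ == "__main__":
    # Hypothetical 2-state MDP: 'stay' is safe, 'jump' is risky but sometimes better.
    states = ["low", "high"]
    acts = lambda s: ["stay", "jump"]
    def T(s, a):
        if a == "stay":
            return [(1.0, s)]
        return [(0.7, "high"), (0.3, "low")]
    R = lambda s, a, s2: (1.0 if s2 == "high" else 0.0) - (0.1 if a == "jump" else 0.0)
    print(value_iteration(states, acts, T, R))
```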
Required readings
Optional readings
- Kaelbling, L., Littman, M., and Moore, A. (1996). Reinforcement Learning: A Survey. Journal of Artificial Intelligence Research.
- J. Rennie and A. McCallum. Using Reinforcement Learning to Spider the Web Efficiently. Proceedings of the Sixteenth International Conference on Machine Learning, 1999.
- Singh, S., Littman, M., Jong, N. K., Pardo, D., and Stone, P. (2003). Learning predictive state representations. In: Proceedings of the International Conference on Machine Learning (ICML 2003).
- Singh, S., Barto, A., and Chentanez, N. (2005). Intrinsically motivated reinforcement learning. In: Advances in Neural Information Processing Systems (NIPS 2005).
- Barto, A., and Mahadevan, S. Hierarchical Reinforcement Learning. Draft.
- Schultz, W., Dayan, P., and Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275, 1593-1599.
Additional Information
- Sutton, R. and Barto, A. (1998). Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.
- Mahadevan, S. Spatio-Temporal Abstraction of Stochastic Sequential Processes.
- T. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. JAIR, 13:227-303, 2000.
- Mark Humphrys. Action Selection Methods Using Reinforcement Learning. PhD thesis, University of Cambridge, June 1997.
- Even-Dar, E. and Mansour, Y. (2001). Convergence of Optimistic and Incremental Q-Learning. NIPS 2001.
Week 14
Optional Topics:
- Causal Models. The importance of causal models for making sense of data. Causal calculus. Confounders and causal inference. How to identify confounders. Inferring causal models from data.
- Algorithmic fairness and accountability.
Required readings
Optional readings
Additional Information
- Causal Inference in Statistics: A Primer. Judea Pearl, Madelyn Glymour, and Nicholas Jewell. Wiley.
- Cause and Correlation in Biology. Bill Shipley. Cambridge University Press.