This assignment is worth 20 points and has two parts.
Implement uniform-cost search in Java.
To begin, first ensure that you have Java version 11 or higher. Clone the ai_1 repository, which contains files for this course's assignments. Look in the src/search subdirectory, which has source files for classes described in this section.
You will write a general-purpose implementation that can search any problem that implements this interface:
// S = state type, A = action type
interface Problem<S, A> {
    S initialState();
    List<A> actions(S state);
    S result(S state, A action);
    boolean isGoal(S state);
    double cost(S state, A action);
}
Build your implementation in src/Ucs.java, in this class:
// uniform-cost search
class Ucs<S, A> {
    public static <S, A> Solution<S, A> search(Problem<S, A> prob) {
        … your implementation here …
    }
}
The search() method should return a Solution<S, A> if a solution was found, otherwise null. Solution<S, A> contains these fields:
class Solution<S, A> {
    public List<A> actions;   // series of actions from start state to goal state
    public S goalState;       // goal state that was reached
    public double pathCost;   // total cost from start state to goal
    …
}
Your search method could be used like this, for example:
Problem<Integer, Integer> p = new MyPuzzle();
Solution<Integer, Integer> result = Ucs.search(p);
The src/problems subdirectory includes several Problem instances that you can use for testing:
Empty.java – a problem with a single state, which is also the goal
Graph.java – a small undirected graph
Line.java – a one-dimensional state search problem
Grid.java – a two-dimensional state search problem
It would be wise to ensure that your uniform-cost search implementation works on these problems before attempting to use it in your Ms. Pac-Man agent. Run the tests in UcsTest.main().
An overview of the Java collections framework will be helpful.
Create a class Node<S, A> that will hold nodes for the search. Use a PriorityQueue<Node<S, A>> to store the frontier. Make your Node class implement Comparable<Node<S, A>> and give it a compareTo method that compares nodes by path cost.
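For example, a minimal Node class might look like this (the exact fields are up to you; the parent and action fields let you reconstruct the solution at the end):

// One node in the search tree: a state plus information about how it was reached.
class Node<S, A> implements Comparable<Node<S, A>> {
    S state;
    Node<S, A> parent;  // preceding node, or null for the start node
    A action;           // action that led from parent to this node
    double pathCost;    // total cost from the start state to this state

    Node(S state, Node<S, A> parent, A action, double pathCost) {
        this.state = state;
        this.parent = parent;
        this.action = action;
        this.pathCost = pathCost;
    }

    // Order nodes by path cost so the PriorityQueue pops the cheapest node first.
    public int compareTo(Node<S, A> other) {
        return Double.compare(pathCost, other.pathCost);
    }
}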
To store the explored set, use a data structure that lets you quickly check whether a state is in the set, e.g. a HashSet<S>. (You may assume that the state class S in any Problem instance correctly implements hashCode() and equals(), so you may use states as hash keys.)
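For example, if you were ever to write a custom state class of your own, it would need to override equals() and hashCode() consistently, along these lines:

// A state class used as a hash key must override equals() and hashCode().
class Position {
    final int x, y;
    Position(int x, int y) { this.x = x; this.y = y; }

    @Override public boolean equals(Object o) {
        return o instanceof Position && ((Position) o).x == x && ((Position) o).y == y;
    }
    @Override public int hashCode() { return 31 * x + y; }
}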
Your algorithm will sometimes discover a new path to a state that is already in the frontier. There are several possible approaches you could take in this situation:
1. Check whether the new path is cheaper than the existing path. If it is, you want to record it, but you cannot simply update the fields of the existing frontier node, since that will not change its position in the priority queue. Instead, you can remove the existing frontier node from the queue and add a new node representing the new and cheaper path. With this approach, you will never have two nodes in the queue that represent the same state. However, removing an existing node is potentially slow, since the remove method of PriorityQueue runs in O(N).
2. Check whether the new path is cheaper than the existing path. If it is, then add a new node representing the new and cheaper path, but leave the existing node in place in the queue as well, to avoid the expensive remove operation. With this approach, there may be multiple nodes in the queue that have the same state. So whenever you remove a node from the queue, you must check whether its state is already in the explored set. If so, you have already handled that state, so you should ignore the node.
3. Always add a new node representing the new path, even if it is more expensive than the existing path. With this approach, you will have more duplicate nodes in the queue than in approach (2). As in approach (2), when you remove a node from the queue you will need to check whether it has already been explored.
With approach (1) or (2), you will need some way to quickly check whether a given state is in the frontier so that you can find its existing path cost. One way to do that is to use a HashMap<S, Node<S, A>> to keep a map from states to frontier nodes.
I recommend approach (2) or (3), since (1) will become slow as the frontier grows large.
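To make approach (2) concrete, here is one possible shape for the main loop. This is a sketch, not a complete solution: it assumes the Node class sketched above, imports from java.util, and a makeSolution helper (left to you) that walks the parent links to build the action list. It also uses a HashMap from states to costs rather than to nodes.

// A sketch of search() using approach (2): duplicate nodes are allowed in
// the frontier, and stale nodes are skipped when they are popped.
public static <S, A> Solution<S, A> search(Problem<S, A> prob) {
    PriorityQueue<Node<S, A>> frontier = new PriorityQueue<>();
    Set<S> explored = new HashSet<>();
    Map<S, Double> bestCost = new HashMap<>();  // cheapest known cost to each frontier state

    S start = prob.initialState();
    frontier.add(new Node<>(start, null, null, 0.0));
    bestCost.put(start, 0.0);

    while (!frontier.isEmpty()) {
        Node<S, A> node = frontier.poll();
        if (explored.contains(node.state))
            continue;                      // a cheaper duplicate was handled earlier
        if (prob.isGoal(node.state))
            return makeSolution(node);     // hypothetical helper: walk parent links
        explored.add(node.state);

        for (A action : prob.actions(node.state)) {
            S next = prob.result(node.state, action);
            double cost = node.pathCost + prob.cost(node.state, action);
            Double known = bestCost.get(next);
            if (!explored.contains(next) && (known == null || cost < known)) {
                bestCost.put(next, cost);
                frontier.add(new Node<>(next, node, action, cost));
            }
        }
    }
    return null;  // the frontier is empty, so no solution exists
}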
In this part of the assignment, you will write an agent that plays Ms. Pac-Man. Your uniform-cost search implementation from part (1) above may be helpful.
Clone the Ms. Pac-Man source code from the repository at MsPacMan-vs-Ghosts-AI. When you visit that page on GitHub, be sure to read the entire page – it has lots of useful information about the game, including a link to the API documentation.
Write your agent in src/MyAgent.java, which initially holds a dummy implementation.
Assuming you want to use uniform-cost search, I recommend the following basic approach. In your agent code, create an instance of the Problem<S, A> interface above and call Ucs.search() on every game tick. Perform a uniform-cost search over the physical maze graph, not the abstract tree of game states. You can think of the physical graph as a simplified state space in which each state contains only Ms. Pac-Man's position, not additional game state such as the ghost positions.
In the API, each position in the maze is a node in the maze graph. The initial node for your search should be Ms. Pac-Man's current position. Usually every pellet can be a goal node, except perhaps a pellet that is close to a non-edible ghost. Edible ghosts and fruits are also great goals to aim for.
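The skeleton of such a Problem instance might look like the following. This is illustrative only: the methods called on game below are placeholders, not the real API, so consult the API documentation for the actual calls.

// Illustrative skeleton: states are maze node indices, actions are directions.
// All methods called on `game` are placeholders for the real API.
class MazeProblem implements Problem<Integer, Integer> {
    Game game;  // the game state, captured at the start of the tick

    MazeProblem(Game game) { this.game = game; }

    public Integer initialState() {
        return game.getPacManNode();          // placeholder: Ms. Pac-Man's position
    }
    public List<Integer> actions(Integer node) {
        return game.getPossibleDirections(node);  // placeholder
    }
    public Integer result(Integer node, Integer dir) {
        return game.getNeighbor(node, dir);   // placeholder
    }
    public boolean isGoal(Integer node) {
        return game.hasPellet(node);          // placeholder; refine to avoid ghosts
    }
    public double cost(Integer node, Integer dir) {
        return 1.0;  // see the cost discussion below
    }
}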
When Ucs.search() returns a Solution, the first action in the solution is the direction that Ms. Pac-Man should go.
Assign costs to graph edges as necessary to avoid going along undesirable paths. Any edge that ends at a non-edible ghost should have a very high cost. You may also want to give high costs to edges that are close to ghost positions, or that move in a direction toward the ghosts, or that reach goal nodes that are too close to ghosts. If you want to be smarter, consider not only where the ghosts are now, but where they are heading.
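For instance, a cost function in the spirit of the MazeProblem sketch above might look like this, where distanceToNearestGhost is a hypothetical helper you would implement yourself using the API:

// Illustrative cost function: ordinary steps cost 1; steps near a non-edible
// ghost cost far more. distanceToNearestGhost is a hypothetical helper.
public double cost(Integer node, Integer dir) {
    Integer next = result(node, dir);
    int d = distanceToNearestGhost(next);  // maze distance to the closest non-edible ghost
    if (d == 0) return 1_000_000;          // stepping onto a ghost: effectively forbidden
    if (d < 5)  return 100;                // uncomfortably close: strongly discouraged
    return 1;                              // a normal step
}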
Remember that uniform-cost search (i.e. Dijkstra's algorithm) does not work with negative edge weights. So don't be tempted to assign negative weights to edges that lead to places you want to go, such as power pellets – that won't work. A better approach is to make these goal nodes.
The game runs at 25 frames per second, so your tick() method will need to return in under 40 ms. There are about 1,000 nodes in the graph, which is a small enough number that you can actually perform a uniform-cost search over this graph in this amount of time, as long as you don't do too much work in your path cost function. When you run your agent, if you see messages of the form
[SIMULATOR] PacMan is still thinking!
then your agent is sometimes failing to respond in 40 ms, and you will need to make it faster. Be cautious about using the getGhostPathDistance() function, which can eat up a lot of time as mentioned in the API documentation.
To gain insight into your agent's behavior, use the GameView.addPoints() method to draw the path that Ms. Pac-Man is currently following. You can generate this path from the list of actions in the Solution that you generate on each tick. You may even want to draw different edges of the path in different colors to indicate the cost (i.e. danger) along those edges.
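For example, you might do something like this each tick. The sketch assumes a GameView.addPoints(Game, Color, int...) signature, so verify it against the API documentation; nodesAlongSolution is a hypothetical helper that replays the solution's actions to collect the node indices along the path.

// Sketch: draw the currently planned path in green each tick.
// nodesAlongSolution is a hypothetical helper; check addPoints' real signature.
List<Integer> nodes = nodesAlongSolution(game, solution);
int[] path = nodes.stream().mapToInt(Integer::intValue).toArray();
GameView.addPoints(game, Color.GREEN, path);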
Your Ms. Pac-Man agent may want to use your UCS code, which lives in a different project (i.e. the ai_1 project). Here are two ways to enable that:
You can copy the files and directories from ai_1/src into MsPacMan-vs-Ghosts-AI/src. Then all the ai_1 sources will be available inside the Ms. Pac-Man project.
This is the most direct method, though it's a bit inelegant. If you ever run 'git pull' in ai_1 to update the sources there, you may have to copy them into MsPacMan-vs-Ghosts-AI/src again. Furthermore, git will report all of these copied files as new inside the MsPacMan-vs-Ghosts-AI project unless you add them to .gitignore.
Alternatively, you can add the ai_1 project as a dependency of MsPacMan-vs-Ghosts-AI. To take this approach:
Edit MsPacMan-vs-Ghosts-AI/pom.xml and add the dependency there:
<project>
  ...
  <dependencies>
    <dependency>
      <groupId>cz.cuni.mff</groupId>
      <artifactId>ai-1</artifactId>
      <version>0.1</version>
    </dependency>
  </dependencies>
</project>
With this dependency in place, the Java compiler will be able to find the ai_1 source files when it compiles your Ms. Pac-Man agent. You will need to have both projects (ai_1 and MsPacMan-vs-Ghosts-AI) loaded in your IDE.
(Alternatively, your IDE may provide other ways to specify a project dependency. However this approach of adding the dependency to the Maven project file should work in any IDE.)
Edit the mspac (or mspac.bat) startup script and add ../ai_1/target/classes to the classpath there. The mspac script will now look like this:
java -cp target/classes:../ai_1/target/classes MsPacMan $*
Or, on Windows, mspac.bat will look like this:
java -cp target/classes;../ai_1/target/classes MsPacMan %*
ReCodEx will test your agent by playing 50 games, each with a different random seed. Note that in each game Ms. Pac-Man initially has 3 lives and may potentially go through many different levels.
To score 10 points for this part of the assignment, your agent must score an average of at least 12,000 points per game. ReCodEx will give you partial credit for lower scores, but will award no credit for any agent that averages less than 6,000 points per game.
To run the testing process yourself, run
$ ./mspac MyAgent -sim 50
This will run 50 random games and report your agent's average score on these games.
ReCodEx must be able to evaluate your agent on 50 games within a time limit of 100 seconds, so your agent must not use more than 2 seconds of CPU time in aggregate in an average game. This means that your average response time will probably have to be significantly less than the 40-ms limit, especially if your agent is often able to last through many levels.
All agents submitted before the soft deadline for this assignment will be entered into a tournament. Entries will be ranked by highest average score. In the tournament, I will evaluate agents by playing as many random games as necessary to ensure that the differences in average scores are statistically valid. Also, be aware that I will use a random starting seed, not seed 0 which is used by default by the -sim option.