In the first part of this lab you’ll assemble the proximity sensing and high-level processing subsystems on your robot: two forward-facing bump switches with whiskers, two side-facing infrared distance sensors, and the PandaBoard ARM processor which we have configured to run Ubuntu. The org will now communicate with the panda over a short on-board USB cable, which will take the place of the long USB cables we have previously used to connect to a remote host. The host will now communicate with the panda, either via a USB-to-serial cable or over the network (either 802.11b wifi or Ethernet).
In the second part of the lab you’ll write code for the panda to implement a form of local navigation based on the Bug2 algorithm. In the third part of the lab you’ll write code for the panda to implement a global navigation algorithm given a map. Both parts will use a library of supporting code we have prepared for the panda called the OHMM Interface.
The panda should boot once the SD card is inserted and it receives power. You should see the two LEDs near the SD card flash; you will likely become familiar with the pattern.
Important: once the panda has booted, always shut it down cleanly, just as you would any modern workstation-class computer. This ensures that the disk filesystem (which in this case is actually on an SD card) is left in a clean state. The panda gets power from the org, so anytime you shut down the org, the panda will also lose power. (It is ok to reset the org, as this does not affect its power supply circuitry.) The correct way to shut down the panda is to log in (see below) and run the command
> sudo shutdown -h now
Wait for the LED indicator lights near the SD card to stop flashing. It is then safe to remove power. Or, if you just want to reboot the panda:
> sudo shutdown -r now
We have configured Ubuntu 11.04 on the panda SD cards so that they are actually quite similar to the ohmmkeys (VMware is not involved here, of course). It is possible to use the PandaBoard with a standard USB keyboard, mouse, and HDMI or DVI-D (not VGA) display. The first two can be plugged into any free USB port on the panda. The display must connect to the HDMI connector further from the front of the robot, the one labeled HDMI-1080p. You may use either HDMI-to-HDMI or HDMI-to-DVI-D cables (VGA is unfortunately not supported without extra conversion electronics), but beware that the panda may have configuration issues with some monitors. When using a monitor, it is best to have it connected at boot. We have had good results with 1280x1024 LCD panels.
It is also possible to use the panda in a “headless” mode where we only connect to it over the network (it has both wifi and Ethernet connections) and via a serial terminal connected to the RS-232 serial port on the DB-9 connector at the rear of the board. Because most other modern computers no longer have true RS-232 serial hardware, we provide you with a USB-to-serial adapter cable. You again interact with this terminal using a terminal program such as kermit or minicom, but be aware that the port name will not be ACM1 here. The port name will depend on your system: typically on Linux it will be /dev/ttyUSB0, and on OSX it will be /dev/tty.usbserial. The adapter is based on the Prolific PL2303 chipset, which should work without extra drivers at least on modern Linux installations. For other OSes you may need to manually install drivers.
We have pre-configured an “ohmm” account on the pandas. Login with that account and then add users for each of your group members:
> sudo adduser LOGIN
where LOGIN is the desired username. Note: use the command “adduser”, not “useradd”. If you make a mistake, you can remove a user with
> sudo userdel -r LOGIN
but be very careful doing that, of course (you may omit the -r to leave the user’s files around at /home/LOGIN).
You may be familiar with GUI tools such as NetworkManager (NM) in Ubuntu for identifying and connecting to wireless networks. Normally you interact with NM graphically (via an icon in the task tray), but it is also possible to manipulate it from the command line. It runs as a daemon (background service) even when the panda is running headless.
Listing available connections: the command
> nmcli con list
will show a list of the “connections” that NM knows about. “Auto eth0” is the wired ethernet network; the remainder are wireless networks that NM remembers; we have pre-configured some of these to get you started. (NM usually sets the name of a wifi connection to “Auto FOO” where FOO is the network SSID; the “Auto” prefix seems to tell NM to automatically connect to that network whenever it is available, without asking for confirmation.)
Checking the current connection: NM is designed to automatically find the best available connection and use it; to see what it has selected, run
> nmcli con status
which will show the connection that NM has currently selected, and
> ifconfig
which will give the panda’s IP address, if the connection was successful (look for inet addr in the section under wlan0 for a wireless connection, or eth0 for wired).
Selecting a particular connection: this is done with the command
> nmcli con up id 'CONN'
where CONN is one of the connection names from con list, verbatim. (The quotes allow spaces in the connection name; for example, to connect to wired Ethernet use 'Auto eth0'.) This will work for networks without security, or for those that use a “secret” (password) that is the same for all users of the network (in which case the secret is remembered along with the network). For networks that require username+password login, we have provided a script nm-connect-802-1x; run it at the command line to get a usage description. For networks that require you to click buttons or fill in forms on a web page, use lynx. To bring down a connection, replace up with down.
Adding new connections is easiest by booting headful and using the normal GUI, where you select the desired network from a list and then enter your credentials. (You can also select “Connect to Hidden Wireless Network…” if the access point has been configured not to broadcast itself.) If you then go back to the NM icon and select Edit Connections -> Wireless -> connection -> Edit… -> Available to all users, NM will save the connection info to a file in /etc/NetworkManager/system-connections/, and it will automatically consider connecting to it at boot, even before any user has done a graphical login. This is called a system connection (vs. a user connection that would only be available to the currently logged-in user).
We have also provided a very experimental command line script nm-create-system-connection that can configure new system connections. Run it to get usage information.
We have prepared an archive file of source code that you’ll use in this lab. Follow similar instructions as for lab 0 to download and unpack it, but this time you will do this on the panda. Make sure that the unpacked lab2 directory is a sibling to your existing lab? directories that were unpacked from the prior lab tarballs.
If you are running the panda headless, use the following command to download a copy of the lab tarball:
> wget http://www.ccs.neu.edu/course/cs4610/LAB2/lab2.tar.gz
We ask you not to modify the files you unpack from the lab tarball. You will write code that refers to these files and that links to their generated libraries. Please do not work directly in the provided files, in part because we will likely be posting updates to the lab tarball.
In lab2/src/org/libpololu-avr run
> make
to build the Pololu AVR library.
In lab2/src/org/monitor run
> make
> make program
to build the OHMM monitor and flash the new monitor code to the org. This code includes a solution to the prior lab in the files drive.c and ohmm/drive.h; it is possible to use your own solution instead if you prefer (ask the course staff for details).
In lab2/src/host/ohmm run
> make
> make jar
to build OHMM.jar. (In the event that we distribute an updated lab tarball, you will need to re-make in org/libpololu-avr, org/monitor, and host/ohmm, re-program in org/monitor, and re-jar in host/ohmm.)
Running
> make project-javadoc
in lab2/src/host/ohmm will generate documentation for the OHMM Java library in lab2/src/host/ohmm/javadoc-OHMM (the top-level file is index.html); or you can view it online here.
The file OHMM.jar you just built is a Java library that runs on the panda (or any Java-capable host connected to the org) and communicates with the monitor program on the org. You will use it to communicate with the org from your own Java code.
For now, you can try it out by running the provided OHMMShell driver. This uses JScheme to implement a Scheme read-eval-print loop (REPL) command line. In the lab2/src/host/ohmm directory, invoke it like this:
> ./run-class OHMMShell -r /dev/ttyACM1
run-class is a shell script that uses the makefile to help build a Java command line, including classpath and other flags. (It assumes that there is a suitable makefile in the current directory.) The first argument, here OHMMShell, is the Java class containing the main() function, with or without a package prefix (here we could have also used ohmm.OHMMShell); when the package is omitted it is inferred from the project directory structure. The remaining arguments, here -r /dev/ttyACM1, are passed as command line arguments to main().
The -r PORTNAME invocation asks the OHMM library to use Java RXTX for serial port communication with the org. RXTX is not a standard Java library, but we have packaged versions of it in the lab2 tarball that should work at least on all Linux x86 machines, including virtual machines like the ohmmkeys, and on the PandaBoard. This is the recommended codepath; however, there are two alternates available for special situations or in case RXTX is not working. The -f option uses direct file I/O on the port file:
> ./stty.sh
> ./run-class OHMMShell -f /dev/ttyACM1
The -c option runs a proxy process to send and receive bytes to the org. One interesting possibility is to use netcat to communicate over the internet. On the “server” machine to which the org is physically connected, run
> ./stty.sh
> nc -l 1234 < /dev/ttyACM1 > /dev/ttyACM1
Then on any machine that can reach the server over the network, run
> ./run-class OHMMShell -c nc S 1234
where S is the DNS name or IP address of the server (you can test this by running the client and server on the same machine and using IP address 127.0.0.1).
One problem with both the file I/O and proxy process codepaths is that latencies can exceed 1 second.
First try a command like
> (e 14b)
$1 = 14B
which just asks the org monitor to echo back the given byte value 14b. Or run
> (df 1000.0f)
to add a 1000 mm forward drive command to the queue and start it (the robot will move!). The b and f suffixes force Scheme to interpret the numeric literals as byte and float datatypes, respectively; they are necessary so that JScheme can correctly match the scheme call to a Java function. If the suffixes were omitted, the literals would have been interpreted as int and double (due to the .0), respectively, which would not be automatically converted to the narrower types byte and float.
Examine all the provided *.scm and *.java source code so you understand what is available and how to use it.
Follow similar instructions as for lab 1 to make a checkout of your group svn repository, but this time on your panda, as a sibling directory to the lab2 directory you unpacked from the lab tarball.
Change directory so that you are in the base directory of your svn repository checkout (i.e. you are in the g?
root directory of the svn checkout), then run the commands
> mkdir lab2
> d=../lab2/src/host/ohmm
> cp $d/makefile $d/run-class ./lab2/
> cp $d/makefile-lab ./lab2/makefile.project
> sed -e "s/package ohmm/package lab2/" < $d/OHMMShellDemo.java > ./lab2/OHMMShellDemo.java
> svn add lab2
> svn commit -m "added lab2 skeleton"
This sets up the directory g?/lab2 in the SVN repository in which you will write all code for this lab. The makefile should not need to be modified: it is pre-configured to find the OHMM Interface library in ../../lab2/src/host/ohmm. It will also automatically compile all .java files you create in the same directory.
Change to your g?/lab2 directory and run
> make
> ./run-class OHMMShellDemo -r /dev/ttyACM1
This shows how you can easily write code that uses both OHMM.java and, if you like, also OHMMShell.java.
Bring up an OHMMShell and run the following commands to configure the bump switches as digital sensors and the IRs as analog sensors (you may omit the comments):
> (scd io-a0 #t #f) ; left bump switch on IO_A0
> (scd io-a1 #t #f) ; right bump switch on IO_A1
> (scair ch-2 1) ; front IR on A2
> (scair ch-3 1) ; rear IR on A3
Try out the bump switches:
> (srd io-a0) (srd io-a1)
They should read #t when triggered and #f otherwise. Make sure that the left sensor corresponds to the first reading and the right sensor to the second.
Try out the IRs. Set up a reasonable object at a known distance between 8 and 80cm, then run
> (sra ch-2) (sra ch-3)
They should report the distance in millimeters, plus or minus a few due to noise. Make sure that the front sensor corresponds to the first reading and the rear sensor to the second.
Now you will implement a simplified version of the Bug2 local navigation algorithm we covered in L5. The simplifying assumptions are:
We strongly recommend you write all code for this lab in Java that runs on the panda and on an external host (for debugging). If you prefer to use other languages, we will allow that, but you will need to write your own version of the interface we’ve provided in OHMM.java. There should be no need to write more code for the org.
Develop a graphical debugging system that shows a bird’s-eye view of the robot operating in a workspace with global frame coordinates covering at least the required x and y ranges. This display must bring up a graphics window that shows the axes of the world frame and the current robot pose, updated at a rate of at least 1 Hz. Make sure that this can work over the network, somehow, even when the panda is headless. One convenient option would be to use standard remote X Window display. Arrange the graphics so that the world frame x axis points to the right, the world frame y axis points up, and the x axis is vertically centered in the window (and remember, your graphics must always show at least the minimum world area stated above).
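One way to satisfy the x-right/y-up requirement is a small world-to-window transform. The sketch below is not the required implementation; the window size, scale, and origin offset are assumptions you would tune so the window covers the required world area:

```java
import java.awt.Point;

// Sketch of a world -> window pixel transform for the debug display.
// W, H, PX_PER_M, and ORIGIN_X are assumed values, not requirements.
public class WorldToScreen {
    static final int W = 600, H = 400;     // assumed window size (pixels)
    static final double PX_PER_M = 100.0;  // assumed scale (pixels per meter)
    static final int ORIGIN_X = 50;        // assumed pixel x of the world origin

    // World x grows right in both frames, but window y grows DOWN, so world y
    // is negated; the world x axis (y = 0) lands on the vertical center.
    static Point toScreen(double wx, double wy) {
        int px = ORIGIN_X + (int) Math.round(wx * PX_PER_M);
        int py = H / 2 - (int) Math.round(wy * PX_PER_M);
        return new Point(px, py);
    }

    public static void main(String[] args) {
        System.out.println(toScreen(0, 0));   // world origin, vertically centered
        System.out.println(toScreen(1, 0.5)); // 1 m right, 0.5 m up
    }
}
```

Drawing the axes, pose, and data points is then just a matter of pushing every world coordinate through toScreen before handing it to your graphics toolkit.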
Write a program for the panda that implements the Bug2 algorithm subject to the above simplifications. The goal location should not be hardcoded; rather, read the goal coordinate from the first command line argument in floating point meters. For example, if your Java class to solve this part is called Bug2, you should be able to invoke it like this
> ./run-class Bug2 4.3
for a goal 4.3 m away. You will likely find this more manageable if you break the task into the following parts:
Whether or not you choose to solve the problem as we suggested above, it is a requirement that your debug program show (at least) the current robot pose, the world frame axes, the goal location, and the cumulative IR data points; i.e. all data points collected so far must be plotted. It is also a requirement that your program somehow report the following events:
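Whatever decomposition you choose, Bug2 ultimately rests on two geometric tests: distance to the m-line and distance to the goal. A minimal sketch, assuming the robot starts at the world origin so the m-line runs from (0,0) to the goal (the class name and the tolerance value are illustrative):

```java
// Sketch of the geometric tests Bug2 needs, assuming a start pose at the
// world origin so the m-line is the segment from (0,0) to the goal (gx, gy).
public class Bug2Geometry {
    // Perpendicular distance from the robot at (x, y) to the m-line.
    static double distToMLine(double gx, double gy, double x, double y) {
        return Math.abs(gx * y - gy * x) / Math.hypot(gx, gy);
    }

    // Euclidean distance from (x, y) to the goal.
    static double distToGoal(double gx, double gy, double x, double y) {
        return Math.hypot(gx - x, gy - y);
    }

    // While boundary-following, leave the obstacle when we re-cross the
    // m-line at a point closer to the goal than the hit point was.
    static boolean shouldLeaveBoundary(double gx, double gy,
                                       double x, double y, double hitDist) {
        final double EPS = 0.05; // assumed m-line tolerance, meters
        return distToMLine(gx, gy, x, y) < EPS
            && distToGoal(gx, gy, x, y) < hitDist;
    }
}
```

The driving and wall-following states then only need to call these tests at each pose update to decide when to switch modes.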
Now you will implement a global navigation algorithm of some type; the visibility graph and free space graph algorithms presented in L7 are reasonable options. You may make the following simplifying assumptions:
Procedure:
Extend your graphical debug program so that the world frame x and y boundary locations are not hardcoded, but can be changed at runtime.
Write a program for the panda that implements your global navigation algorithm by reading in a text map file in the format
xgoal ygoal
xmin0 xmax0 ymin0 ymax0
xmin1 xmax1 ymin1 ymax1
...
xminN xmaxN yminN ymaxN
where each token is a floating point number in ASCII decimal format (e.g. as accepted by Float.parseFloat() in Java). The first line gives the goal location in meters in world frame. Each subsequent line defines an axis-aligned rectangle in world frame meters (the rectangle sides are always parallel or perpendicular to the coordinate frame axes, never tilted). The first is the arena boundary, and the rest are obstacles. There may be any number of obstacles, including zero. The obstacles may intersect each other and the arena boundary.
We will leave most details up to you. However, it is required that your graphical debug program show (at least) the current robot pose, the world frame axes, the arena boundary (this is why you must be able to resize the display at runtime), all obstacle boundaries, and the goal location. It is also a requirement that your program somehow indicate when the goal has been reached.
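If you take the visibility graph route, the geometric step (computing which obstacle corners can see each other) is the hard part; once those edges exist, planning reduces to an ordinary shortest-path query. A sketch of that second step (the class name and adjacency representation are illustrative, not part of the provided code):

```java
import java.util.*;

// Sketch of the graph-search half of a visibility graph planner. Building
// the visibility edges (the geometric part) is not shown.
public class ShortestPath {
    // adj.get(u) maps each neighbor v to the edge length |uv|. Returns the
    // node sequence from start to goal; a result not beginning with start
    // means the goal was unreachable.
    static List<Integer> dijkstra(List<Map<Integer, Double>> adj, int start, int goal) {
        int n = adj.size();
        double[] dist = new double[n];
        int[] prev = new int[n];
        Arrays.fill(dist, Double.POSITIVE_INFINITY);
        Arrays.fill(prev, -1);
        dist[start] = 0;
        PriorityQueue<double[]> pq =
            new PriorityQueue<>(Comparator.comparingDouble(a -> a[0]));
        pq.add(new double[] { 0, start });
        while (!pq.isEmpty()) {
            double[] top = pq.poll();
            int u = (int) top[1];
            if (top[0] > dist[u]) continue; // stale queue entry
            for (Map.Entry<Integer, Double> e : adj.get(u).entrySet()) {
                int v = e.getKey();
                double d = dist[u] + e.getValue();
                if (d < dist[v]) {
                    dist[v] = d;
                    prev[v] = u;
                    pq.add(new double[] { d, v });
                }
            }
        }
        LinkedList<Integer> path = new LinkedList<>();
        for (int v = goal; v != -1; v = prev[v]) path.addFirst(v);
        return path;
    }

    public static void main(String[] args) {
        // Tiny example: edges 0-1 and 1-2 cost 1 each; a direct 0-2 edge costs 3.
        List<Map<Integer, Double>> adj = new ArrayList<>();
        for (int i = 0; i < 3; i++) adj.add(new HashMap<>());
        adj.get(0).put(1, 1.0); adj.get(1).put(0, 1.0);
        adj.get(1).put(2, 1.0); adj.get(2).put(1, 1.0);
        adj.get(0).put(2, 3.0); adj.get(2).put(0, 3.0);
        System.out.println(dijkstra(adj, 0, 2)); // prints [0, 1, 2]
    }
}
```

The free space graph approach can reuse the same search; only the node and edge construction differs.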
You will be asked to demonstrate your code for the course staff in lab on the due date for this assignment. We will try out your code; 30% of your grade for the lab will be based on the observed behavior. We mainly want to see that your code works and is as bug-free as possible.
The remaining 70% of your grade will be based on your code, which you will hand in following the instructions on the assignments page by 11:59pm on the due date for this assignment. We will consider completeness, lack of bugs, architecture and organization, documentation, syntactic style, and efficiency, in that order of priority. You must also clearly document, both in your README and in code comments, the contributions of each group member. We want to see roughly equal contributions from each member; if so, the same grade will be assigned to all group members. If not, we will adjust the grades accordingly.