The debugging space concept is just one example of a custom use of the ARDev library; there are many more possibilities, both with robots and in other application areas.
Detailed documentation of the individual classes can be found on the classes page. It is generated with doxygen directly from the source, so it should stay up to date. The purpose of this page is to give an overview of how the library is expected to be used; hopefully people will also start using it in some unexpected ways, in which case this page can serve as a starting point to break the mould from.
There are seven key objects in the AR system; any object that is optional has a NULL implementation that can be used as a placeholder. The key objects are: ARDev, OutputObject, CaptureObject, CameraObject, PositionObject, PreProcessObject and RenderObject.
The two most important objects in this list are the ARDev object, which manages the augmented reality system, and the OutputObject, which manages the individual rendering pipeline for a display.
The first step is to set up the AR environment. We need some standard includes:
#include <ardev/ardev.h>
#include <ardev/capture.h>
#include <ardev/output_x11.h>
#include <ardev/render_base.h>
It is probably a good idea to turn on a reasonable level of debugging.
ARDev::DebugLevel = ARDBG_INFO;
Then we get into the real initialisation. First set up the capture object; we are going to use FireWire in this case.
CaptureObject * cap = new CaptureDC1394();
Next, we initialise a constant camera object from a static calibration file, along with a constant position object for the camera.
ARCamera arcam("static.calib");
CameraConstant cam(arcam);
ARPosition CameraOffsetConst(arcam.Origin, arcam.Direction);
PositionConstant * CamPosition = new PositionConstant(CameraOffsetConst);
CamPosition->Initialise();
Now that we have the basics for grabbing a real-world snapshot, we can create the output object and initialise the display.
OutputObject * out = new OutputX11((CaptureObject*)cap,&cam,CamPosition,800,600,":0",FullScreen);
Having created our display, we can now initialise the ARDev object.
ARDev::Start(out,"overhead");
At this stage we should have a live stream from the camera, albeit an overly complex way of getting one if that were all we wanted. Now we want to render our augmented data, so the first step is to create the fiducial tracker and register it with the system.
ARToolKitPlusPreProcess * artkp_pre = new ARToolKitPlusPreProcess(cam);
artkp_pre->Initialise();
out->AddPre(artkp_pre);
PositionObject * RobotPos = new ARToolKitPlusPosition(*artkp_pre, 11, 0.31);
RobotPos->Initialise();
And now to create the render object.
ARColour Blue(0,0,255,0);
RenderTeapot Teapot(Blue,0.01);
Finally we create the link between the position object and render object and add them to the render list.
ARDev::Add(RenderPair(&Teapot,RobotPos),"overhead");
After this comes your normal application code. Once you are ready to stop the AR rendering, simply call the following.
ARDev::Stop("overhead");
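Putting the steps above together, a minimal sketch of a complete program might look like the following. It simply combines the snippets from this page in order (error handling omitted; depending on your installation, an additional header may be needed for the ARToolKitPlus classes, and the marker id, marker size and calibration file name are just the example values used above).

```cpp
#include <ardev/ardev.h>
#include <ardev/capture.h>
#include <ardev/output_x11.h>
#include <ardev/render_base.h>

int main()
{
    ARDev::DebugLevel = ARDBG_INFO;

    // Capture frames from a FireWire camera.
    CaptureObject * cap = new CaptureDC1394();

    // Static camera calibration and a fixed camera position.
    ARCamera arcam("static.calib");
    CameraConstant cam(arcam);
    ARPosition CameraOffsetConst(arcam.Origin, arcam.Direction);
    PositionConstant * CamPosition = new PositionConstant(CameraOffsetConst);
    CamPosition->Initialise();

    // Create the X11 output and start the AR session.
    OutputObject * out = new OutputX11((CaptureObject*)cap, &cam, CamPosition,
                                       800, 600, ":0", FullScreen);
    ARDev::Start(out, "overhead");

    // Fiducial tracking via the ARToolKitPlus pre-processor.
    ARToolKitPlusPreProcess * artkp_pre = new ARToolKitPlusPreProcess(cam);
    artkp_pre->Initialise();
    out->AddPre(artkp_pre);
    PositionObject * RobotPos = new ARToolKitPlusPosition(*artkp_pre, 11, 0.31);
    RobotPos->Initialise();

    // Render a blue teapot at the tracked marker position.
    ARColour Blue(0, 0, 255, 0);
    RenderTeapot Teapot(Blue, 0.01);
    ARDev::Add(RenderPair(&Teapot, RobotPos), "overhead");

    // ... normal application code ...

    ARDev::Stop("overhead");
    return 0;
}
```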
Any new objects you create can then simply be used as described above.