Go read the DwarfCorbaTutorial.
In the COVRA project, Kim et al. showed that CORBA is viable for virtual reality applications, which pose the same stringent performance requirements as augmented reality. In addition, we use other protocols, such as shared memory for local high-bandwidth communication, and CORBA method invocations for tightly coupled reliable communication.
The DWARF CVS tree can be checked out using:
cvs -d :ext:$USER@cvsnavab.informatik.tu-muenchen.de:/cvs/dwarf co -d dwarf -P .
For this, you need an account on cvsbruegge in the group dwarf, which can be obtained via the Lehrstuhl.
Don't forget that you need to set CVS_RSH=ssh for this to work, since rsh (which is the default) is not allowed on the server.
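For example, in a bash-style shell (a minimal sketch of the two steps together; server and paths as above):
export CVS_RSH=ssh   # tunnel CVS over ssh instead of rsh
cvs -d :ext:$USER@cvsnavab.informatik.tu-muenchen.de:/cvs/dwarf co -d dwarf -P .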
If you only need read-only access, you can request a password (contact us via ar-group@ar.in.tum.de) and use the commands:
cvs -d :pserver:dwarf@cvsnavab.in.tum.de:/cvs/dwarf login
cvs -d :pserver:dwarf@cvsnavab.in.tum.de:/cvs/dwarf co -d dwarf -P .
Yes. Check the directory win32 in the DWARF source tree for build instructions. They are somewhat unusual and require both Cygwin CMake and Visual C++. The service manager currently compiles on Windows, and single services (e.g. Java ones) are also easily compiled and can connect to a service manager running on a supported platform. See the DwarfWindowsTutorial and the DwarfJavaTutorial.
You should read the README and QUICKSTART files in the cvs root. There you can find all the information you need for bootstrapping the system.
The GNU autotools automatically create Makefile rules that rebuild all generated files when input files like Makefile.am and configure.ac change. The detection of these changes is based on timestamps. A common mistake is to use cvs checkout instead of cvs update to refresh the working copy. A checkout updates all timestamps, leaving the autotools input files newer than the generated files.
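A minimal sketch of the safe way to refresh an existing working copy (assuming a standard DWARF checkout with the bootstrap script in the top-level directory):
cd dwarf
cvs update -dP   # update in place instead of checking out afresh
./bootstrap      # if the generated files are already out of date, simply regenerate them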
When running the bootstrap script, automake crashes with "./bootstrap: line 12: 2313 Segmentation fault automake --add-missing". What's wrong?
For some reason, this version of automake does not work with DWARF on Mac OS X. Go read the QUICKSTART file from the CVS root on how to install the working versions.
GNU libtool handles the compilation of all binaries. The DWARF-specific GNU autoconf macros in the config directory take care of the correct linker flags, such as the libraries needed and these libraries' paths. Therefore, compilation should always work. Things change when the binaries get executed: the OS runtime environment must then find the libraries itself. On Linux, you may help the system by including the missing library paths in LD_LIBRARY_PATH. Alternatively, you may change the relevant autoconf macro to include the following options in the LDFLAGS it outputs:
-Xlinker --rpath -Xlinker ${your_lib_path_here}
config/dwarf_have_mkl.m4 contains an example.
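As a quick run-time workaround on Linux, for example (the path below is only an illustration; use wherever the missing library actually lives):
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib   # append the missing library path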
configure --disable-static CFLAGS="-g -O0" CXXFLAGS="-g -O0" GCJFLAGS="-g -O0"
It is advisable to store this complicated command line as an alias.
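For example, a hypothetical alias (the name is arbitrary) for your shell startup file:
alias dwarf-debug-configure='./configure --disable-static CFLAGS="-g -O0" CXXFLAGS="-g -O0" GCJFLAGS="-g -O0"'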
TamerFahmy gave another hint: do a
make -j2
This should speed up compilation by roughly a factor of 2. The argument to -j should be (number of CPUs) * 2.
Also look into distcc and ccache; both decrease compile times massively.
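A sketch for a machine with two CPUs, assuming ccache is installed and on the PATH (distcc would be wrapped around the compiler in the same way):
./configure CC="ccache gcc" CXX="ccache g++"   # let configure pick up the wrapped compilers
make -j4                                       # (number of CPUs) * 2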
You are using GNU jar, which comes with gcj, the GNU compiler for Java. That version of jar is incomplete. Throw it out and use your Java SDK's jar instead; afterwards you'll have to touch all IDL files in src/idl.
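A sketch of the fix, assuming the JDK lives under /usr/java/jdk (adjust the path to your installation):
export PATH=/usr/java/jdk/bin:$PATH   # make the SDK's jar shadow GNU jar
which jar                             # verify it no longer points at the gcj version
touch src/idl/*.idl                   # as noted above, touch all IDL files
make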
see error "jar: -u mode unimplemented".
If that happens with a "cannot resolve symbol symbol : ..." error message from javac, try issuing the make command another time (and repeat this several times if necessary). Usually, this is a workaround for a long-standing problem in our build process. If you have knowledge of ant and automake, and some idea how to marry these two tools, you are happily invited to implement a new build process for the Java parts of DWARF.
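A blunt shell sketch of the retry-make workaround mentioned above:
for i in 1 2 3 4 5; do make && break; done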
Use the ./run-servicemgr script. Do not try to start the Service Manager directly; this will not work. The script checks some preconditions and sets up the Service Manager correctly. With the script's first command-line argument, you can specify an example application to run (for example labsheep). If you do not specify an application, the Service Manager will parse all service descriptions located in the share directory but will not start any service; you would then need to start the services manually.
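For example (labsheep is the example application mentioned above):
./run-servicemgr labsheep   # start the Service Manager plus the labsheep example
./run-servicemgr            # only parse the descriptions in the share directory; start services by hand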
You should set the parameters startOnDemand and stopOnNoUse to false in the service description. Setting these to true makes the Service start only when one of its Abilities is requested, and your Service doesn't have any... This happens with Services that are started directly by the user, i.e. Views and debugging services like the TestStringReceiver.
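To see what the installed descriptions actually say, a small sketch (the share/*.xml path is an assumption; use whatever directory your Service Manager parses):
grep -nE 'startOnDemand|stopOnNoUse' share/*.xml   # path is a guess; adjust to your install prefix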
Check out the SLP attribute specification under http://www.openslp.org/doc/rfc/rfc2608.txt . It's LDAP format.
You should start the service manager with run-servicemgr.sh rather than servicemgr. This tells the service manager which directory the XML service descriptions are in.
Make sure your network configuration is ok.
Try ping `hostname` - if that doesn't work, neither will CORBA (or anything else...).
Make sure your SLP configuration is ok. Probably the SLP daemon is not running.
On a Mac this helps: mv /usr/sbin/slpd /usr/sbin/slpd.krupuk
Also, sometimes (especially when using our openslp-rpms) it is necessary to do:
(Linux:) ln -s /usr/local/etc/slp.conf /etc/slp.conf
(Mac:) ln -s /sw/etc/slp.conf /etc/slp.conf
To test your SLP configuration, try
slptool findsrvs service:service-agent
...that should show all SLP daemons in the network, including your local host. If that doesn't work, neither will DWARF.
Your SLP installation is corrupted, probably you need to modify your routing table. See http://www.openslp.org/doc/html/faq.html and http://www.openslp.org/doc/html/UsersGuide/Installation.html for details.
Maybe your SLP daemon tries to communicate on the wrong network interface; you can force it with the following line in /etc/slp.conf:
net.slp.interfaces = 127.0.0.1
If you want to use the loopback interface, you have to add the following route:
route add -net 239.0.0.0/8 gw 127.0.0.1 dev lo
You should patch your SLP configuration file, usually /etc/slp.conf, to include the following lines:
net.slp.multicastMaximumWait = 500
net.slp.multicastTimeouts = 50,75,100,150,200,300
net.slp.unicastMaximumWait = 500
net.slp.unicastTimeouts = 50,75,100,150,200,300
The service manager probably is not parsing the XML service descriptions, meaning that most services cannot register themselves with the service manager.
If the service descriptions are parsed correctly, you will see messages like newServiceDescription(Sheep). If not, check two things: first, run make install for the services; this copies the XML descriptions to the right directory. Second, make sure config.h includes the line #define USE_XERCESC 1. If it doesn't, re-run configure with the right --with-xercesc argument.
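A quick way to check and fix the Xerces-C part, sketched with /usr/local as a hypothetical install prefix:
grep USE_XERCESC config.h                # should print: #define USE_XERCESC 1
./configure --with-xercesc=/usr/local    # argument form/path is an assumption; see ./configure --help
make && make install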
There are several reasons why this might happen:
- The service does not have the startOnDemand flag set.
- All maxInstances have been used up. Debug this by turning on "Show unregistered services" in DIVE; fix it by adding a sensible predicate to your need.
Removing unneeded XML files also generally improves performance and stability.
You probably specified an ability or service attribute with an empty string as a value. This kills the OpenSLP daemon for some reason.
Losing events usually indicates bad object design. If two event channels get connected to the same object, two threads try to access it simultaneously, so access has to be synchronized. It may then happen that one of the threads gets too little computation time; the delivered events queue up and eventually get dropped once the maximum queue length is reached.
Consequence: use a single object for every need you have. This makes the design of your service much more elegant and extensible, saves you from querying event type strings when receiving events, and prevents loss of events under high system load.
You need to install Graphviz, a graph visualization software package from AT&T: http://www.research.att.com/sw/tools/graphviz/download.html . The layouter, dot, should be within the search path; DIVE calls it to do the graph layout.
Try this command to get an unlimited version: while true; do ./DIVE; done;
No... the Distributed Interactive Virtual Environment (DIVE) at http://www.sics.se/dive/ is a different project.
You should set the environment variable DYLD_BIND_AT_LAUNCH to 1. That improves the behavior of many libraries.
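For example, in your shell before starting any services:
export DYLD_BIND_AT_LAUNCH=1   # resolve all symbols at launch instead of lazily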
Restart it manually with sudo /usr/local/bin/slpd. You will have to repeat this when your network connection changes.
On 10.3 with a Fink install (as we recommend): sudo /sw/sbin/slpd
This can be caused by Python having been compiled with a different compiler than omniORB (see http://mail.python.org/pipermail/python-dev/2003-May/035444.html).
Make sure you return XXXXXX._this(orb) at the end of your create[Ability|Need]Object method. If you write return XXXXXX._this() instead, it will not work.
We often have this problem and don't know why, so please let us know (MarcusToennis). What we did against it is to remove the source code from the jar:
jar xvf graham-kirby-compilier.jar
rm compiler/*.java
jar cvfm graham-kirby-compilier.jar META-INF/MANIFEST.MF compiler
If you want to avoid installing your Python service every time you make changes (which somewhat contradicts the use of a scripting language), you need to adjust your PYTHONPATH so that the common sources can be found from your source directory.
Simply add export PYTHONPATH=$PYTHONPATH:$HOME/dwarf/bin to your .bashrc file (assuming that you have used $HOME/dwarf as the prefix when running configure).