syncevolution (1.5.3-1) unstable; urgency=medium
.
* New upstream release
* Fix override_dh_makeshlibs to handle all packages with public shared libs
(Closes: #887043)
* Remove obsolete Conflicts:/Breaks:/Replaces: sync-ui (<<1.1+ds1-1~)
* Change debhelper compatibility to 11, which is recommended
* Bump standards version to 4.1.3, no changes needed
* Fix several lintian warnings
When invoked during automated testing, stdout is not automatically
configured to support UTF-8. We know that our encoding is UTF-8,
so we can enable that unconditionally.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
This reverts commit 7d527c6dd8.
It causes link errors on Fedora, see
https://bugzilla.redhat.com/show_bug.cgi?id=1926932
This might be a compiler bug, but as this is not a particularly
important size optimization, removing it is the easiest fix.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
This addresses two different warnings from Fedora Rawhide:
/srv/runtests/work/sources/syncevolution/src/syncevo/SyncContext.cpp: In member function 'std::string SyncEvo::XMLFiles::get(SyncEvo::XMLFiles::Category)':
/srv/runtests/work/sources/syncevolution/src/syncevo/SyncContext.cpp:2390:28: error: loop variable 'entry' of type 'const StringPair&' {aka 'const std::pair<std::__cxx11::basic_string<char>, std::__cxx11::basic_string<char> >&'} binds to a temporary constructed from type 'std::pair<const std::__cxx11::basic_string<char>, std::__cxx11::basic_string<char> >' [-Werror=range-loop-construct]
2390 | for (const StringPair &entry: m_files[category]) {
| ^~~~~
/srv/runtests/work/sources/syncevolution/src/syncevo/SyncContext.cpp:2390:28: note: use non-reference type 'const StringPair' {aka 'const std::pair<std::__cxx11::basic_string<char>, std::__cxx11::basic_string<char> >'} to make the copy explicit or 'const std::pair<const std::__cxx11::basic_string<char>, std::__cxx11::basic_string<char> >&' to prevent copying
This fails because StringPair has non-const members. By using "auto",
we get rid of the need to define and pick exactly the right type.
/srv/runtests/work/sources/syncevolution/src/syncevo/SyncConfig.cpp: In member function 'void SyncEvo::SyncConfig::removeSyncSource(const string&)':
/srv/runtests/work/sources/syncevolution/src/syncevo/SyncConfig.cpp:2552:36: error: loop variable 'peer' creates a copy from type 'const string' {aka 'const std::__cxx11::basic_string<char>'} [-Werror=range-loop-construct]
2552 | for (const std::string peer: m_tree->getChildren(m_contextPath + "/peers")) {
| ^~~~
/srv/runtests/work/sources/syncevolution/src/syncevo/SyncConfig.cpp:2552:36: note: use reference type to prevent copying
2552 | for (const std::string peer: m_tree->getChildren(m_contextPath + "/peers")) {
| ^~~~
| &
We could have used "auto" here as well instead of "std::string", but it
doesn't save that much typing and the explicit type is more readable.
We just have to use a reference.
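A minimal sketch of the two corrected loops (illustrative stand-ins, not
the actual SyncEvolution members):

    #include <map>
    #include <string>
    #include <vector>

    std::map<std::string, std::string> files; // stand-in for m_files[category]
    std::vector<std::string> peers;           // stand-in for getChildren() result

    void demo() {
        // First fix: "auto" deduces std::pair<const std::string, std::string>,
        // so the reference binds directly instead of to a temporary copy.
        for (const auto &entry: files) {
            (void)entry;
        }
        // Second fix: keep the explicit type, just add the missing "&".
        for (const std::string &peer: peers) {
            (void)peer;
        }
    }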
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Recent g++ on Fedora Rawhide warns that the dynamic cast result
is used without NULL check. We know that the cast must succeed,
so a static cast is more appropriate.
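For illustration, the pattern looks like this (hypothetical Base/Derived,
not the actual classes):

    struct Base { virtual ~Base() {} };
    struct Derived : Base { void work() {} };

    void handle(Base *base) {
        // The object is known to be a Derived by construction, so the
        // unchecked cast is appropriate; dynamic_cast would imply a
        // NULL check that never happens.
        static_cast<Derived *>(base)->work();
    }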
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Some older versions of libsecret.h lacked `extern "C"`. Adding
that manually now causes compile errors on Fedora Rawhide and thus
has to be removed:
/usr/include/c++/11/type_traits:480:3: error: template with C linkage
480 | template<typename _Tp>
| ^~~~~~~~
/srv/runtests/work/sources/syncevolution/src/backends/gnome/GNOMEPlatform.cpp:24:1: note: 'extern "C"' linkage started here
24 | extern "C" {
| ^~~~~~~~~~
In file included from /usr/include/glib-2.0/glib/gmacros.h:241,
from /usr/lib64/glib-2.0/include/glibconfig.h:9,
from /usr/include/glib-2.0/glib/gtypes.h:32,
from /usr/include/glib-2.0/glib/galloca.h:32,
from /usr/include/glib-2.0/glib.h:30,
from /usr/include/libsecret-1/libsecret/secret.h:18,
from /srv/runtests/work/sources/syncevolution/src/backends/gnome/GNOMEPlatform.cpp:25:
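The removed workaround looked roughly like this (a sketch):

    // Old, now harmful, because libsecret's headers indirectly pull in
    // C++ templates via glib macros:
    //   extern "C" {
    //   #include <libsecret/secret.h>
    //   }
    // New: plain include, relying on the header's own linkage handling.
    #include <libsecret/secret.h>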
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
A lot of the old suppressions are no longer needed (determined by
running valgrind with -v during a full nightly test run) and some new
ones are needed after updating to new Linux distros.
The builtin GtkApplication support is almost equivalent and allows
getting rid of the deprecated libunique which isn't available anymore
in recent Linux distros.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
This no longer works when running tests in Docker containers because
the wrapper script which starts commands there only accepts a simple
command, not something that must be interpreted by a shell.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
The --as-needed linker flag didn't work anymore on recent Linux
distros, with the result that libcppunit became a library dependency
of the syncevolution binaries although they didn't need it.
A better approach anyway is to only link the lib when it is expected
to be used (unit testing or in client-test).
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
-Wno-deprecated-declarations is needed everywhere (i.e. not just in certain
modules, like SYNCEVO_WFLAGS_DEPRECATED) because EDS on Ubuntu Eoan uses
the deprecated GTimeVal in its header, which we included through our
EDS/libical wrapper even when not actually used.
Developed originally by Milan Crha for Fedora, copied and updated
by Patrick Ohly.
From Milan:
Huge libecal API changes are coming, big enough to deserve a version
bump from 1.2 to 2.0, together with small libebook API changes, most
likely as part of the evolution-data-server 3.33.2 release, which is
planned for May 20. More about this can be found here:
https://mail.gnome.org/archives/desktop-devel-list/2019-April/msg00016.html
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Newer clang (or was it gcc?) warns about catching exceptions by value
when they have virtual methods. This shouldn't have mattered here because
the exception values were not really used, but using a const
reference is better nonetheless.
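A minimal sketch of the change, with a standard exception type standing
in for the real one:

    #include <stdexcept>

    void demo() {
        try {
            throw std::runtime_error("example");
        } catch (const std::runtime_error &ex) { // was: catch (std::runtime_error ex)
            // Catching by const reference avoids copying (and potentially
            // slicing) an exception class with virtual methods.
            (void)ex;
        }
    }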
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
The signond pipe helper class uses deprecated glib methods. Not sure
whether that wasn't an option at the time that it was written, but
nowadays glib has classes which can be used instead.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Having to specify the type of an iterator is annoying and does not
really add clarity to the code. With C++11 we can use "auto"
instead and in some cases remove helper typedefs.
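For example (a sketch):

    #include <map>
    #include <string>

    void demo(const std::map<std::string, std::string> &config) {
        // Before, often via a helper typedef:
        //   std::map<std::string, std::string>::const_iterator it = config.find("username");
        auto it = config.find("username");
        if (it != config.end()) {
            // use it->second
        }
    }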
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
This saves some space (total number of blocks for SyncEvolution object
files when building statically down from 421588 to 413300) and
presumably also build times (not measured).
However, it did not work for all templates, leading to link errors
when trying to add std::map and std::pair of strings. It probably also
does not make sense for templates where only some functionality is
used.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
NULL is ambiguous (can be integer and pointer) and using it as
terminator for vararg list of pointers without explicit casting to a
pointer was downright incorrect. nullptr fixes that.
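A sketch with a hypothetical vararg helper terminated by a pointer
sentinel:

    #include <cstdarg>
    #include <cstdio>

    // Hypothetical: prints strings until it reads a null pointer.
    void printAll(const char *first, ...) {
        va_list ap;
        va_start(ap, first);
        for (const char *s = first; s; s = va_arg(ap, const char *)) {
            puts(s);
        }
        va_end(ap);
    }

    void demo() {
        // NULL may expand to a plain integer 0, which does not have
        // pointer representation in the vararg list; nullptr does.
        printAll("a", "b", nullptr);
    }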
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Several headers were no longer needed or could be replaced by more
specific ones like noncopyable.hpp.
boost::assign mostly can be replaced with initialization lists and
boost::tuple with std::tuple.
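For example (a sketch):

    #include <string>
    #include <tuple>
    #include <vector>

    void demo() {
        // Before: std::vector<std::string> v = boost::assign::list_of("a")("b");
        std::vector<std::string> v = { "a", "b" };
        // Before: boost::tuple<int, std::string> t = boost::make_tuple(1, "x");
        std::tuple<int, std::string> t = std::make_tuple(1, std::string("x"));
        (void)v; (void)t;
    }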
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
This allows us to get rid of an external dependency. Mostly std::regex
works, but there are limitations that have to be worked around:
- no multiline support in C++11
- conversion of groups to non-string types has to be done manually
While at it, use raw strings to get rid of excessive backslash
escaping.
pcrecpp::StringPiece was used as a general-purpose utility class. Now
we have our own implementation.
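A sketch of the resulting style, with a raw string literal and manual
group conversion (a hypothetical pattern, not the actual expression):

    #include <regex>
    #include <string>

    bool parseRange(const std::string &line, int &from, int &to) {
        // Raw string avoids writing "(\\d+)-(\\d+)".
        static const std::regex re(R"((\d+)-(\d+))");
        std::smatch match;
        if (!std::regex_match(line, match, re)) {
            return false;
        }
        // Unlike pcrecpp, std::regex only yields strings: convert manually.
        from = std::stoi(match[1].str());
        to = std::stoi(match[2].str());
        return true;
    }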
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
std::unique_ptr usually can be used instead. std::vector also works
for arrays and even has a data() method as part of the official API in
C++11.
For historic reasons, the functions creating SyncSources returned
plain pointers. We are breaking the API now, so we might as well fix
that.
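A sketch of both changes (the factory signature is illustrative):

    #include <memory>
    #include <vector>
    #include <unistd.h>

    struct SyncSource { virtual ~SyncSource() {} };

    // Was: SyncSource *createSource(...);
    std::unique_ptr<SyncSource> createSource();

    void demo(int fd, size_t size) {
        // std::vector replaces manually managed arrays and exposes the
        // buffer via data(), part of the official API in C++11.
        std::vector<char> buffer(size);
        read(fd, buffer.data(), buffer.size());
    }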
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
This was committed to SVN, probably because it might have become
relevant again. Now it is long obsolete.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
We can use std::shared_ptr and std::function instead now.
Lambdas are usually a better alternative to boost/std::bind. The
downside is the need to explicitly specify parameters completely. When
inlining callbacks entirely with lambdas, duplication of that
parameter list can be avoided.
Whenever possible, use std::make_shared to construct objects that are
tracked by std::shared_ptr.
Some objects need a std::weak_ptr during object destruction. For that
we have to use our own implementation of std::enable_shared_from_this,
with a matching creator function. The additional benefit is that we
can get rid of explicit static "create" methods by making that create
function a friend.
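A sketch of the creator-as-friend pattern (hypothetical names):

    #include <memory>
    #include <utility>

    template <class C, class... Args>
    std::shared_ptr<C> make_weak_shared(Args &&...args);

    class Session {
        Session() {}                     // not publicly constructible
        std::weak_ptr<Session> m_self;   // usable even during destruction

        template <class C, class... Args>
        friend std::shared_ptr<C> make_weak_shared(Args &&...args);
    };

    template <class C, class... Args>
    std::shared_ptr<C> make_weak_shared(Args &&...args) {
        std::shared_ptr<C> ptr(new C(std::forward<Args>(args)...));
        ptr->m_self = ptr;  // replaces std::enable_shared_from_this
        return ptr;
    }

    // usage: std::shared_ptr<Session> s = make_weak_shared<Session>();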
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
boost/foreach.hpp is no longer needed, range-based loops work
the same. With some helpers, even reverse iteration and
boost::make_split_iterator() can be handled the same way.
"auto" makes it possible to avoid explicitly spelling out the
expected type.
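A sketch of the reverse-iteration helper (the real helper may differ):

    #include <list>
    #include <string>

    template <class C> struct ReversedRange {
        C &c;
        auto begin() -> decltype(c.rbegin()) { return c.rbegin(); }
        auto end() -> decltype(c.rend()) { return c.rend(); }
    };
    template <class C> ReversedRange<C> reversed(C &c) { return { c }; }

    void demo(std::list<std::string> &names) {
        // Before: BOOST_FOREACH(const std::string &name, names) { ... }
        for (const auto &name: names) { (void)name; }
        // Reverse iteration with the same syntax:
        for (const auto &name: reversed(names)) { (void)name; }
    }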
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Lambdas without variable capture are guaranteed to be compatible with
plain C functions, so we can use them as callbacks. That keeps the
code closer together and avoids having to declare helper methods as
part of the public class.
In some cases the static method is the actual code, in which case only
"throw ()" gets replaced with "noexcept" because it's cleaner and to
mark that the code was looked at and intentionally left as-is.
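For illustration, a glib timeout with a capture-less lambda (hypothetical
class, not the actual code):

    #include <glib.h>

    class Example {
    public:
        void start() {
            // A capture-less lambda converts to a plain C function
            // pointer, so no separate static callback method is needed:
            g_timeout_add(1000,
                          [] (gpointer data) -> gboolean {
                              static_cast<Example *>(data)->fire();
                              return G_SOURCE_REMOVE;
                          },
                          this);
        }
        void fire() {}
    };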
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Universal references and template methods make it possible to pass
through arbitrary callbacks right through to the point where we need
to create a std::function for a plain-C lambda.
Boost is not needed anymore.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
By specifying the list of signal types as template parameters
it becomes possible to use a single implementation. Lambdas
can replace explicit callback methods.
The reimplementation is more flexible and does not enforce
the use of a boost::function. This matches how connectSignal()
was used in practice. Thanks to universal references, the boost::bind
instances get moved directly into the allocated instances that are
attached to the signal handler.
The downside is that the call syntax changes slightly, because
partially specifying template parameters does not work.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Using templates with a varying number of types allows removing
duplicated code for the different cases.
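A sketch of the idea: one variadic template replaces a wrapper class per
arity (the names are illustrative):

    #include <functional>
    #include <utility>

    template <typename... Args>
    class SignalHandler {
    public:
        explicit SignalHandler(std::function<void (Args...)> callback) :
            m_callback(std::move(callback))
        {}
        void invoke(Args... args) { m_callback(args...); }
    private:
        std::function<void (Args...)> m_callback;
    };

    // usage: SignalHandler<int, const char *> h([] (int a, const char *b) {});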
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
The OperationWrapperSwitch class can be reduced to just two cases,
which reduces code duplication considerably. Conditional compilation
with "if constexpr" could be used to eliminate the helper class
entirely, but that's C++17.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
The code becomes a lot more readable. One can also set breakpoints
inside the callbacks.
Exception handling in GRunInMain() is better now, with the ability to
rethrow arbitrary exceptions thanks to std::exception_ptr.
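A sketch of the std::exception_ptr mechanism:

    #include <exception>

    // Run a callback (for example, inside the main loop) and capture
    // whatever it throws.
    std::exception_ptr runCallback(void (*cb)()) {
        try {
            cb();
        } catch (...) {
            return std::current_exception();
        }
        return nullptr;
    }

    void demo(void (*cb)()) {
        std::exception_ptr ex = runCallback(cb);
        if (ex) {
            // Rethrown with the original type preserved.
            std::rethrow_exception(ex);
        }
    }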
Only some usage of boost::lambda in the Akonadi backend remains.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
This aspect of syncevo-dbus-server had been developed for use in an
IVI head unit. It's currently neither used nor tested and no longer
compiles (timeout.h moved, API changes in libfolks).
Instead of trying to keep it working while enhancing the usage of C++,
removing it entirely is easier.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
GIO D-Bus is a more modern and capable implementation. The older one
was only needed on certain old Linux distros (Maemo) which did not
have a recent enough glib.
The reason for removing the old one is that it allows making API
incompatible changes for the C++ D-Bus binding without having to do
that in two places.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
cppcheck started complaining about std::string::find() being
inefficient. boost::starts_with() would indeed be better, but even simpler
and more readable is a whole-string comparison.
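For example (a sketch):

    #include <string>

    bool isFile(const std::string &value) {
        // Before (flagged by cppcheck): value.find("file") == 0
        // When the complete value is known, a whole-string comparison
        // is simpler and more readable:
        return value == "file";
    }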
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Commit 5bafef3957 (included in v1.4)
broke the printing of the messages about Ctrl-C handling, in
particular the hint that using it quickly will switch from suspending
to aborting.
The message code was getting overwritten by a new default case due to
a missing break.
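A sketch of the bug (illustrative, not the actual code):

    #include <string>

    enum State { SUSPEND, OTHER };

    std::string messageFor(State state) {
        std::string message;
        switch (state) {
        case SUSPEND:
            message = "press Ctrl-C again quickly to abort";
            break;  // this break was missing: control fell through into
                    // the new default case and overwrote the message
        default:
            message = "";
            break;
        }
        return message;
    }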
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
The SoupTransportAgent modernization in 1.5.3 led to SSL checking
always being enabled because the default changed from disabled to
enabled and SyncEvolution did not set it.
Worse, in older versions it probably (untested) was always disabled
because it was not enabled either.
Now the checking of SSL is always set explicitly and thus always
mirrors the SyncEvolution configuration.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
set/getenv() are not thread-safe, and a recent bug report via private
email shows that this does cause segfaults:
Thread 4 (Thread 19251):
....
Thread 1 (Thread 19311):
...
In this case, DLT was used and the setenv call was setting the
LIBSYNTHESIS_<context> variables.
The solution is to avoid setenv() in code which might run in parallel
to other threads:
- DLT-related variables are set at the beginning of
syncevo-dbus-server startup which then gets inherited
by syncevo-dbus-helper and in the environment prepared
for syncevo-local-sync (because the latter might run with
a different log level)
- the default for SYNCEVOLUTION_PBAP_SYNC is now "incremental"
everywhere and SyncContext is told about the special
mode where it needs to keep photo data differently, i.e. setting
SYNCEVOLUTION_PBAP_SYNC in dbus-sync.cpp for PBAP syncing is
no longer necessary
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
meego.org is long gone, replace it where possible. Some of the old bug
references might have a corresponding bug in bugs.freedesktop.org but
trying to update them probably isn't worth it.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Dropping the pcrecpp build hacks in the nightly testing showed that
the flags for normal linking and compilation were missing.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
This is the result of 2to3, with the interpreter path updated manually,
tabs replaced by spaces and some obsolete packages removed.
Required because the current nightly build host no longer has Python2 installed.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
With support for 32 bit binaries removed, we also no longer need
to build the special lpia packages.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Splitting commands with a semicolon didn't work as intended. We need
to use the helper function's parameters instead.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
After moving to different pre-built testing, the wrong
syncevo-dbus-server binary was getting started (/usr/lib instead of
the test directory). We must find the binary via normal PATH lookup
and also deal with symlinks before comparing.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Successful tests were not picked up by the result checker if there was
extra output between printing the test name and the " ... ok".
One source of that extra output was the D-Bus server and daemons
started by it (now redirected to a file), the other a glib warning
about an event ID not being found (caused by double-removal of a
timer, which is mostly avoided now).
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
The code for checking out source code for testing no longer worked
with newer git when there weren't any remote branches which had to be
reset.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Testing them is done in separate environments and installing them
no longer works in Docker-based build environments (permission
issues).
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Newer packages will be built without the ical compatibility hack and
will start to depend on libical3, so we have to remove the packaging
hack.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Previously, nightly testing used a home-grown set of scripts around
Debian's schroot tool. This became increasingly harder to maintain and
update, in particular because the chroots themselves were set up
manually. Using Docker files for building container images and then
running in those avoids this. In SyncEvolution, this affects some
places where direct access from the host to the test filesystem was
assumed.
Testing of pre-built binaries also gets changed: instead of pointing
to some directory from a previous build, we always install the output
packages from an apt repo in a clean, minimal container. Runtime
dependencies like "evolution-data-server" must be declared correctly
because they might no longer be installed already.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Running Murphy for resource allocation is overkill and
complicated (need a shared D-Bus session!). Sharing a local directory
between different test runs and using file locking is simpler. With
flock, locks are associated with a file descriptor, so they will be
returned automatically when the process quits, just like they used to
be with Murphy.
We don't want these file descriptors to be inherited by child
processes; Python 3 makes them non-inheritable by default, so we switch
to it. That is also a worthwhile goal by itself.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Running with no sources leads to an exception later on and isn't
intended; it makes more sense to treat an empty env var like an unset
one and use the default source list.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
The current HTTP server for nightly.syncevolution.org reports the
content type for .txt files as plain text, but not for .log
files. Plain text is desirable for easy viewing in a web
browser. While the .log suffix is nicer, getting the HTTP server
reconfigured is hard and might have to be repeated again in the
future, so let's just use .txt.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
This is meant to be used by automated testing, with gdb acting as
wrapper around a command so that stack traces are created
automatically when something goes wrong.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Recent EDS started to exhibit race conditions when opening a database (for
example, https://bugzilla.gnome.org/show_bug.cgi?id=791306). Opening was
already tried again for a certain known error in some old EDS version. Now it
is tried five times with a delay of one second for all errors.
The advantage is that this does not depend on accurately detecting the race
condition error.
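A sketch of the retry loop, assuming an openDatabase() that throws on
failure:

    #include <unistd.h>

    void openDatabase();  // hypothetical

    void openWithRetries() {
        for (int attempt = 0; ; attempt++) {
            try {
                openDatabase();
                return;
            } catch (...) {
                if (attempt >= 4) {
                    throw;   // five attempts total, then give up
                }
                sleep(1);    // one second delay, retrying for all errors
            }
        }
    }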
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
When libcurl was selected instead of libsoup, compilation failed
because the necessary header file was missing and the direct assignment
of a plain pointer to the shared_ptr failed.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Suppressing the warning for all code hid the deprecation warning
about auto_ptr, which is something that should have been fixed
before.
Now only some code still suppresses the warning (GTK UI, Akonadi)
because there is no time to also update and test that part.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
This allows us to get rid of deprecated function calls. We no longer
need to set a default proxy either, the newer libsoup does that itself
by default
(https://developer.gnome.org/libsoup/stable/libsoup-session-porting.html).
Not mentioned in the porting guide is that the underlying soup session
now has a 60s timeout by default. We don't want to time out.
We also need to quit the event loop explicitly when timing out. Somehow
that wasn't necessary before.
When using the generic SoupSession, it is no longer guaranteed that canceling
an operation invokes the callbacks before returning. Therefore we have to be
prepared for callbacks occurring after the SoupTransportAgent has been
destructed. This is achieved by storing all SoupTransportAgent instances
in a boost::shared_ptr and
letting the instance know about that with a weak_ptr, which then can be used
by the callbacks.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
The helper class is also useful outside of the D-Bus server,
for example in the glib-based SoupTransportAgent.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
auto_ptr has been deprecated for a while now. unique_ptr can
be taken for granted now, so use that instead.
GDBusMessage requires a custom deleter. Not sure how auto_ptr
handled that before.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
This makes it possible to use C++11 features. Choosing C++14 when available
gives us advance warning when something might break under C++14. Test builds
on a system with only C++11 and another with C++14 are needed to ensure that
both really work.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Without them, --enable-warnings=fatal was ignored for the D-Bus test
program, causing deprecation warnings about auto_ptr to be printed
without aborting the build.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
The server started to re-encode the image, thus breaking the strict
comparison that is done for these tests. Normal testing allows such
changes for the Google server by ignoring PHOTO data, but in these
tests we want comparison to be strict, so we have to change the test
data.
The downside is less test coverage.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
shlibs.local was used in combination with explicit ebook/ecal/ical
dependencies to replace the automatic dependencies. It needs to
be maintained together with those explicit dependencies, so it
makes more sense to use a file provided by the code which
calls make to build releases.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
When building and installing backends on additional build platforms, linking
the binaries is both unnecessary and sometimes problematic (for example,
helper libraries not available in a version that works).
Therefore configure arguments can be used to disable linking of the binaries.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
While not necessary (the attributes are not read for a NOP event), it's
still cleaner to also initialize them. Found with cppcheck.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
The result of shifting a signed int is undefined. Better operate
on unsigned int of the same size.
Found by clang or cppcheck.
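For example (a sketch):

    #include <stdint.h>

    uint32_t shifted(int32_t value, int bits) {
        // Shifting the signed value can be undefined (sign bit, negative
        // values); shifting the unsigned type of the same size is not.
        return static_cast<uint32_t>(value) << bits;
    }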
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
This is a cut-and-paste error from upstream libsynthesis: an error
code was returned in an error case where a boolean should have been
returned.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Building with recent Clang in C++ mode fails when using the non-standard
typeof operator. We can't rely on the new(ish) decltype yet, so use
the Boost implementation instead.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
The only actual error was incorrect nesting of ifdef/endif and comments.
The iterator change avoids a false positive where cppcheck's check for
correct begin()/end() comparisons fails. It's also a bit shorter and cleaner.
The copy operator is not needed.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Recent cppcheck warns about m_source not being initialized, which is a false
positive (it's a reference and gets initialized). Inline suppressions did not
work, so instead disable the entire warning for SyncSource.h.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
The "waiting for daemon to connect to D-Bus" loop did not check whether daemon
was still running at all, causing testing to get stuck occasionally when the
daemon failed.
THe loop waiting for output already checked that, but can be simplified.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Sometimes GNOME keyring and libsecret fail to set up the right temporary keys
(https://bugzilla.gnome.org/show_bug.cgi?id=778357). This has been fixed
upstream, but still occasionally breaks with the distros used by the
automated testing.
Retrying the operations after disconnecting from the server is an attempt
to recover from this sporadic error.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
The GNOME keyring library has been obsoleted for a long time now,
long enough that the replacement libsecret is available on all
supported distros. Therefore we can switch unconditionally.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
libical v3 removes some deprecated functions (like icaltime_from_timet)
and removes the "is_utc" member from icaltimetype. The replacement
code works with old and new libical and thus needs no ifdefs.
However, that struct is part of the ABI, which impacts the tricks that
syncevolution.org binaries use to get built against libical v2 and then
run with more recent libs like libical v3.
Depending on the platform ABI, it may still be okay, because the calling code
in SyncEvolution reserves and copies enough bytes for the icaltimetype
instances and because that code never directly accesses any member (is_date,
is_daylight, zone) whose offset changes.
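A sketch of the version-independent style, assuming the standard libical
API:

    #include <libical/ical.h>

    icaltimetype utcNow(time_t now) {
        // icaltime_from_timet() is gone in libical v3; the _with_zone
        // variant exists in both old and new versions.
        icaltimetype t = icaltime_from_timet_with_zone(
            now, 0, icaltimezone_get_utc_timezone());
        // Instead of the removed is_utc member, check the zone:
        if (icaltime_get_timezone(t) == icaltimezone_get_utc_timezone()) {
            // time is in UTC
        }
        return t;
    }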
Original author: Milan Crha <mcrha@redhat.com>
Slightly modified it so that icaltime_t.zone is not set.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Switching the testing to Exchange 2016 revealed a slightly
different time zone mangling than before:
- the "US & C" time zone name already gets truncated before the C
- due to recent rules on the server (?), several TZNAMEs get
changed
Updating the data again...
The account data was unreferenced once too often, or rather, a suitable ref
count increase was missing. A debug build of glib detects that ("GLib:
g_variant_unref: assertion 'value->ref_count > 0' failed"), but without that
check the code might also crash.
Prevents the wallet backend from crashing SyncEvolution when
enabled. Functionality not really tested, though.
PIM backend had compile problems when enabled.
When a backend is inactive, it is meant to ignore generic types like
"calendar". The idea behind that is that typically users install or
compile just the backends they want, and then ask for the "calendar"
backend using the generic sync templates or instructions.
When adding the TDEPIM calendar backend, that broke because it also
instantiated itself for those terms when active. The other TDEPIM
backends already did this as intended.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
The latest activesyncd contains GSettings schema files
which get installed by the top-level makefile, therefore
we have to install everything.
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
The *Register.cpp files and everything they include must compile
without hard dependencies on header files which are TDE specific,
so add some ifdefs. Compiling with TDE backends disabled broke
because of this.
When enabled (untested!), it is unclear how some of these *Register.cpp
could have worked without including the header file that defines the
class they instantiate. Added the necessary includes.
A closing } was missing (found by cppcheck, which tests all variations
of the code, not just those currently enabled).
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
dbus-launch is considered deprecated because of the X11 dependency.
See https://lists.debian.org/debian-devel/2016/08/msg00554.html "Mass bug
filing: use and misuse of dbus-launch (dbus-x11)"
The script still needs to start the D-Bus daemon when used in the nightly
testing, so the code now does it as in
6cc8062cfb
syncevo-http-server still has some usage of dbus-launch left, but that's
strictly for systems which don't have the more modern D-Bus.
With the recent change ("Add a systemd user service as a backend for the D-Bus
session services"), activating syncevo-dbus-server via D-Bus will integrate
better with systemd. When auto-starting via the .desktop file, we can do the
same by activating via D-Bus.
We use dbus-send for that, if available. A recent busctl from systemd could
also be used, but for example the one in Debian Jessie is still too old,
so dbus-send is the better choice. Directly starting the binary is used
as a fallback.
Based on a patch from Simon McVittie.
On systems with a systemd user session and a D-Bus user bus that
uses it for activation, this ensures that syncevo-dbus-server ends
up in its own cgroup, instead of being treated as part of dbus.service.
If org._01.pim.contacts and org.syncevolution are activated in quick
succession, it also prevents a race condition that would make one of
the activations fail, similar to
<https://bugs.freedesktop.org/show_bug.cgi?id=53220> in
telepathy-mission-control.
Apparently there's a race condition in the OBEX transport that causes the
connection to phones via Bluetooth to be shut down prematurely. Some phones
react by doing a slow sync instead of an incremental sync the next time.
Waiting during shutdown should address the problem (however, it was not
possible to confirm this).
syncevolution.org binaries are now getting compiled on Ubuntu Trusty and thus
no longer support distros with older EDS. The code should still compile
against older EDS (for example, for Maemo), but that is not getting tested
anymore.
This allows removing the dynamic linker hacks related to older libraries,
which was only used in those binaries. Instead, backends using libical or EDS
get compiled on Ubuntu Trusty and then the soname of those libs get patched to
make the backend module usable in combination with a different set of
libs. That patching is part of a script maintained in the syncevolution.org
build infrastructure.
This approach was already used before to generate different EDS backends
for EDS versions with the newer EClient API, because that turned out to be
easier than the dynamic loading approach. It works because none of the methods
used by SyncEvolution changed their ABI, only some other parts of the
libraries did. Should there ever be a situation again that cannot be handled
like this, then backends might also get compiled on different distros than
Ubuntu Trusty (however, that may lead to problems due to the libstdc++ ABI
changes - to be decided...).
libical still requires one special hack: system time zone loading in
libical v1 (and only in that version, v2 has builtin support again) must
be overridden such that time zones are generated with rules instead
of transitions because that is more compatible with the peers that
SyncEvolution exchanges data with.
That hack now relies on overriding the two relevant functions inside the main
binaries (has to be there, otherwise libical still ends up calling its own
internal implementation). The overriding code is in
libsyncevo-icaltz-util.so.0 and depends on libical.so.1. If
libsyncevo-icaltz-util.so.0 can be loaded, the wrappers in the main binary use
it, otherwise they fall through to the code from the current libical.so, which
then should be libical.so.2 or more recent.
This hack is active by default when libical v1 is detected during configuration.
Previously, backends were loaded in a constructor. At some point (*), that
loading crashed with a segfault in the dynamic linker. To make debugging a bit
easier and rule out non-determinism as the root cause of that crash, loading
backends was moved into SyncContext::initMain().
(*) the crash happened when there were two backends which couldn't
be loaded due to missing libraries and was related to error strings.
A simpler test program did not trigger the problem, and now SyncEvolution
has suitable backends for (hopefully?!) all platforms, so it no longer
occurs, at least not during automated testing.
SYNCEVOLUTION_DEBUG=1 syncevolution --daemon=no --version now
also dumps the debug information gathered by the binary
compatibility code. It was only available in sync logs before.
We call some pthread methods and thus must link against libpthread.
Previously this must have worked by accident, but started failing
on more recent distros.
DAViCal on Debian Stretch incorrectly re-formats contacts which do not have a
REV property (see http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=837154).
Until this gets fixed we work around the problem by including a REV in all
test cases, which implies refreshing the Google CardDAV patch.
Recent shells filter out environment variables that are not valid shell
variables, so for example,
SYNCEVOLUTION_FILE_SOURCE_DELAY_OPEN_addressbook-slow-server did not get
passed through to syncevo-dbus-server because of the hyphen. As a result, the
tests became unreliable (non-deterministic timing) or outright failed.
Now we ensure that these variables are valid also in a shell and in addition,
make the test stricter such that it detects when the file backend did not
wait.
Recent distros like Debian Stretch have newer dbus-monitor with
extended content that no longer matched the regex used before. The
relaxed regex works with old and new dbus-monitor.
When building on the new syncevolution.org reference platform, Ubuntu Trusty,
building rpm packages with checkinstall fails for unknown reasons. As
providing those rpms is of questionable value (libraries are more likely to be
different on rpm-based distros than on Debian/Ubuntu), building rpm simply
gets disabled entirely.
As a fallback for users there still are the plain .tar.gz archives containing
the same files.
Previously, when --enabled was not given, all tests were run. That was meant
to simplify running all tests, but that is rare and more likely to happen by
mistake when not selecting tests correctly in the automated testing, so now we
revert the logic and don't select a test if no --enable filter is set.
The XDG symlinks were created as fallback in case that the XDG env variables
somehow were ignored. In practice, the symlinks were not needed, so it didn't
matter that they were set incorrectly to the path name outside of the chroot.
But it's confusing when debugging the setup and just plain wrong, so better
fix it...
client-test makes new directories user-readable because that is
the default for all SyncEvolution directories. But after publishing
the test results, they need to be world-readable, because they are
going to be accessed via some kind of web server running under
some different user.
These linker flags are added to those normally used for linking against
libsyncevolution. The intended usage is to extend linking of syncevolution.org
binaries such that static versions of libcppunit, libpcrecpp and libopenobex
are used, because the ABI of those libs have changed such that binaries linked
on Ubuntu Trusty would not run on more recent distros like Ubuntu Xenial.
For example, on Ubuntu Trusty i386 one can configure with:
PCRECPP_LIBS=-lpcre \
LIBOPENOBEX_LIBS=-lpcre \
'--with-extra-core-ldadd=/usr/lib/i386-linux-gnu/libpcrecpp.a -lpcre /usr/lib/libopenobex.a /usr/lib/i386-linux-gnu/libusb.a' \
CPPUNIT_LIBS=/usr/lib/i386-linux-gnu/libcppunit.a
Now cppunit is found via the normal PKG_CHECK_MODULES. The advantage
is that CPPUNIT_CFLAGS and CPPUNIT_LIBS can be overridden with
configure parameters, which will be used to link cppunit statically
into syncevolution.org binaries (libcppunit ABI has changed).
syncevolution.org binaries are going to support distros >= Ubuntu Trusty.
All of those have libnotify.so.4, so the old binary compatibility hack is
no longer needed.
cppcheck correctly warned that initializing a reference is enough,
no need to copy the string:
dbus-sync.cpp:97: cppcheck performance: passedByValue - Function parameter
'source' should be passed by reference.
cppcheck seems to mix up different template variations. The following error
needs to be suppressed in the classes which don't even have an m_a member:
gdbus-cxx-bridge.h:330: cppcheck warning: uninitMemberVar - Member variable
'Set::m_a' is not initialized in the constructor.
gdbus-cxx-bridge.h:339: cppcheck warning: uninitMemberVar - Member variable
'Set::m_a' is not initialized in the constructor.
cppcheck correctly warns about the old code as less efficient (looks
at entire string instead of just the start). Using boost::starts_with
is less obscure and avoids the warning.
Building client-test fails because a const pointer loses its
const attribute while passing it through boost::bind.
Fixes:
boost/lambda/detail/function_adaptors.hpp:357:16: error: invalid conversion
from 'const SyncEvo::SyncSource*' to 'SyncEvo::SyncSource*' [-fpermissive]
return func(a1);
When compiling as C++14 (the default under gcc 6), template
expansion no longer picks the right instantiation of push_back.
We have to disambiguate which of the different push_back
alternatives we want.
Fixes:
Cmdline.cpp:1734:85: error: no matching function for call to 'bind(<unresolved
overloaded function type>, const boost::reference_wrapper<std::__cxx11::list<std::__cxx11::basic_string<char> >
>, const boost::arg<1>&)'
processLUIDs(source, boost::bind(&list<string>::push_back, boost::ref(luids), _1));
The implicit conversion of boost::shared_ptr->bool is no
longer supported when compiling as C++14 (the default under
gcc 6). We have to cast explicitly.
Fixes errors like this:
Logging.h:258:41: error: cannot convert 'const boost::shared_ptr<SyncEvo::Logger>' to 'bool' in return
operator bool () const { return m_logger; }
libical v2 provides support for interoperable timezone definitions
again, so the copy of the original code that was added for libical v1
is not needed anymore and also wouldn't compile.
Therefore SyncEvolution can drop it when being compiled for v2. In that
case, a corresponding change in libsynthesis calls
icaltzutil_set_exact_vtimezones_support(0) to enable interoperable
timezones.
This addresses compilation and running with libical v2, i.e. what
normal distros are doing. Compiling with libical v1 and running
with v2 needs to be addressed separately, if possible at all.
The previous mkdir_p() walked down top to bottom and checked each path
entry as it went along. That approach failed unnecessarily when some
existing parent directory could not be read (non-readable /home, for
example).
The new version tries to create the lowest directories first
(brute-force) and only walks up when that is necessary to avoid an
ENOENT.
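A sketch of the bottom-up approach (the real implementation differs in
details):

    #include <cerrno>
    #include <string>
    #include <sys/stat.h>

    bool mkdir_p(const std::string &path) {
        // Try the full path first; this succeeds even when some parent
        // is unreadable, as long as it exists.
        if (mkdir(path.c_str(), 0777) == 0 || errno == EEXIST) {
            return true;
        }
        if (errno != ENOENT) {
            return false;
        }
        // Only walk up when the parent is really missing.
        std::string::size_type slash = path.rfind('/');
        if (slash == std::string::npos || slash == 0) {
            return false;
        }
        return mkdir_p(path.substr(0, slash)) &&
               mkdir(path.c_str(), 0777) == 0;
    }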
twisted.web.error.NoResource was replaced by
twisted.web.resource.NoResource. This has become a real problem for
example on Fedora 22 where the old name is no longer available.
As reported by Canonical, syncing fails if data items contain
text which is not correct UTF-8 in one of the fields that
SyncEvolution logs in the command line output (like SUMMARY of
a calendar event).
That is because the byte string coming from the item is passed
unchecked to the D-Bus implementation for transmission via D-Bus. But
D-Bus strings must be correct UTF-8, so depending on the D-Bus library
in use, one gets a segfault (GIO D-Bus, due to an unchecked NULL
pointer access) or an "out of memory" error (libdbus, which checks for
NULL).
What the D-Bus bindings now do is check the string in advance (to
avoid error messages inside the D-Bus implementations) and then
replacing all invalid bytes with question marks. The rest of the
string is preserved.
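A sketch of the sanitizing step, assuming glib's UTF-8 validation:

    #include <string>
    #include <glib.h>

    std::string sanitizeUTF8(std::string value) {
        const char *end;
        // Replace each invalid byte with '?' until the whole string
        // validates; the rest of the content is preserved.
        while (!g_utf8_validate(value.c_str(), value.size(), &end)) {
            value[end - value.c_str()] = '?';
        }
        return value;
    }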
Handling this inside the D-Bus binding layer is not the "correct"
solution (which would be to check for UTF-8 in the higher layers where
such bad data might get into SyncEvolution), but it is the less
intrusive and more complete one. Changing the bindings such that all
strings must be declared explicitly as "UTF-8 string" would have been
a way to find all places where such checks are missing, but it turned
out to be too complex and requiring too many changes.
Extracting a meaningful description of each item from the Synthesis
engine when updating and adding items is easy to do for items of
certain known types (contacts and calendar items) and arguably an
improvement; in particular it makes tests like
TestCmdline.testSyncOutput more realistic.
In some cases, the prefix which was supposed to be embedded
in the log messages from the target side of a local sync got
lost on the way to the command line tool.
Primarily this affected the added/updated/deleted messages, as in:
[INFO remote@client] @client/addressbook: started
[INFO remote@client] updating "Joan Doe"
[INFO remote@client] @client/addressbook: received 1/1
This was not caught by automated testing because the corresponding
test (TestCmdline.testSyncOutput and TestCmdline.testSyncOutput2) is
using the file backend, which does not generate such messages at the
moment (to be changed separately).
When sending an access token with insufficient scope (for example,
because the Ubuntu Online Accounts service definition was incomplete,
as documented in FDO #86824), Google responds with a 403 "service
denied" error.
Neon (arguably correctly) treats this as a permanent error and not
as a transient authentication error. Google should better send
a 401 error.
To activate the 401 error handling in SyncEvolution, detect this
special case and turn the general SE_ERROR error into SE_AUTH.
The Ubuntu Online Accounts signon backend complained about
invalid "username" content with an error message containing "sso",
the string used for gSSO.
Define these magic strings once and then use only the defines.
Use the information contained in the AgAuthData object to decide which
authentication method is being used.
Move the common authentication code into a private authenticate()
method.
[PO: fixed g_variant_builder_end() mem leak]
[PO: avoid issue found via cppcheck: returning std::string::c_str() in
a method returning a std::string forces a string copy, which is
inefficient.]
[PO: UiPolicy was not set because of an invalid g_variant_builder_add()
"sv" parameter.]
Instead of passing the session data and the mechanism, pass the full
AgAuthData object to the SignonAuthProvider constructor.
This object can be used to build the session data without converting all
dictionaries to and from GHashTable.
[PO: fixed g_variant_builder_end() mem leak]
[PO: ForceTokenRefresh was not set because of an invalid g_variant_builder_add()
"sv" parameter.]
It's not sync-evolution's task to complete the account creation; on those
platforms where this backend is deployed (Ubuntu, Elementary, KDE) the
account should already have been created with a proper SignonIdentity
attached to it.
Remove the failure count from the getOAuth2Bearer() method, and add an
invalidateCachedSecrets() method instead.
This moves the logic of how to deal with failures back to the
AuthProvider backend and simplifies the session code, which only needs
to call invalidateCachedSecrets() when the token is wrong.
This will help implementing a similar logic for the getCredentials()
method, where authentication errors could lead to requesting a new
password.
[PO: avoid issue found via cppcheck: returning std::string::c_str() in
a method returning a std::string forces a string copy, which is
inefficient.]
The libaccounts-based backends ("gsso" and "uoa") are going to use more
of libaccounts-glib, making sharing of code with the plain "signon"
plugin much harder to maintain.
Since 1.4.99.4, syncing WebDAV collections always checks first
whether there are items in the collections. This was partly done for
slow sync prevention (which is not necessary for empty collections),
partly for the "is the datastore usable" check.
However, this did not take into account that for CalDAV collections,
the entire content gets downloaded for this check. That is because
filtering by item type (VEVENT vs. VJOURNAL) is not implemented
correctly by all servers. So now all CalDAV syncs, whether incremental
or slow, always transferred all items, which is not the
intention (incremental syncs should be fast and efficient).
This commit adds a more efficient isEmpty() check: for simple CardDAV
collections, only luid and etag get transferred, as in
listAllItems(). This is the behavior from 1.5.
For CalDAV, a report with a filter for the content type is used and
the transfer gets aborted after the first item, without actually
double-checking the content of the item. This is different from
listAllItems(), which really transfers the content. This extra content
check would only be needed for some old servers (Radicale 0.7) and is
not essential, because reporting "not empty" even when empty is safe.
Capture the item status and pass it to the response handler.
Response handlers are allowed to return a non-zero integer when using
the initAbortingReportParser(), which then aborts processing of the
response.
This leads to errors being returned by
ne_xml_dispatch_request(). Session::run() needs to be told if the
request was aborted, in which case all errors are ignored.
It turned out that finding databases on an Apple Calendar server accessed via
http depends on sending Basic Auth even when the server does not ask for it:
without authentication, there is no information about the current principal,
which is necessary for finding the user's databases.
To make this work again, sending the authentication header is now forced for
plain http if (and only if) the request which should have returned the
principal URL fails to include it. This implies sending the same request
twice, but as this scenario should be rare in practice (was only done for
testing), this is acceptable.
Occasionally the script shutdown got stuck on Ubuntu Vivid because killing
the background daemon failed although it was still running, thus causing the
wait to hang forever.
Not exactly sure what caused this. The enhancement tries to fall back to
killing the process instead of the process group (in case that there is
a race condition, which shouldn't be the case when waiting for the daemon),
preserves stderr from the kill commands and adds ps output when there is
an unexpected failure.
The test started to fail in 2015 because the VTIMEZONE generated by
the code copied from libical has a minor dependency on the current
time: it finds the transitions for the current year, then pretends
that the same rule applied since 1970. While doing that, it uses this
years transition date and just replaces the year. Because the exact
date varies between years, the result depends on the current year.
The test now simply ignores the day of the month, while still checking
year and month. Those should always be the same.
When storing a pointer with PlainGStr::reset(), the default delete []
was used to free the memory. Should have used g_free(), as in the
constructor. Affects the signon backend.
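PlainGStr itself is part of SyncEvolution; as an illustration of the
required pairing, a plain std::unique_ptr equivalent:

    #include <glib.h>
    #include <memory>

    using GStrPtr = std::unique_ptr<gchar, decltype(&g_free)>;

    void demo() {
        GStrPtr s(g_strdup("example"), g_free);
        // reset() must release the previous pointer with g_free(), the
        // allocator's counterpart, not with delete [].
        s.reset(g_strdup("other"));
    }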
cppcheck warned about the non-portable initialization of a float
via memset. It is okay on all current architectures, but better
assign a real float 0 value anyway, while keeping the memset for
the rest of the elements.
The second memset() had already become redundant when adding the
PullParams constructor.
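A sketch with a hypothetical layout (the real PullParams has more
members):

    #include <cstring>

    struct PullParams {
        float ratio;
        unsigned counts[4];
    };

    void demo() {
        PullParams params;
        // memset() keeps zeroing the integer members...
        memset(&params, 0, sizeof(params));
        // ...but all-bits-zero is not guaranteed to be 0.0f on every
        // architecture, so assign the float explicitly:
        params.ratio = 0.0f;
    }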
When the SuspendFlags::handleSignal() signal handler got called while
the program was allocating memory in libc, a deadlock occurred when
the mutex locking code in RecMutex::lock() tried to allocate the guard
instance (see backtrace below).
This could be fixed with new mutex code which gives up the guard
concept, a guard concept using C++11 mechanisms (copy semantics of a
stack-allocated guard) or not locking at all in getSuspendFlags().
This commit uses the latter. It is simpler and can still be done
correctly because locking is not necessary except when the singleton
has not been allocated yet. That part gets moved into the process
startup phase via SyncContext::initMain().
Thread 1 (Thread 0x7fec389e6840 (LWP 16592)):
[backtrace abbreviated: a realloc (malloc.c:4208, malloc.c:3029) was
interrupted by the signal handler (SuspendFlags.cpp:280 ->
SuspendFlags.cpp:58), which locked the mutex (ThreadSupport.h:81,
ThreadSupport.h:60) and allocated its guard via boost::shared_ptr
(shared_ptr.hpp:363, shared_count.hpp:167) for SyncEvo::suspendRecMutex,
ending up blocked on libc's low-level lock (lowlevellock.S:95)]
Because of the confusion around the libical 1.0 ABI, Debian ended
up renaming the package for Debian Jessie while keeping the libical.so.1
soname. Pre-built binaries from older distros work, so just adapt
the package meta data.
The class is used for non-GObject types, so we must use a
type-specific method to increase the
ref-count. intrusive_ptr_add_ref() does that automatically for us.
Calling g_object_ref() was a cut-and-paste error.
The previous commit for the problem caused a compilation error
in the PBAP backend, because Bluez uses a 64 bit type that no longer
had a type trait on 32 bit architectures.
Adding "[unsigned] long long" traits fixes that. It seems to work
also on 64 bit architectures, so I am not adding any ifdefs and just
enable it unconditionally.
This fixes a potential security risk. It also avoids connection problems
with clients that don't support SSLv3 anymore, like the syncevolution SyncML
client itself.
Closes: #772040
size_t is defined as unsigned long on s390 in 32 bit mode, which is not
the same type as uint32_t = unsigned int. This caused a compile failure
due to the missing type trait.
The solution is to define traits for all the standard language types instead
of the fixed-size types. This removes the 64 bit types on 32 bit platforms but
adds the types that were missing on s390.
All the types that we use in the D-Bus bindings are among these standard
types, so that is okay.
This fixes a crash/memory corruption issue in the plain signon backend
as used in Tizen.
The real bug was that HashTable2Variant() creates a GHashTableCXX by
stealing a floating GVariant without actually acquiring the reference,
leading to various glib errors caused by double free and/or invalid
memory access.
The code in signon.cpp receiving the GHashTableCXX was correct, albeit
overly complicated (creating a new reference via g_variant_ref_sink()
explicitly instead of relying on GHashTableCXX semantic).
A local sync or Bluetooth sync do not need the 'username' property.
When it is set despite that, issue a warning.
Previously, the value was checked even when not needed, which
caused such syncs to fail when set to something other than a plain
username.
Useful for correlating events in the daemon with events in testing.
We need to use a process group to deliver SIGINT/SIGTERM, otherwise
we cannot be sure that we catch all processes created by the daemon.
The return code of the daemon was not checked before (accidentally?!)
and this patch does not change that. Might be fixed in the future.
The daemon log gets reused when starting daemons multiple times.
When starting a second time, the simple grep found the
output of the previous run and did not wait properly. We need to count
matching lines and continue once a new one appears.
Multiple different wrappercheck instances are running concurrently
in the nightly testing, so better include a prefix with unique PID
in the "set -x" output to identify where it came from.
Occasionally, syncevo-dbus-server locked up after receiving
a CTRL-C. This primarily affected nightly testing, in particular (?)
on Ubuntu Lucid.
The reason was two-fold:
- indirectly allocating memory when getSuspendFlags() had to
allocate the guard instance of the mutex
- glib mutex held by some other thread when trying to do
g_main_loop_quit().
SuspendFlags already handles signals without these problems. So
instead of overriding that, let SuspendFlags signal the arrival
of a signal by writing to its FD and monitor that in our main
event loop, then quit the main event loop from inside the main
thread itself.
The issue with malloc being called in niam() in syncevo-dbus-server
and deadlocking was only found after adding gdb's output of the
syncevo-dbus-server stack backtrace.
"primary 80" ends up being recognized as phone number, adding the E.164
parameter if (and only if) EDS was compiled with libphonenumber support
(Debian Jessie!). Adding that breaks the comparison, so avoid the situation
by updating to a string which is not a phone number.
Akonadi uses different quoting for special characters. Probably would have to
be looked into and reported, but for now ignore it by not testing these
special cases.
When syncing Akonadi with file source, neither side recognizes duplicates
based on UID/RECURRENCE-ID. Should be added to Akonadi source. For now ignore
it.
The more complex "avoid data loss during merging" scripting ran for longer
than 5s limit under extreme conditions (full logging, busy system, running
under valgrind), which resulted in aborting the script and a 10500 "local
internal error" sync failure.
The endless loop prevention should only be necessary to detect programming
mistakes, so better disable it entirely.
When client-test starts, it determines all tests that would get
run, then runs all of them one-by-one in new instances. A single
test gets run directly.
The output changes slightly: the CppUnit summary with number of
failures or errors is no longer available and there are additional
blank lines between tests.
The advantage is that each single test is properly isolated from the
other, which is closer to how real syncs run. It also helps when
running under valgrind, because a leak can be attributed exactly to
one test and because it avoids permanently growing memory consumption
in long-running client-test runs (seen in the nightly testing even
when there were no leaks, perhaps because of memory fragmentation).
A potential downside of this change is that unexpected and undesirable
side effects of modules might no longer show up in testing, only when
combining them in real syncs. This should still be covered by
Client::Sync tests involving multiple modules.
The shared providersignon.so ended up being compiled with "gsso" as
prefix for the username. We need to check that this really works by also
testing for USE_ACCOUNTS.
Bug #84710 was not found by the automated testing because the file
source still sent re-encoded items. Switching it to raw items
triggered the bug and now helps to avoid regressions in the modified
merge script.
After changing PBAP to send raw items, caching them led to unnecessary
disk writes and bogus "contacts changed" reports. That's because
the merge script relied on the exact order of properties, which was
only the same when doing the redundant decode/encode on the PBAP side.
Instead of reverting back to sending re-encoded items, better enhance
the contact merge script such that it detects contacts as unchanged
when just the order of entries in the property arrays is different.
This relies on an enhanced libsynthesis with the new RELAXEDCOMPARE()
and modified MERGEFIELDS().
Setting SYNCEVOLUTION_PBAP_CHUNK_TRANSFER_TIME to a value <= 0 does not
work as described in the README file.
Currently, in that case, the desired chunk size decreases very quickly
to 0, causing the synchronization to finish without downloading all
contacts.
This patch disables the tuning of the desired chunk size in that case,
keeping the transfer size constant and equal to the initially set value.
When sent
    SUMMARY:Simple
    DESCRIPTION:Simple
Memotoo returns
    SUMMARY:Simple
    DESCRIPTION:Simple\nSimple
Remove this particular test case until the problem in the server
is fixed.
That the backend is based on a refresh token is an implementation
detail; it might even change at some point if we figure out how
to do OAuth2 internally.
Better use the shorter "oauth2" name in the "username" property.
Also better align .so name and global variable names/defines in
configure and Makefile with the name of the backend.
There is no guarantee that the password can be updated. Allow the attempt
to fail without treating it as an error (that would be confusing in the most
common case, where "password" was given on the command line) and instead
just print an INFO message with the new token and some instructions.
All the values in the username hash are strings. This is unlikely to
change and if it does, we can always do some string-to-value
conversion in our own code. Therefore use the simpler syntax without
the angle brackets marking a variant value.
Simplifies our C++ code, too, and now also compiles on Ubuntu Trusty
(which doesn't have a recent enough glib for G_VARIANT_TYPE_VARDICT).
Extracting actual values from a hash with GVariant as value is more
work than reading string values directly. For cases where only strings
may occur as values, the simpler string hash is therefore preferred.
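To illustrate the difference, reading one attribute from an "a{ss}"
hash with GLib is a single call (sketch, not the actual signon code):

    #include <glib.h>

    // With "a{sv}" the same lookup would need format "v" plus an
    // extra g_variant_get_string() on the contained variant.
    static gchar *lookup_attr(GVariant *attrs /* type a{ss} */,
                              const char *key)
    {
        const gchar *value = NULL;
        if (g_variant_lookup(attrs, key, "&s", &value)) {
            return g_strdup(value);
        }
        return NULL;
    }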
Avoid the term HMI because it is only used in certain communities.
Make the example username work for CalDAV and CardDAV and base it on
GOA, because that's what most users will have installed.
The only difference is the name of the module and the path to json.h,
which can be found as just json.h when using the -I statements from
the .pc files.
The new backend implements an identity provider which obtains OAuth2
access tokens on systems without HMI support.
The access token is obtained by making a direct HTTP request to the
OAuth2 server, using a refresh token that the user obtained in some
other way.
The new provider automatically updates the stored refresh token when
the OAuth2 server issues a new one.
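The request itself is a standard OAuth2 refresh-token exchange. A
minimal sketch, assuming libsoup 2.x and made-up parameter names (the
real backend must also URL-encode the values and parse the JSON
response):

    #include <libsoup/soup.h>
    #include <string.h>

    static SoupMessage *build_refresh_request(const char *token_uri,
                                              const char *client_id,
                                              const char *client_secret,
                                              const char *refresh_token)
    {
        SoupMessage *msg = soup_message_new("POST", token_uri);
        char *body = g_strdup_printf("grant_type=refresh_token"
                                     "&client_id=%s"
                                     "&client_secret=%s"
                                     "&refresh_token=%s",
                                     client_id, client_secret,
                                     refresh_token);
        // SOUP_MEMORY_TAKE hands ownership of "body" to the message.
        soup_message_set_request(msg,
                                 "application/x-www-form-urlencoded",
                                 SOUP_MEMORY_TAKE, body, strlen(body));
        return msg;
    }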
It used to be necessary to specify a CA file for libsoup to enable SSL
certificate checking. Nowadays libsoup uses the default CA store
unless told otherwise, so the check in SyncEvolution became
obsolete. However, now there is a certain risk that no SSL checking is
done although the user asked for it (when libsoup is not recent enough
or not compiled correctly).
So far, only the "username" property was used once identity providers
were involved. The upcoming oauth2 provider uses the "password"
property for the refresh token and needs the ability to store a new
token if the OAuth2 server updates it.
Setting the new value will not always be possible (for example,
when running a command line operation where all properties were
provided on the command line without a permanent config). This still
needs to be handled.
The same code will also be needed by the upcoming oauth2 backend. The
header file depends on glib, so don't install it as a public header
file of libsyncevolution (i.e. only allow using it internally).
The original signon code used a custom GVariant auto_ptr because it
needed the simple assignment semantic of "transfers ownership". Now
that the GVariantCXX class is in a shared header file, better make it
identical to the other CXX classes (i.e., a shared pointer). Avoid
returning plain pointers because ownership of those cannot be checked
by the compiler.
The signon case can be handled with GVariantStealCXX, which by default
takes ownership in the constructor and the default assignment
operator.
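The two ownership policies can be sketched with a shared pointer and
the GVariant reference counting API (illustrative, not the actual
class code):

    #include <glib.h>
    #include <memory>

    typedef std::shared_ptr<GVariant> GVariantPtr;

    // Like GVariantCXX: take an additional reference.
    static GVariantPtr refVariant(GVariant *var)
    {
        return GVariantPtr(g_variant_ref(var), g_variant_unref);
    }

    // Like GVariantStealCXX: take over the caller's reference.
    static GVariantPtr stealVariant(GVariant *var)
    {
        return GVariantPtr(var, g_variant_unref);
    }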
When using "raw/text/calendar" or "raw/text/vcard" as SyncEvolution
"databaseFormat", all parsing and conversion is skipped. The backend's
data is identical to the item data in the engine. Finding duplicates
in a slow sync is very limited when using these types because the entire
item data must match exactly.
This is useful for the file backend when the goal is to store an exact copy
of what a peer has or for limited, read-only backends (PBAP).
The implementation uses the "raw" base type which does nothing but store
the item content verbatim. That type requires a field named "ITEMDATA". After
renaming the variable used by our MAKETEXTWITHPROFILE/PARSETEXTWITHPROFILE
with other, more complex types, all we have to do is skip this conversion
by not providing a profile name.
In a refresh-from-remote local sync of 1000 contacts with photo data
using the file backend, this reduces the number of CPU cycles from
2861476184 to 1840952543 on the server side.
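For example, a file datastore keeping verbatim copies might be set up
like this (config name and database path made up):

    syncevolution --configure \
        backend=file \
        database=file:///home/user/vcards \
        databaseFormat=raw/text/vcard \
        @peer-copy addressbook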
The Synthesis API does not say so explicitly, but in practice all map
items get updated in a tight loop. Rewriting the m_mappingNode (case
insensitive string comparisons) and the serialization to disk
(std::ostrstream) consume a significant amount of CPU cycles and cause
extra disk writes.
It is better to allow changes to be cached in memory instead and only
flush to disk at the end. There is no explicit flush API, so we use
SaveAdminData as trigger for flushing instead.
In a refresh-from-remote local sync of 1000 contacts with photo data
using the file backend, this reduces the number of CPU cycles from
4138538276 to 2861476184 (measured with callgrind on x86_64).
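Sketched, the caching pattern replaces per-item writes with a single
flush (names illustrative):

    #include <map>
    #include <string>

    class MappingCache
    {
        std::map<std::string, std::string> m_items;
        bool m_dirty;

    public:
        MappingCache() : m_dirty(false) {}

        void updateMapItem(const std::string &key,
                           const std::string &value)
        {
            m_items[key] = value; // in memory only, no disk write
            m_dirty = true;
        }

        // Called when the engine invokes SaveAdminData.
        void flush()
        {
            if (m_dirty) {
                writeToDisk(); // one write instead of one per item
                m_dirty = false;
            }
        }

    private:
        void writeToDisk(); // serialization not shown
    };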
Encoding/decoding of the uint8_t array in D-Bus took a surprisingly
large amount of CPU cycles relative to the rest of the SyncML message
processing. Now the actual data resides in memory-mapped temporary
files and the D-Bus messages only contain offset and size inside these
files. Both sides use memory mapping to read and write directly.
For caching 1000 contacts with photos on a fast laptop, total sync
time roughly drops from 6s to 3s.
To eliminate memory copies, memory handling in libsynthesis or rather,
libsmltk is tweaked such that it allocates the buffer used for SyncML
message data in the shared memory buffer directly. This relies on
knowledge of libsmltk internals, but those shouldn't change and if they
do, SyncEvolution will notice ("unexpected send buffer").
Unusually small values were ignored during syncing. Doing this in the config
access methods instead of the sync session setup ensures that all components
in SyncEvolution use the same value.
For this to work, the memory must be mapped MAP_SHARED (otherwise
writes are not visible outside of the process) and the recipient
of a file descriptor must be able to create a TmpFile for it, which
then can be used to create a memory mapping.
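The core of the receiving side can be sketched like this (error
handling omitted):

    #include <cstddef>
    #include <sys/mman.h>

    // Map a temporary file's descriptor, as received via D-Bus.
    // MAP_SHARED is essential: with MAP_PRIVATE, writes made by the
    // other process would not be visible here.
    static void *mapSharedBuffer(int fd, size_t size)
    {
        void *addr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        return addr == MAP_FAILED ? NULL : addr;
    }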
During TestCmdline.testSyncOutput also verify that syncs which don't
change the database also don't update the ~/.config meta data.
The listall() method from testpim.py is used and extended for that
and therefore moved to test-dbus.py.
The sync meta data (sync anchors, client change log) get updated after
a sync even if nothing changed and the existing meta data could be
used again. This can be skipped for local sync, because then
SyncEvolution can ensure that both sides skip updating the meta
data. With a remote SyncML server that is not possible and thus
SyncEvolution has to update its data.
This optimization is only used for local syncs with one source. It is
based on the observation that when the server side calls
SaveAdminData, the client has sent its last message and the sync is
complete. At that point, SyncEvolution can check whether anything has
changed and if not, skip saving the server's admin data and stop the
sync without sending the real reply to the client.
Instead the client gets an empty message with "quitsync" as content
type. Then it takes shortcuts to close down without finalizing the
sync engine, because that would trigger writing of meta data
changes. The server continues its shutdown normally.
This optimization is limited to syncs with a single source, because
the assumption about when aborting is possible is harder to verify
when multiple sources are involved.
During syncs where nothing changes, resetting the database revision and
setting it again at the end of the sync causes undesirable disk
writes. To avoid that, reset the database revision the first time an
item change is done.
Sometimes the new content of the file is exactly the same as the old
one, for example when the tracking node gets reset and
recreated. Detect that for all cases at the lower level by reading the
existing file first and comparing it against the new content.
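The lower-level check boils down to this (sketch):

    #include <fstream>
    #include <sstream>
    #include <string>

    static void writeFileIfChanged(const std::string &path,
                                   const std::string &newContent)
    {
        std::ifstream in(path.c_str(), std::ios::binary);
        if (in) {
            std::ostringstream old;
            old << in.rdbuf();
            if (old.str() == newContent) {
                return; // identical content, skip the disk write
            }
        }
        std::ofstream out(path.c_str(),
                          std::ios::binary | std::ios::trunc);
        out << newContent;
    }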
Synthesis cleanup was supposed to happen between "closing session" and
"session closed" log messages. Because we kept additional references
to the session, this only happened later. Need to drop also references
to shared buffers, because those hold a reference to their session.
This allows signal handlers to affect the operation execution and
result without having to throw an exception, which would get logged as
error and is not desirable when the operation gets skipped
intentionally. Needed for skipping SaveAdminData in the next patch.
The engine gets instantiated twice: once for logging, once for
syncing. The config generated in each case is different because
sources only become active for syncing. Therefore the config change
check unnecessarily triggered during each sync (first getConfigXML
call created one hash, then the second another), causing DevInf to be
sent unnecessarily and always rewriting the .ini file.
The check is only needed when instantiating the engine that is getting
used for syncing. However, the config date must always be valid.
This adds "protocol: CardDAV" as a valid value, with corresponding
changes to the interpretation of some existing properties and some new
ones. The API itself is not changed.
Suspending a CardDAV sync is possible. This freezes the internal
SyncML message exchange, so data exchange with the CardDAV server may
continue for a while after SuspendPeer().
Photo data is always downloaded immediately. The "pbap-sync" flag
in SyncPeerWithFlags() has no effect.
Syncing can be configured to be one-way (local side is read-only
cache) or two-way (local side is read/write). Meta data must be
written either way, to speed up caching or allow two-way syncing. The
most common case (no changes on either side) will have to be optimized
such that existing meta data is not touched and thus no disk writes
occur.
A SuspendPeer() only succeeded while the underlying Bluetooth transfer
was active. Outside of that, Bluez errors caused SyncEvolution to
attempt a cancelation of the transfer and stopped the sync.
When the transfer is still queueing, obexd returns
org.bluez.obex.Error.NotInProgress. This is difficult to handle for
SyncEvolution: it cannot prevent the transfer from starting and has to
let it become active before it can suspend the transfer. Canceling
would lead to difficult to handle error cases (like partially parsed
data) and therefore is not done.
The Bluez team was asked to implement suspending of queued transfers
(see "org.bluez.obex.Transfer1 Suspend/Resume in queued state" on
linux-bluetooth@vger.kernel.org), so this case might not happen
anymore with future Bluez.
When the transfer completes before obexd processes the Suspend(),
org.freedesktop.DBus.Error.UnknownObject gets returned by
obexd. SyncEvolution can ignore errors which occur after the active
transfer completed. In addition, it should prevent starting the next
one. This may be relevant for transfers in chunks, although the sync
engine will also stop asking for data and thus typically no new
transfer gets triggered anyway.
When syncevo-dbus-server was shutting down due to a signal and there
was a sync session running that was triggered via the PIM Manager API,
then a segfault occurred because the manager instance got destroyed
before invoking a Session callback holding a pointer to it.
The fix is to use slot tracking of the weak pointer.
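With Boost.Signals2 (which SyncEvolution uses for such callbacks) the
pattern looks roughly like this; the struct and method names are made
up:

    #include <boost/signals2.hpp>
    #include <boost/shared_ptr.hpp>

    struct Manager { void sessionDone() {} };

    static void connectSessionDone(
        boost::signals2::signal<void ()> &sig,
        const boost::shared_ptr<Manager> &manager)
    {
        // track() turns the connection into a weak reference: once
        // the manager is destroyed, the slot is silently dropped
        // instead of being invoked with a dangling pointer.
        sig.connect(boost::signals2::signal<void ()>::slot_type(
                        &Manager::sessionDone, manager.get())
                    .track(manager));
    }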
CTRL-C while waiting for the end of a sync causes an interactive
prompt to appear where one can choose between suspend/resume/abort and
continuing to wait. CTRL-C again in the prompt aborts the script.
The help text used single quotes for the JSON example instead of
the required double quotes. Running without --sync-flags was broken
because of trying to parse the empty string as JSON.
When configuring a new sync config, the command line checks whether a
datastore is usable before enabling it. If no datastores were listed
explicitly, only the usable ones get enabled. If unusable datastores
were explicitly listed, the entire configure operation fails.
This check was based on listing databases, which turned out to be too
unspecific for the WebDAV backend: when "database" was set to some URL
which is good enough to list databases, but not a database URL itself,
the sources were configured with that bad URL.
Now a new SyncSource::isUsable() operation is used, which by default
just falls back to calling the existing Operations::m_isEmpty. In
practice, all sources either check their config in open() or the
m_isEmpty operation, so the source is usable if no error is
encountered.
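The fallback can be summarized as follows (illustrative sketch, not
the actual code):

    #include <functional>

    // A datastore counts as usable when checking for emptiness runs
    // without throwing: sources validate their configuration either
    // in open() or inside the m_isEmpty operation.
    static bool isUsable(const std::function<bool ()> &isEmpty)
    {
        try {
            isEmpty();
            return true;
        } catch (...) {
            return false;
        }
    }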
For WebDAV, the usability check is skipped because it would require
contacting a remote server, which is both confusing (why does a local
configure operation need the server?) and could fail even for valid
configs (server temporarily down). The check was incomplete anyway
because listing databases gave a fixed help text response when no
credentials were given. For usability checking that should have
resulted in "not usable" and didn't.
The output during the check was confusing: it always said "listing
databases" without giving a reason why that was done. The intention
was to give some feedback while a potentially expensive operation
ran. Now the isUsable() method itself prints "checking usability" if
(and only if!) such a check is really done.
Sometimes datastores were checked even when they were about to be
configured as "disabled" anyway. Now checking such datastores is
skipped.
The log prefix seems to be unused (except for some debug message) at
the moment but will be soon, so make sure it gets embedded in the
string sent to the syncevo-dbus-server.
Will become relevant with the next commit, which moves the
source name from being part of the text into the prefix. So far,
the prefix is unused during Cmdline tests.
When deriving from DummySyncSource we need to implement fewer
pure virtual functions. Only the ones where the behavior
needs to differ from DummySyncSource are required.
When syncing memos with a peer which also supports iCalendar 2.0 as
data format, the engine will now pick iCalendar 2.0 instead of
converting to/from plain text. The advantage is that some additional
properties like start date and categories can also be synchronized.
The code is a lot simpler, too, because the EDS specific iCalendar 2.0
<-> text conversion code can be removed.
When sending a VEVENT, DESCRIPTION was set to the SUMMARY if empty. This may
have been necessary for some peers, but for notes (= VJOURNAL) we don't know
that (hasn't been used in the past) and don't want to alter the item
unnecessarily, so skip that part and allow empty DESCRIPTION.
When receiving a plain text note, the "text/calendar+plain" type
used to store the first line as summary and the rest as description.
This may be correct in some cases and wrong in others.
The EDS backend implemented a different heuristic: there the first
line is copied into the summary and stays in the description. This
makes a bit more sense (the description alone is always enough to
understand the note). Therefore and to avoid behavioral changes
for EDS users when switching the EDS backend to use text/calendar+plain,
the engine now uses the same approach.
For this to work in all cases, the datatype must always call
MEMO_INCOMING_SCRIPT.
The word "source" implies reading, while in fact access is read/write.
"datastore" avoids that misconception. Writing it in one word emphasizes
that it is single entity.
While renaming, also remove references to explicit --*-property
parameters. The only necessary use today is "--sync-property ?"
and "--datastore-property ?".
--datastore-property was used instead of the short --store-property
because "store" might be mistaken for the verb. It doesn't matter
that it is longer because it doesn't get typed often.
--source-property must remain valid for backward compatibility.
As many user-visible instances of "source" as possible got replaced in
text strings by the newer term "datastore". Debug messages were left
unchanged unless some regex happened to match it.
The source code will continue to use the old variable and class names
based on "source".
Various documentation enhancements:
Better explain what local sync is and how it involves two sync
configs. "originating config" gets introduces instead of just
"sync config".
Better explain the relationship between contexts, sync configs,
and source configs ("a sync config can use the datastore configs in
the same context").
An entire section on config properties in the terminology
section. "item" added (Todd Wilson correctly pointed out that it was
missing).
Less focus on conflict resolution, as suggested by Graham Cobb.
Fix examples that became invalid when fixing the password
storage/lookup mechanism for GNOME keyring in 1.4.
The "command line conventions", "Synchronization beyond SyncML" and
"CalDAV and CardDAV" sections were updated. It's possible that the
other sections also contain slightly incorrect usage of the
terminology or are simply outdated.
Google has turned off their SyncML server, so the corresponding
"Google Contacts" template became useless and needs to be removed. It
gets replaced by a "Google" template which combines the three
different URLs currently used by Google for CalDAV/CardDAV.
This new template can be used to configure a "target-config@google"
with default calendar and address book database already enabled. The
actual URL of these databases will be determined during the first
sync using them.
The template relies on the WebDAV backend's new capability to search
multiple different entries in the syncURL property for databases. To
avoid listing each calendar twice (once for the legacy URL, once with
the new one) when using basic username/password authentication, the
backend needs a special case for Google and detect that the legacy URL
does not need to be checked.
The syncURL property may contain multiple different space or tab
separated URLs. Previously, the WebDAV backend only used the first one
when scanning for databases. Now it tries all of them.
This will be useful for configuring all Google endpoints in one
template.
Certain domains (like googleapis.com) are aliases. The lookup tools
do not fail with NXDOMAIN in this case, causing the
syncevo-webdav-lookup script to return only an unknown lookup error,
in which case the WebDAV backend will retry for a while.
We need to detect aliases resp. missing SRV information and return a
"no information error".
The asyncError test relied on calling
folks_individual_aggregator_remove_individual() incorrectly
with a NULL pointer. folks in Debian Testing just crashes
now, so we can no longer do this.
Previously, only syncURL=local://@<context name> was allowed and used
the "target-config@context name" config as target side in the local
sync.
Now "local://config-name@context-name" or simply "local://config-name"
are also allowed. "target-config" is still the fallback if only a
context is given.
It also has one more special meaning: "--configure
target-config@google-calendar" will pick the "Google_Calendar"
template automatically because it knows that the intention is to
configure the target side of a local sync. It does not know that when
using some other name for the config, in which case the template (if
needed) must be specified explicitly.
The process name in output from the target side now also includes the
configuration name if it is not the default "target-config".
The first progress signal gets emitted after sleeping for 10 seconds
at the start of the sync and then killing syncevo-dbus-server races
with completing the sync. What we want is to kill during the 10 second
wait, so we better wait for the debug output directly before it and
then kill directly.
When configuring a WebDAV server with username = email address and no
URL (which is possible if the server supports service discovery via
the domain in the email address), then storing the credentials in the
GNOME keyring used to fail with "cannot store password in GNOME
keyring, not enough attributes".
That is because GNOME keyring seemed to get confused when a network
login has no server name and some extra safeguards were added to
SyncEvolution to avoid this.
To store the credentials in the case above, the email address now gets
split into user and domain part and together get used to look up the
password.
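The splitting itself is straightforward (sketch):

    #include <string>

    // Derive the user/domain attributes for the keyring entry from
    // an email address; returns false if there is nothing to split.
    static bool splitEmail(const std::string &email,
                           std::string &user, std::string &domain)
    {
        std::string::size_type at = email.find('@');
        if (at == std::string::npos || at == 0 ||
            at + 1 == email.size()) {
            return false;
        }
        user = email.substr(0, at);
        domain = email.substr(at + 1);
        return true;
    }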
When doing PBAP caching, we don't want any meta data written because
the next sync would not use it anyway. With the latest libsynthesis
we can configure "/dev/null" as datadir for the client's binfiles and
libsynthesis will avoid writing them.
The PIM manager uses this for PBAP syncing automatically. For testing
it can be enabled by setting the SYNCEVOLUTION_EPHEMERAL env variable.
The recent change which introduced checking of the target session only
worked if the entries were listed in the "right" (= target-config
entry last) order. Don't rely on that; instead check for the one new entry
that we should find and use that.
As pointed out by compiler warnings on recent distros, the write()
result in the SuspendFlags signal handler was ignored. Not sure
whether it really can fail in practice. Handle some
possible errors with retrying.
The Cmdline.cpp unit tests did not check the symlink() result,
causing a warning resp. error depending on the configure options.
Reported by Emiliano Heyns.
This reverts commit c435e937cd.
Commit 7b636720a in libsynthesis fixes an uninitialized memory read in
the asynchronous item update code path.
Testing confirms that we can now use batched writes reliably with EDS
(the only backend currently supporting asynchronous writes +
batching), so this change enables it again also for local and
SyncEvolution<->SyncEvolution sync (with asynchronous execution of
contact add/update overlapped with SyncML message exchanges) and other
SyncML syncs (with changes combined into batches and executed at the
end of each message).
Instead of downloading contacts one-by-one with GET, SyncEvolution now
looks at contacts that are most likely going to be needed soon and
gets all of them at once with addressbook-multiget REPORT.
The number of contacts per REPORT is 50 by default, configurable by
setting the SYNCEVOLUTION_CARDDAV_BATCH_SIZE env variable.
This has two advantages:
- It avoids round-trips to the server and thus speeds up a large
download (100 small contacts with individual GETs took 28s on
a fast connection, 3s with two REPORTs).
- It reduces the overall number of requests. Google CardDAV is known
to start issuing intermittent 401 authentication errors when the
number of contacts retrieved via GET is too large. Perhaps this
can be avoided with addressbook-multiget.
This is similar to read-ahead in EDS contacts. However, because we
cannot run Neon requests asynchronously (at least not easily), a
batched read must complete before any contact from it can be
returned to the caller.
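The REPORT body follows RFC 6352; a request for two contacts looks
roughly like this (hrefs made up):

    <?xml version="1.0" encoding="utf-8" ?>
    <C:addressbook-multiget xmlns:D="DAV:"
                            xmlns:C="urn:ietf:params:xml:ns:carddav">
      <D:prop>
        <D:getetag/>
        <C:address-data/>
      </D:prop>
      <D:href>/addressbook/contact1.vcf</D:href>
      <D:href>/addressbook/contact2.vcf</D:href>
    </C:addressbook-multiget>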
Arbitrary sync flags can be passed to SyncPeerWithFlags() with the
new --sync-flags command line parameter. The value of that parameter
must be a JSON formatted hash.
The advantage of this approach over explicit command line parameters
(like --progress-frequency) is that we do not need to add further code
when adding new flags. We can also pass intentionally broken flags
to check the PIM Manager's error handling.
The downside is that it is a bit less user-friendly.
With this change, progress frequency and PBAP sync can be set in two
ways, either as part of --sync-flags or with the individual command
line parameters. The latter override --sync-flags.
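A hypothetical invocation combining the two flags mentioned above
could look like this (flag names assumed; note the shell single quotes
around the JSON, which itself must use double quotes):

    --sync-flags '{"progress-frequency": 1.0, "pbap-sync": "incremental"}'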
When the value of a flag has the wrong type, then dumping the value
may provide some hint about what is wrong. Note that values of types
not expected at all by the SyncFlags type get filtered out
already by our D-Bus binding, in which case the error message shows
no value.
syncevo-webdav-lookup is needed during testing when doing
WebDAV database scans, so build it in "src" like the rest
of the binaries and link to it in the installed test suite.
Google recently enhanced support for RECURRENCE-ID, so SyncEvolution
no longer needs to replace the property when uploading a single
detached event with RECURRENCE-ID. However, several things are still
broken in the server, with no workaround in SyncEvolution:
- Removing individual events gets ignored by the server;
a full "wipe out server data" might work (untested).
- When updating the parent event, all child events also get
updated even though they were included unchanged in the data
sent by SyncEvolution.
- The RECURRENCE-ID of a child event of an all-day recurring event
does not get stored properly.
- The update hack seems to fail for complex meetings: uploading them
once and then deleting them seems to make uploading them again
impossible.
All of these issues were reported to Google and are worked on there,
so perhaps the situation will improve. In the meantime, syncing with
Google CalDAV should better be limited to:
- Downloading a Google calendar in one-way mode.
- Two-way syncing of simple calendars without complex meeting
series.
While updating the Google workarounds, the alarm hack (sending a
new event without alarms twice to avoid the automatic server side
alarm) was simplified. Now the new event gets sent only once with a
pseudo-alarm.
Google seems to have changed its PHOTO rewriting. If that keeps
happening, we need to stop comparing the actual data.
However, comparing the actual data is useful to detect when we
do not properly handle it. For example, in testUpdateRemoteWins/local-synced
it was a bit surprising that only one contact had to be updated at first.
It turned out that libsynthesis did not compare the entire photo data,
only the part before embedded null bytes.
The bugs caused by introducing merging of arrays were not detected
by the automated testing. We need to repeat syncing after we have
data and check that nothing changes in the default, no-phone mode,
and we need to check that nothing got changed on the remote side.
In PBAP mode that would have triggered an error, but when using the
file backend we silently modified its data.
Removing the URL ensures that the failure to avoid writes of
incomplete contacts gets covered. Rearranging the properties
also may be relevant (but was handled already).
Adding grouping to the contact datatype broke PBAP caching: when
sending an empty URL, for example, during the sync, the parsed contact
had different field arrays than the locally stored contact, because the
latter was saved without the empty URL.
This caused the field-based comparison to detect a difference even when
the final, reencoded contact wasn't different at all.
To solve this, syncing now uses the same "don't send empty properties"
configuration as local storages. Testing shows that this resolves
the difference for EDS.
A more resilient solution would be to add a check based on the encoded
data, but that's more costly performance wise.
The new "preserve repeating properties during conflict resolution"
feature was only active when using EDS as storage. The relevant
merge script must be applied to all datatypes, not just the EDS
flavor.
The feature was also unintentionally active when running in
caching mode. This caused two problems:
- The cached item was updated even though only the
ordering of repeating properties had been modified during
merging.
- The merged item was sent back to the client side, which
was undesirable (caching is supposed to be one-way) or even
impossible (PBAP is read-only, causing sync failures with error 20030).
We must check for caching mode and disable merging when it is active.
We also must not tell the engine that we updated the photo property
in the winning item, because then that item would get sent to the
read-only side of the sync.
Perhaps a better solution would be to actually tell the engine
that the remote side is read-only when we activate caching mode.
When handling an update/update conflict (both sides of the sync have an
updated contact) and photo data was moved into a local file by EDS, the engine
merged the file path and the photo data together and thus corrupted the photo.
The engine does not know about the special role of the photo property.
This needs to be handled by the merge script, and that script did not
cover this particular situation. Now the losing side is cleared,
causing the engine to then copy the winning side over into the losing
one.
Found by Renato Filho/Canonical when testing SyncEvolution for Ubuntu 14.04.
If enabled via env variables, PullAll transfers will be limited to
a certain number of contacts at different offsets until all data got
pulled. See README for details.
When transferring in chunks, the enumeration of contacts for the engine
no longer matches the PBAP enumeration. Debug output uses "offset #x"
for PBAP and "ID y" for the engine.
Using a pipe was never fully supported by obexd (blocks
obexd). Transferring in suitably sized chunks (FDO #77272) will be a
more obexd friendly solution with a similar effect (not having to
buffer the entire address book in memory).
Passing 0.1 as delay did not work as intended because it was converted
to an integer value of 0 seconds. Found by gcc 4.9. Must use a one second
delay.
The "func" variable was correctly initialized to NULL if no comparsion
matches, but cppcheck 1.65 warns anyway. Use the more readable
intialization to NULL in the final else clause.
libphonenumber 1.6.x has the header files reorganized such that the
caller has to define how he is using the library. We want to use
boost.
Setting this doesn't hurt with older libphonenumber releases.
folks_note_field_details_new() takes an additional uid parameter.
Passing NULL is okay, but SyncEvolution wasn't doing that due to
an incorrect function type cast. Found by valgrind only after
a valgrind and tool chain update. Probably we passed a valid
value accidentally before.
Fixed by using a wrapper function.
Ideally the function typecast should get replaced entirely with just casting
the returned pointer.
The parsed number always has a country code, whereas SyncEvolution expected it
to be zero for strings without an explicit country code. This caused a caller
ID lookup of numbers like "089788899" in DE to find only telephone numbers in
the current default country, instead of being more permissive and also finding
"+189788899". The corresponding unit test was broken and checked for the wrong
result. Found while investigating an unrelated test failure when updating
libphonenumber.
EDS handles this differently, by calling ParseAndKeepRawInput() if necessary
(checked by configure) and looking at the source of the country code. Instead
of replicating that logic, let's use EPhoneNumber. This means that EDS has to
be compiled with libphonenumber support, because SyncEvolution can no longer
fall back to using libphonenumber directly.
For testing purposes it is still useful to not depend on X-EVOLUTION-E164.
testLocaledPhone uses this at the moment, because re-generating
X-EVOLUTION-E164 during a locale change seems to be broken at the moment
in the intel-work-3-12 branch.
The test itself has to be updated for the newer libphonenumber (6.1.1 instead
of 5.3.2). The "12345" string it relied upon now gets parsed consistently in
US and DE. Instead we use the "01164 3 331 6005" string (as in libphonenumber
tests) which is treated differently.
The "Fixed compilation error when using libphonenumber from revision
>= 568" patch caused a double free error because SetLogger() owns the
logger instance and, with libphonenumber >= r571 actually frees the
instance.
Old libphonenumber releases are compatible with the revised call;
however, they never free the instance.
An empty field filter is supposed to mean "return all supported
fields". This used to work and stopped working with Android phones
after an update to 4.3 (seen on Galaxy S3); now the phone only
returns the mandatory TEL, FN, N fields.
The workaround is to replace the empty filter list with the list of
known and supported properties. This means we only pull data we really
need, but it also means we won't get to see any additional properties
that the phone might support.
The new "signon" provider only depends on lib[g]signon-glib. It uses
gSSO if found, else UOA. Instead of pulling parameters and the
identity via libaccounts-glib, the user of SyncEvolution now has to
ensure that the identity exists and pass all relevant parameters
in the "signon:" username.
This does not have to be user-friendly, so the machine-readable
GVariant text dump format is used to pass all parameters.
gSSO >= 2.0 requires a list of realms to which the identity
applies. We take this list from a new "Realms" setting for the
provider in Accounts.
This is how SyncEvolution does it; it hasn't been checked what
upstream will do around this. In Tizen, libaccounts is not used
and thus a different solution is needed there.
The API of libgsignond >= 2.0 was also changed to be more compatible
with the original libsignon. This allows (and requires) removing
some gSSO ifdefs.
The calendar home set URL on iCloud (the one ending in /calendars/) is
declared as containing calendar data. That was enough for
SyncEvolution to incorrectly accept it as a calendar. However, the home
set only contains calendar data indirectly.
We must use the stricter check for leaf collections containing the
right data.
When finding a new URL, we must be prepared to reinitialize the Neon
session with the new host settings. To implement this, candidates are
now full URIs, not just paths on the initial host.
A home set on iCloud contains full URLs, not just paths. We need to
parse the individual entries, which happens to work for paths and URLs
because paths are just special URLs without an explicit host.
iCloud does not have .well-known support on its www.icloud.com
server. To support lookup with a non-icloud.com email address, we
must do DNS SRV lookup when access to .well-known URLs fails. We do
this without a www prefix on the host first, because that is what happens
to work for icloud.com.
With these changes it becomes possible to do database scans on Apple
iCloud, using syncURL=https://www.icloud.com or
syncURL=https://icloud.com. The former is a bit faster because
icloud.com redirects to www.icloud.com before we end up doing the DNS
SRV lookup to find the CalDAV resp. CardDAV hosts.
Treat a URI with an explicit port as equal to a URI where the port number
is implied by the scheme. Add a compare() operation similar to
std::string::compare and a full set of comparison operators based on it.
501 means "not implemented", in which case resending the same request
is unlikely to succeed. This is relevant for scanning iCloud, because
PROPFIND on http://www.icloud.com returns that.
Apple iCloud servers reject requests unless they contain a User-Agent
header. The exact value doesn't seem to matter. Making the string
configurable might be better, but can still be done later when it
is more certain whether and for what it is needed.
Moved into a separate function and fixed return code handling: due to
missing decoding of the shell script's return code, all errors were
treated as potentially temporary errors and thus the lookup was retried
instead of giving up immediately when there is no DNS SRV entry.
Funambol turned off the URL redirect from my.funambol.com to
onemediahub.com. The Funambol template now uses the current URL. Users
with existing Funambol configs must update the syncURL property
manually to https://onemediahub.com/sync
Kudos to Daniel Clement for reporting the change.
For performance reasons we only run selected Client::Sync
tests with Google CardDAV (each PUT/POST and DELETE take 10
seconds because of some intentional delay on the server).
Some Google-specific tests should also be run.
Traditionally, SyncEvolution only modified remote data via syncing.
This is not enough because it does not cover data on the remote side
that SyncEvolution cannot sync.
The new tests solve this by using the command line's import/update
operations which modify data more or less directly.
Now we can test:
- downloading data created remotely
- uploading data via sync, export directly, compare
- simulate conflicts with changes made remotely
The download and upload test include full round-trips, i.e.
the initial transfer plus a change that needs to be synced back.
Because this is highly dependent on the exact data stored by the
peer, all these tests depend on per-peer test data and scripts
for modifying that data. The tests only get enabled if the test
data is found.
The initial test data covers Apple Calendar server, EDS<->EDS
and Google Contacts.
When resolving a merge conflict, repeating properties were taken
wholesale from the winning side (for example, all email addresses). If
a new email address had been added on the losing side, it got lost.
Arguably it is better to preserve as much data as possible during a
conflict. SyncEvolution now does that in a merge script by checking
which properties in the losing side do not exist in the winning side
and copying those entries.
Typically only the main value (email address, phone number) is checked
and not the additional meta data (like the type). Otherwise minor
differences (for example, both sides have same email address, but with
different types) would lead to duplicates.
Only addresses are treated differently: for them all attributes
(street, country, city, etc.) are compared, because there is no single
main value.
When copying properties which may have labels, the new entries must be
added such that a unique label can be copied or set. Other properties
can be added at the end of their array.
The modern profile typically doesn't generate empty properties, and therefore
it is useless to create an IMPP, X-ABRELATEDNAMES or X-ABDATE entry which has
an empty value. It just has the effect of creating X-ABLabels which are not
attached to any property.
In principle, CardDAV servers support arbitrary vCard 3.0
data. Extensions can be different and need to be preserved. However,
when multiple different clients or the server's Web UI interpret the
vCards, they need to agree on the semantic of these vCard extensions.
In practice, CardDAV was pushed by Apple and Apple clients are
probably the most common clients of CardDAV services. When the Google
Contacts Web UI creates or edits a contact, Google CardDAV will
send that data using the vCard flavor used by Apple.
Therefore it makes sense to exchange contacts with *all* CardDAV
servers using that format. This format could be made configurable in
SyncEvolution on a case-by-case basis; at the moment, it is
hard-coded.
During syncing, SyncEvolution takes care to translate between the
vCard flavor used internally (based on Evolution) and the CardDAV
vCard flavor. This mapping includes:
X-AIM/JABBER/... <-> IMPP + X-SERVICE-TYPE
Any IMPP property declared as X-SERVICE-TYPE=AIM will get
mapped to X-AIM. Same for others. Some IMPP service types
have no known X- property extension; they are stored in
EDS as IMPP. X- property extensions without a known X-SERVICE-TYPE
(for example, GaduGadu and Groupwise) are stored with
X-SERVICE-TYPE values chosen by SyncEvolution so that
Google CardDAV preserves them (GroupWise with mixed case
got translated by Google into Groupwise, so the latter is used).
Google always sends an X-ABLabel:Other for IMPP. This is ignored
because the service type overrides it.
The value itself also gets transformed during the mapping. IMPP uses
a URI as value, with a chat protocol (like "aim" or "xmpp") and
some protocol specific identifier. For each X- extension the
protocol is determined by the property name and the value is the
protocol specific identifier without URL encoding (see the example
after this list).
X-SPOUSE/MANAGER/ASSISTANT <-> X-ABRELATEDNAMES + X-ABLabel
The mapping is based on the X-ABLabel property attached to
the X-ABRELATEDNAMES property. This depends on the English
words "Spouse", "Manager", "Assistant" that Google CardDAV
and Apple devices seem to use regardless of the configured
language.
As with IMPP, only the subset of related names which have
a corresponding X- property extension get mapped. The rest
is stored in EDS using the X-ABRELATEDNAMES property.
X-ANNIVERSARY <-> X-ABDATE
Same here, with X-ABLabel:Anniversary as the special case
which gets mapped.
X-ABLabel parameter <-> property
CardDAV vCards have labels attached to arbitrary other properties
(TEL, ADR, X-ABDATE, X-ABRELATEDNAMES, ...) via vCard group tags:
item1.X-ABDATE:2010-01-01
item1.X-ABLabel:Anniversary
The advantage is that property values can contain arbitrary
characters, including line breaks and double quotation marks,
which is not possible in property parameters.
Neither EDS nor KDE (judging from the lack of responses on the
KDE-PIM mailing list) support custom labels. SyncEvolution could
have used grouping as it is done in CardDAV, but grouping is not
used much (not at all?) by the UIs working with the vCards in EDS
and KDE. It seemed easier to use a new X-ABLabel parameter.
Characters which cannot be stored in a parameter get converted
(double space to single space, line break to space, etc.) during
syncing. In practice, these characters don't appear in X-ABLabel
properties anyway because neither Apple nor Google UIs allow entering
them for custom labels.
The "Other" label is used by Google even in case where it adds no
information. For example, all XMPP properties have an associated
X-ABLabel=Other although the Web UI does not provide a means to edit
or show such a label. Editing the text before the value in the UI
changes the X-SERVICE-TYPE parameter value, not the X-ABLabel as for
other fields.
Therefore the "Other" label is ignored by removing it during syncing.
X-EVOLUTION-UI-SLOT (the parameter used in Evolution to determine the
order of properties in the UI) gets stored in CardDAV. The only exception
is Google CardDAV which got confused when an IMPP property had both
X-SERVICE-TYPE and X-EVOLUTION-UI-SLOT parameters set. For Google,
X-EVOLUTION-UI-SLOT is only sent on other properties and thus ordering
of chat information can get lost when syncing with Google.
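As the example announced above for the IMPP mapping: an AIM handle
stored in EDS as an X- property travels to the server as an IMPP URI
(the identifier is made up):

    EDS flavor:     X-AIM:johndoe
    CardDAV flavor: item1.IMPP;X-SERVICE-TYPE=AIM:aim:johndoe
                    item1.X-ABLabel:Other   (added by Google, ignored)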
CardDAV needs to use test data with the new CardDAV vCard flavor.
Most CardDAV servers can store EDS vCards and thus
Client::Source::*::testImport passed (with minor tweaks in
synccompare) when using the default eds_contact.vcf, but
Client::Sync::*::testItems fails when comparing synced data with test
cases when the synced data uses the native format and the test cases
are still the ones from EDS.
A libsynthesis with URLENCODE/DECODE() and sharedfield parameter for
<property> is needed.
The previous, regular expression based reordering of parameters broke
parameters containing semicolons in quoted strings. The splitting must
be done using a proper parser which knows whether a semicolon
delimits a parameter (outside of quotation marks) or is part of the
parameter value (inside quotation marks).
synccompare now also normalizes the quoting of parameter values:
quotation marks are used for anything more complex than alphanumeric
plus underscore and hyphen.
Google CardDAV always adds the "Other" label to IMPP properties.
Ignore this by replacing the group of these two properties
with just the IMPP property. The normalization is intentionally
only done for two grouped properties. If we end up with more,
we need to check what that means instead of removing the label.
It's also more efficient and easier to implement this way.
Grouped properties are sorted first according to the actual property
name, then related properties are moved to the place where their group
tag appears first. The first grouped property gets a "- " prefix, all
following ones are just indented with " ". The actual group tag is not
part of the normalized output, because its value is irrelevant:
BDAY:19701230
- EMAIL:john@custom.com
X-ABLabel:custom-label2
...
FN:Mr. John 1 Doe Sr.
- IMPP;X-SERVICE-TYPE=AIM:aim:aim
X-ABLabel:Other
...
- X-ABDATE:19710101
X-ABLabel:Anniversary
Redundant tags (those set for only a single property) get removed as
part of normalizing an item.
The code checking the session URL was never used because the
session did not exist yet during syncing. Adding a contactServer()
call fixes that.
The default WebDAV code in the dead code was redundant. The
default is set before checking for a specific server.
Different backend functions had to ensure that a connection
to the server existed. That work was meant to be done only
once during the existence of the backend instance, but due to
a missing "return" it was potentially (?) repeated multiple
times. Didn't cause any noticeable problems.
The fields and their offsets were originally used for Synthesis vCard
extensions (X-CustomLabel-* and X-Synthesis-Ref). SyncEvolution never
used those. The extra fields cause unnecessary overhead (logging,
merging, memory even when empty), so better remove them.
testcase files used to be installed in src/testcases.am, using a duplicated
list of files. Better do it in test/test.am with the canonical list of files
and only install generated files in src/testcases.am. This makes it easier
to extend the list in the future.
As confirmed by Cyrus Daboo, Apple Calendar Server 5.2 should
return VTIMEZONE embedded in the item data if matching against
well-known timezones fails. This broke when Apple implemented
support for timezones via reference.
Long term we need to support that feature, but for now it and
this bug are not important because for most timezones, we should
be fine with our TZID based mapping. Ignore the issue during
testing...
When doing a recursive scan of the home set, preserve the order of
entries as reported by the server and check the first one first. The
server knows better which entries are more relevant for the user (and
thus should be the default) or may have some other relevant
order. Previously, SyncEvolution replaced that order with sorting by
URL, which led to a predictable, but rather meaningless order.
For example, Google lists the users own calendar first, followed by
the shared calendars sorted alphabetical by their name. Now
SyncEvolution picks the main calendar as default correctly when
scanning from https://www.google.com/calendar/dav/.
Zimbra has a principal URL that also serves as home set. When using it
as start URL, SyncEvolution only looked at the URL once, without listing
its content, and thus did not find the databases.
When following the Zimbra principal URL indirectly, SyncEvolution did
check all of the collections there recursively. Unfortunately that
also includes many mail folders, causing the scan to abort after
checking 1000 collections (an internal safe guard).
The solution for both includes tracking what to do with a URL. For the
initial URL, only meta data about the URL itself gets
checked. Recursive scanning is only done for the home set. If that
home set contains many collections, scanning is still slow and may run
into the internal safe guard limit. This cannot be avoided because the
CalDAV spec explicitly states that the home set may contain normal
collections which contain other collections, so a client has to do the
recursive scan.
When looking at a specific calendar, Google CalDAV does not report
what the current principal or the home set is and therefore
SyncEvolution stopped after finding just the initial calendar. Now it
detects the lack of meta information and adds all parents also as
candidates that need to be looked at. The downside of this is that it
doesn't know anything about which parents are relevant, so it ends up
checking https://www.google.com/calendar/ and
https://www.google.com/.
In both cases Basic Auth gets rejected with a temporary redirect to
the Google login page, which is something that SyncEvolution must
ignore immediately during scanning without applying the resend
workaround for "temporary rejection of valid credentials" that can
happen for valid Google CalDAV URLs.
The sorting of sub collections is redundant and can be removed,
because sorting already happens when storing the information for each
path. When scanning Google CalDAV starting at
https://www.google.com/calendar/dav/, the main calendar gets listed by
Google first, but because we reorder by sorting by path, it ends up
last in the SyncEvolution database list and thus is not picked as
default. This needs to be fixed separately by preserving the server's
order.
The previous change to the syncevo-webdav-lookup broke out-of-tree
compilation because it created a symlink to syncevo-webdav-lookup.sh
inside the current directory, which is not where the file resides when
compiling out-of-tree.
Instead of trying to be smart with symlinks (which was only done to avoid
a small file copy), simply copy the file.
syncevo-webdav-lookup.sh uses bash features (pipefail) and thus
must choose /bin/bash as shell explicitly. On systems with dash
as /bin/sh the script failed.
Server::run() still does some initialization (backend loading and file
watching). We need to wait with processing calls until that is done,
otherwise we have a race condition in TestFileNotify.testSession3:
- server starts
- test calls GetVersions()
- server answers as part of sending out log messages or loading backends (?),
sending incomplete information (no backends loaded yet)
- test touches syncevo-dbus-server
- server sets up watch
=> test fails because the server missed the file modification and
incorrect information was returned by GetVersions()
When we have attached a name to GDBusConnectionPtr in
dbus_get_bus_connection(), we must copy that name when
passing the connection on.
Otherwise requesting a name via dbus_get_bus_connection()
and later calling dbus_bus_connection_undelay() on a copy
fails to request the name.
This crept in during the conversion to the non-recursive build.
It was not a problem for "make dist", but when using the list in rules
it causes make warnings.
This takes advantage of the engine setting the UID (for new items) or
preserving it (for existing items) on behalf of backends and then
generating it as part of serialization.
The older hacks in create/setResourceName() relied on resource name matching
the UID, which is not guaranteed for items created by us via POST (depending
on the server) and/or other CardDAV clients.
The hacks could be removed; they are simply not used anymore at the moment.
Before, the UID property in a vCard was ignored by the engine.
Backends were responsible for ensuring that the property is
set if required by the underlying storage.
This change moves this into the engine:
- UID is now a field. It does not get used for matching
because the engine cannot rely on it being stored
by both sides.
- It gets parsed if present, but only generated if
explicitly enabled (because that is the traditional
behavior).
- It is never shown in the DevInf's CtCap
because the Synthesis engine would always show it
regardless of whether a rule enabled the property.
That's because rules normally only get triggered
after exchanging DevInf and thus DevInf has to
be rule-independent. We don't want it shown because
then merging the incoming item during a local sync
would use the incoming UID, even if it is empty.
- Before writing, ensure that UID is set.
When updating an existing item, the Synthesis engine reads
the existing item, preserves the existing UID unless the peer
claims to support UID, and then updates with the existing UID.
This works for local sync (where SyncEvolution never claims
to support UID when talking to the other side). It will break
with peers which have UID in their CtCap although they
rewrite the UID and backends whose underlying storage cannot
handle UID changes during an update (for example, CardDAV).
A backend must still activate the UID property by setting m_backendRule
to LOCALSTORAGE-WITH-UID instead of the default LOCALSTORAGE.
Custom backend rules must include the new sub-rule HAVE-VCARD-UID.
See "[os-libsynthesis] UID + CardDAV" for a discussion of this change.
Long term it would be better to configure profiles without depending
on the rule hack.
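In a backend's source this opt-in is a one-line setting; sketched with
a made-up class (m_backendRule and the rule names come from this
change):

    #include <string>

    class ExampleSource
    {
    public:
        ExampleSource()
        {
            // Default is "LOCALSTORAGE"; this enables
            // engine-managed UIDs.
            m_backendRule = "LOCALSTORAGE-WITH-UID";
        }

    private:
        std::string m_backendRule;
    };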
The "show" attribute of the property values is not supported
and was silently ignored by the Synthesis engine and thus had
no effect. Remove the useless attributes.
Check whether a collection is read-only by reading ACL properties.
The check is intentionally a bit fuzzy to avoid accidentally marking
a collection as read-only.
All read-only collections are moved to the end of the found databases, to
avoid picking them as default when there are read/write collections. This
needs to be done in both "list all databases" mode (aka --print-databases)
and in "pick default database" mode (aka syncing with a source which
had no explicit 'database' set).
To use a read-only collection, configure it in the 'database'
property.
This is relevant for OwnDrive, aka ownCloud, which has a read-only "birthday"
calendar which was picked as default instead of the real calendar.
The WebDAV backend determines what database to use based on 'database',
'syncURL', and (for DNS SRV discovery) 'databaseUser' respectively
'username'. The new messages help figuring out which settings were
really used.
Read-only databases are shown with a new "<read-only>" tag.
There is no explicit read/write flag, so a lack of that tag
is no guarantee that a database is really writable. It depends
on the backend whether it checks for write access.
This is relevant for WebDAV database scanning (read-only
databases should not be the default) and could also be used
to enhance automatic setups (for example, do not use two-way
syncing for read-only databases).
The test assumed that it can rename the main syncevo-dbus-server
executable to trigger the file watch mechanism. That's not correct:
- It might be the system's /usr/libexec/syncevo-dbus-server,
which a normal user cannot rename.
- The binary might also be active in some other, parallel tests.
Renaming it interferes with those other tests.
The latter happened in the nightly testing: HTTP server tests with
a long-running syncevo-dbus-server failed because the daemon terminated
during the tests.
Include the name of the file which got modified. This helped track down why
the server sometimes shut down unexpectedly during parallel testing (main
executable was renamed by D-Bus testing).
Not all exceptions thrown while executing a source operation contain
the source name. SyncSource::throwError() does that, but SE_THROW() as
used in helper code does not. It is better to add the source name as
logging prefix. The other advantage is that all lines will have the
prefix, not just the first one.
The SyncSource operations need access to the SyncSource class which
contains them to access the display name of the SyncSource instance.
A positive side effect of telling the wrappers about the instance at
construction time is that the caller no longer has to pass the source
reference.
This change allows removing the SyncSource::throwError() special
cases, which completes the transition towards having correct source
code location information for all exceptions.
This could happen when using item operations on the command line.
In that case a source without name was created and that empty
name was inserted as prefix before errors encountered while
using the source.
Now empty string is the same as no string.
Raising exceptions via throwError() loses the original source code
location information. Fix that by explicitly passing that
information as an additional parameter, created with the preprocessor
macro SE_HERE.
Several files included the complex SyncContext.h only because they needed
throwError(). A better place for the revised implementation is the
Exception class which used to be in util.h, now Exception.h.
Simplifying the include statements exposed indirect include
dependencies and removed "using namespace std" from the compilation of
some source files which happened to rely on it. We want to get rid of
that name space pollution, so fix the code instead of adding it back.
Additional databases were not found for several
reasons. SyncEvolution ignored all shared calendars
(http://calendarserver.org/ns/shared) and Google marks the additional
calendars that way. The other problem was that the check for leaf
collections (= collections which cannot contain other desired
collections) incorrectly excluded those collections instead of only
preventing listing of their content.
With this change,
https://www.google.com/calendar/dav/?SyncEvolution=Google can be used
as starting point for Google Calendar.
The WebDAV backends contained a hack where the UID inside the data was forced
to be identical to the resource name. This is wrong for items created by us
via POST (because the server may choose a resource name != UID) or by some
other entity (where we have no idea how the resource name got chosen).
This commit removes the hack. Testing must be updated to pass correct data
with the same UID as on the server when updating an item, because the backend
will no longer ensure that and changing the UID of a resource gets rejected by
some servers.
The hack was introduced for peers which do not store the UID (for example, a
vCard or iCalendar 1.0 based SyncML client). A better solution must be found,
probably involving the Synthesis engine and its read/update/write cycle.
PUT has the disadvantage that a client needs to choose a name and then
figure out what the real name on the server is. With Google CardDAV that
requires sending another request and only works because the server happens
to remember the original name (which is not guaranteed!).
POST works for new items without a name and happens to be implemented
by Google such that the response already includes all required
information (new name and revision string).
POST is checked for as described in RFC 5995 once before creating a new
item. Servers which don't support it continue to get a PUT.
The 403 status code must be checked to detect UID conflicts when adding
an item that already exists on the server.
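For illustration, the rough shape of the RFC 5995 exchange, with
invented URIs; the client reads the DAV:add-member property of the
collection and POSTs new items to that URI:
PROPFIND on the collection -> DAV:add-member = /calendars/user/default/;add-member/
POST /calendars/user/default/;add-member/ HTTP/1.1
Content-Type: text/calendar
...
HTTP/1.1 201 Created
Location: /calendars/user/default/4ef33a.ics
A 403 in response to the creation attempt signals a UID conflict with
an existing item.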
The lifetime and content of the member were not defined. It got used
in multiple places. It's cleaner to bind the openPropCallback() to
a local Props_t instance.
The "addressbook" source was written for the original iPhone and
Mac OS X. It hasn't been in use for a long time; keeping the code
around without actually compiling it makes no sense anymore.
The "--update <dir name>" operation was supposed to take the
item luids from the file names inside the directory. That part
had not been implemented, turning the operation accidentally
into an "--import".
Also missing was the escaping/unescaping of luids. Now the
same escaping is done as in command line output and command
line parsing to make the luids safe for use as file name.
When running Akonadi with the reference home dir, it creates
a socket symlink that we must not copy. Otherwise all Akonadi
instances end up sharing the same Unix domain socket name and
thus Akonadi instance.
When testing for specific output of shell tools, we need to ensure that the
system locale (typically engineering English) is active, otherwise we end
up with mismatches when the error messages get translated.
The macro was supposed to help Clang scan-build detect that error
paths are not taken. However, it caused the expression to be
evaluated twice, which is not okay for some more recent checks.
Instead we now enable either the real assert or our simplified check,
depending on a custom CT_ASSERT_SIMPLE that has to be set when running with
scan-build.
The same applies to "no throw" asserts. To keep the wrapper macros
simple, we pass the same statement twice.
The --shell value used for Clang scan-build builds includes
scan-build and an additional wrapper script. These extra commands
break result checking. Instead of trying to filter them out,
use the new, optional --simple-shell.
When aborting, our AbortHandler gets called to close down logging.
This may involve memory allocation, which is unsafe. In FDO #76375, a
deadlock on a libc mutex was seen.
To ensure that the process shuts down anyway, install an alarm and give
the process five seconds to shut down before the SIGALRM signal will kill
it.
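A minimal sketch of the guard, assuming a plain signal handler (the
real handler also performs the logging shutdown):
#include <signal.h>
#include <unistd.h>

static void AbortHandler(int sig)
{
    // If shutting down logging deadlocks (it may allocate memory,
    // which is unsafe here), the pending SIGALRM kills the process.
    alarm(5);
    // ... close down logging ...
    _exit(1);
}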
The information that the sync is freezing is now also handed down to
the transport and all sources. In the case of PBAP caching, the local
transport notifies the child where the PBAP source then uses Bluez
5.15 Transfer1.Suspend/Resume to freeze/thaw the actual OBEX transfer.
If that fails (for example, not implemented because Bluez is too old
or the transfer is still queueing), then the transfer gets cancelled
and the entire sync fails. This is desirable for PBAP caching and
Bluetooth because a failed sync can easily be recovered from (just
start it again) and the overall goal is to free up Bluetooth bandwidth
quickly.
When using GIO, it is possible to avoid the DBusServer listening on a
publicly accessible address. Connection setup becomes more reliable,
too, because the D-Bus server side can detect that a child died
because the connection will be closed.
When using libdbus, the traditional server/listen and client/connect
model is still used. GDBus GIO mimics the same API to minimize
changes in ForkExec. The API gets simplified to support both
approaches:
- Choosing an address is no longer possible (was only used in the
test example anyway)
- The new connection callback must be specified already when starting
the server.
The GDBus GIO implementation uses SyncEvolution utility code. Strictly
speaking, this creates a circular dependency
libsyncevolution<->libgdbus. This is solved by not linking libgdbus
against libsyncevolution and expecting executables to do that instead.
The GDBus GIO example no longer linked because of that; it gets removed
because it was of little value.
Now that we started using CLOEXEC, we might as well use it for the
permanent file descriptors created for each ForkExec instance.
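Setting the flag on an already open descriptor is a two-step fcntl();
a minimal sketch:
#include <fcntl.h>

static void markCloseOnExec(int fd)
{
    int flags = fcntl(fd, F_GETFD);
    if (flags != -1) {
        fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
    }
}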
g_clear_object() unnecessarily creates a dependency on a more recent glib
version. It probably isn't used correctly here anyway (g_clear_object()
checks for NULL itself, so the code around it doesn't need to and
shouldn't do that), so we can use the older g_object_unref() instead.
The code is a verbatim copy of the corresponding files from gSSO. The
only difference is that we compile the implementation as C++ code,
because that is what we set compile flags for in libsyncevolution.
This helper class will be used for D-Bus over plain files in
combination with g_dbus_connection_new().
Currently implements the new API by no longer consuming data on the
local side of the sync. The Bluetooth transfer on the remote side
continues to run and isn't even notified about the suspend; a new
obexd API is needed to also suspend that and then notify the remote
side when we want it to do that.
This adds GetPeerStatus() and "progress" events.
To detect DB changes as they happen, the SyncSource operations are
monitored. Upon entry, a counter gets increased and transmitted
through to the PIM manager in syncevo-dbus-server using extended
SourceProgress structs (only visible internally - public API must not
be extended!). This will count operations which fail and count those
twice which get resumed, so the counts will be too high
occasionally. That is in line with the API definition; because it is
not exact, the API only exposes a "modified" flag.
Progress is reported based on the "item received" Synthesis event and
the total item count. A modified libsynthesis is needed where the
SyncML binfile client on the target side of the local sync actually
sends the total item count (via NumberOfChanges). This cannot be done
yet right at the start of the sync, only the second SyncML message
will have it. That is acceptable, because completion is reached very
quickly anyway for syncs involving only one message.
At the moment, SyncContext::displaySourceProgress() holds back "item
received" events until a different event needs to be emitted. Progress
reporting might get more fine-grained by allowing held back
events to be emitted at a fixed rate, every 0.1s. This is not done yet
because it seems to work well enough already.
For testing and demonstration purposes, sync.py gets command line
arguments for setting progress frequency and showing progress either
via listening to signals or polling.
The code was meant to check whether a contact really has contact
data; because the if check was missing, the sync result checking
in testSync failed for test data where one or more contacts had
no photos.
The function now keeps the event loop running while waiting for
completion. That ensures that events get received immediately.
testpim.py now gets matched by the po scanning process because of the _(' text
pattern. We need to exclude it explicitly.
The new API and flag grant control over the PBAP sync mode. It is
implemented by setting the SYNCEVOLUTION_PBAP_SYNC env variable in the
forked helper process.
The main advantage is that processed data can be discarded
immediately. When using a plain file, the entire address book must be
stored in it.
It also enables suspending a transfer by no longer reading from the
pipe, either via some internal API or simply freezing the
syncevo-local-sync process with SIGSTOP.
The drawback is that obexd does not react well to a full pipe. It
simply gets stuck in a blocking write(); in other words, all obexd
operations get frozen and obexd stops responding on D-Bus.
SyncEvolution was meant to load the syncecal or syncebook shared object
which uses the most recent libraries (libical, libecal/libebook) on
the system and then stop loading further backends. Due to a string
handling bug the check for already loaded backends always found nothing,
leading to multiple conflicting backends loaded on some systems (for
example, those with libical0 and libical1 installed).
If that happened, the backend became unusable.
In testAutoSyncNoNetworkManager, syncs keep happening while the test
terminates. The processes get killed correctly, but that occasionally triggers
notifications which the post-test D-Bus log checking treats as test failures.
To avoid this, truncate the D-Bus log before shutting down. Then all extra
events are available for debugging, but will not be seen by the check code.
Check for an already running instance. Otherwise the new one
will fail to start up, which will be hard to diagnose. Happened
recently when a test was broken and failed to kill its daemon
instance.
Sometimes time stamps are necessary to understand timing-dependent
issues. A separate flag for timestamps on/off might be useful.
For now, enable them in debug mode.
Occasionally in the nightly testing reading the existing akonadi.db
produced incomplete output which could not be used to create the
modified akonadi.db. This patch works around that by using pre-generated
akonadi.db.dump files as input if found.
"evolution" tests used to be a traditional name for all Client::Source
and SyncEvolution unit tests. This caused problems when adding
Akonadi (crash in Akonadi client lib when EDS was used before) and
was not nice in the test reporting (multiple different tests all mixed
into one summary line). It also prevented running these tests in parallel.
All of this gets fixed by splitting "evolution" into "eds", "kde",
"file" and "unittests". To avoid updating the nightly test run scripts,
--enable evolution=.... is still understood.
More recent distros (for example, Ubuntu Saucy) rely on
XDG_RUNTIME_DIR. Each time dbus-session.sh runs, it must
ensure that the runtime dir exists and is empty.
Not doing so may have caused random concurrency issues in
parallel nightly testing.
It also was a problem when trying to run activesyncd + SyncEvolution
on a headless Ubuntu Saucy server (see FDO #76273).
Better assume that the program may change directories. In that
case we need an absolute path for the output file.
Nightly testing failed once because valgrind was not able to
write its output file in the new directory. It remained unclear
why the directory was changed and to what, though. But that is
a different problem.
Nightly testing uses a "Client_Sync_Current" symlink to point
the server to a location where it shall write its per-test logs
without having to reconfigure the server.
When the server-side session runs longer than a specific test, it ends up
using the "Client_Sync_Current" link of the next test. "permission denied"
errors were also seen, although it is less certain whether that had the same
root cause.
To avoid such issues, the sync session now tries to resolve the log dir
path to an absolute path at the start of the session.
When spawning the dlt daemon under "unbuffer", a lingering tclsh process
already exists before the test starts, and the test ended up waiting for
it to stop.
We need to be more specific with identifying sync processes: only those
processes started as part of syncing matter.
Specifying the language of the email address did not make much sense
to start with, even if EDS does (did?) it that way originally. Akonadi
strips LANGUAGE=en. Instead of filtering that out for Akonadi, better
simplify the test data.
The KDE Notes resources store items under a different MIME type than the one
used in Akonadi (see "[Kde-pim] note format"). SyncEvolution used the same type
as Akonadi and thus did not find existing KDE Notes resources.
To support both while KDE and Akonadi transition to the same type,
SyncEvolution now looks for notes resources using both MIME types and accepts
both kinds of items when reading. When writing, SyncEvolution picks the MIME
type that is supported by the resource, which hopefully avoids confusing the
KDE app using the resource (untested).
As a positive side effect, the "database" value used for opening a resource is
now checked more thoroughly. Non-existent resources and type mismatches
like pointing a "kde-contacts" backend to a calendar resource are now detected
early.
Akonadi resources do not enforce iCalendar 2.0 semantics like
"each VEVENT must have a UID" (see "[Kde-pim] iCalendar semantic").
When receiving an event from a peer which itself does not enforce
that semantic (Funambol, vCalendar 1.0 based phones), then we
need to generate a UID, otherwise KOrganizer will ignore the
imported event.
The tests depend on the source re-adding UID and/or RECURRENCE-ID during an
update. The Akonadi source and file source do not do that and thus we cannot
run the test against them.
Akonadi and Akonadi resources do not enforce iCalendar 2.0 semantics
like "UID+RECURRENCE-ID must be unique in a collection". Therefore we
must disable all tests expecting that behavior.
Regular syncing should work okay as long as the peer behaves. A misbehaving
peer will be able to send us invalid sets of items and syncing will not be
able to detect that because the sync engine itself is also agnostic to the
special iCalendar 2.0 semantics.
See "[Kde-pim] iCalendar semantic".
Under heavy load while testing, syncing occasionally failed with a D-Bus
message timeout, for example in the Close() call. Let's disable these timeouts
by explicitly passing something very large (infinite is not supported by the
Python bindings).
If HTTP processing takes too long, the client may resend, but in contrast to a
failed D-Bus method call, that case is handled.
The increased timeout has no negative effect on detecting when
syncevo-dbus-server dies, because then the D-Bus daemon will notify
us immediately.
The previous solution left the "sleep 60" shell command running despite
killing the shell that spawned it. That lingering process caused schroot to
wait during session shutdown, which slowed down result checking considerably.
Now a single Perl process does the sleep and the killing. Killing that
process allows schroot to proceed immediately.
filekde covers running Akonadi in HTTP server mode, which was broken in
SyncEvolution 1.4 (see FDO 75672). kdekde and filefile get added for
the sake of completeness.
When cloning the home dir template, take care of adapting
some hard-coded paths in akonadiserverrc and the akonadi.db
used for sqlite. This only works when Akonadi was configured
to use sqlite as backend in the home dir template.
When used as storage in a server, Akonadi got called in a background thread
that gets created to handle slow initialization of sources and to prevent
ensuing timeouts in HTTP clients (probably not needed for Akonadi itself,
but may still be useful when combining it with other sources).
Akonadi cannot be used like that, leading to false "Akonadi not running"
errors or (if one got past that check) failing item operations.
This commit avoids the situation by shifting the work back into the main
thread.
This is needed to move one-time operations from AkonadiSyncSource into the
main thread if (and only if) the source is used in a background
thread. Akonadi can't be used in a thread unless the thread was created with
QThread, which is not the case in SyncEvolution.
The basic functionality is the same as the older GRunWhile(). It just has to
be extended so that the action is guaranteed to be called only in the main
thread and there runs only once. This is achieved by turning the action
into a check which immediately returns false.
AkonadiSyncSource cannot include GLibSupport.h due to conflicting usage of
"signal" (a define in Qt, parameter name in glib), therefore these functions
are now made available via util.h.
Writing a test that ensures that libsynthesis gets the right
timezone was hard, because the original problem only occurs
when syncing a vCalendar 1.0 based device with SyncEvolution.
Emulating libsynthesis at least comes close.
libical 1.0 started to return VTIMEZONE definitions with multiple
absolute transition times instead of RRULEs. This causes problems when
exchanging data with peers (see
https://sourceforge.net/p/freeassociation/bugs/95/).
In SyncEvolution, this affected sending an event using New Zealand
time in vCalendar 1.0 format to a phone, because the internal,
outdated definition of the time zone in libsynthesis was used as
fallback when loading RRULE-based timezone definitions from libical
failed (see "[SyncEvolution] Some events showing wrong time on
phone"). It might also affect exchanging data with CalDAV peers (not
tested).
The workaround is to include the original code from libical from
before the change in SyncEvolution and override
icaltimezone_get_component() with a version that uses the original
timezone loading code. This does not fix cases where other code causes
libical itself to load a timezone, but for libsynthesis it is good
enough because it does the loading early when no other code should
have used libical.
The downside is that now we need to maintain the RRULE heuristics and
ensure that they actually work. Copying libical/src/test/timezones.c
would be useful for that.
Long-term it would be good to enhance libical itself such that it can
return a VTIMEZONE with suitable RRULEs matching a specific event,
point in time or time range.
Check for icaltimezone_get_component() before actually loading libical.
That way our own code uses our own copy of that method from the copied
icaltz-util.c instead of always using the one from libical.
For that we must not overwrite already found functions.
gcc 4.4 complains about "num_types" being used potentially uninitialized.
This probably refers to bailing out before num_types gets set. However,
the use is conditional on "types", which is only set after "num_types",
so this is a false warning.
Prevent the warning anyway for the sake of clean -Wall -Werror compilation.
When EFREAD() does a "goto error", the code leaks the "r_trans"
memory. Found with cppcheck. Adding cleanup code fixes that.
To avoid a false cppcheck warning, the free() call must use the same
variable name as the calloc() assignment. It is a bit more readable
that way, too.
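A hedged sketch of the bug class (simplified; EFREAD() is the macro
from the copied icaltz-util.c that jumps to the error label on short
reads):
#include <stdio.h>
#include <stdlib.h>

static void *parse(FILE *f, size_t num_trans)
{
    char *r_trans = (char *)calloc(num_trans, 4);
    EFREAD(r_trans, 4, num_trans, f);   /* "goto error" on failure */
    /* ... */
error:
    free(r_trans);   /* added cleanup; using the same variable name as in
                        the calloc() assignment keeps cppcheck happy */
    return NULL;
}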
Can be compiled separately when linking against libical. For this
to work it must use header file locations as in libical client
programs.
icaltzutil_get_zone_directory() can come from either libical or
the icaltz-util.c copy, depending on DISABLE_ICALTZUTIL_GET_ZONE_DIRECTORY.
We have to use singleton methods to ensure that the global std::string
instances are initialized before use. There was some non-determinism before
because SyncSource loading happens as part of the executable initialization,
too, and can cause a call to EDSAbiWrapperInit().
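The usual fix is a function-local static, which C++ constructs on
first use; a sketch with an invented name:
#include <string>

static std::string &getWrapperDebugInfo()
{
    // safe even when called while other translation units are still
    // being initialized, unlike a global std::string instance
    static std::string info;
    return info;
}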
On some platforms, LDFLAGS is used during configure to set rpath
options. runtests.py must pass the -no-install flag to the linker
of client-test without overwriting those flags.
This fixes an issue where configure fails to find Akonadi when
test programs do not compile because QString is not found:
checking for Akonadi/Collection... no
configure: error: akonadi.pc not found. Install it to compile with the Akonadi backend enabled.
...
configure:21857: checking Akonadi/Collection presence
configure:21857: g++ -E -I/usr/include/ -I/usr/include//KDE -I/usr/include/qt4 conftest.cpp
In file included from /usr/include//KDE/Akonadi/../../akonadi/collection.h:25:0,
from /usr/include//KDE/Akonadi/Collection:1,
from conftest.cpp:44:
/usr/include/akonadi/entity.h:24:19: fatal error: QString: No such file or directory
#include <QString>
Add Replaces: and Breaks: lines for older syncevolution-libs versions to
avoid file conflicts with new syncevolution-libs-gnome and
syncevolution-libs-kde packages.
Thanks: Simon McVittie
Closes: #739662
The libsynthesis_3.4.0.47+syncevolution-1-3-99-7-20140203 tag
accidentally matched the check for a non-exact-tag "git-describe"
output (-<number of changes>-g<hash>). Therefore the actual tags
weren't even checked.
Reversing the check such that we look at tags ourselves and proceed with
them if one matches avoids this problem.
The test randomly failed under load because the client's GetVersion
call did not make it to the server in time. It seems to work better
with a smaller delay.
We must use the temporary file that was created for us securely, not
a temp file named after that file. The old behavior caused a temp file
vulnerability, and the real temporary files were not deleted by the script.
We need the right file suffix for the compiler. "mktemp --suffix"
would be useful, but it is not yet available in Ubuntu Lucid. So instead
create a temp dir (which works in Ubuntu Lucid) and create files
inside that.
With DAViCal, the "EST/EDT" VTIMEZONE gets renamed by e_cal_check_timezones()
because DAViCal reorders properties and thus breaks the simple string
comparison even though the semantics of the timezone did not change.
If this happens multiple times (for example, in testTwinning), the test
failed because the " 1" suffix was only stripped once. We also need to
ignore multiple repeated suffixes (" 1 1") because e_cal_check_timezones()
is not intelligent enough to bump an existing suffix.
Long term, it would be better to teach e_cal_check_timezones() how to
normalize timezones before the comparison and thus avoid the duplicated
timezones in the first place.
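Stripping any number of trailing numeric suffixes in one pass can look
like this (a sketch, not the actual implementation):
#include <regex>
#include <string>

// "EST/EDT 1 1" -> "EST/EDT", "EST/EDT 1" -> "EST/EDT"
static std::string stripTZIDSuffix(const std::string &tzid)
{
    static const std::regex suffix("( [0-9]+)+$");
    return std::regex_replace(tzid, suffix, "");
}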
The test relied on fixed timeouts for a) killing syncevo-dbus-server
and b) completion of the sync. The second timeout sometimes occurred
too soon, causing testNoParent itself to fail and also the following
test (because log files from testNoParent were not removed).
Now be more careful about really killing in the middle of the sync (thanks
to delaying the sync and not the helper startup) and really wait for process
termination (with getChildren() and os.kill() checking).
Google CalDAV server does not handle \\\n properly. It adds
an extra backslash. Avoid this aspect of the test case because
a fix on the server side has been slow in coming.
eds_event.ics.googlecalendar.tem.patch is used with
Client::Sync::eds_event and Google CalDAV as server. google_event.ics
is used for Client::Source::google_caldav.
Google CardDAV preserves X- properties when they are not included
in an update. Perhaps sending them as empty properties would work.
In the meantime, ignore the failure by stripping X- props before
comparison.
Disabling the Akonadi test in the backend which used to run
as part of "client-test SyncEvolution" and fail unless Akonadi
was started first. The test is not important enough to justify
running Akonadi.
glib 2.39.0 (aka GNOME 3.10) as found in Ubuntu Trusty introduces
warnings when g_source_remove() is passed an unknown tag. SyncEvolution
did this in two cases: in both, the source callback returned false and thus
caused the source to be removed by the caller. In that case, the explicit
g_source_remove() is redundant and must be avoided.
Such a call is faulty and might accidentally remove a new source with the same
tag (unlikely though, given that tags seem to get assigned incrementally).
The only noticeable effect was additional error messages with different
numbers:
[ERROR] GLib: Source ID 9 was not found when attempting to remove it
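The faulty pattern, roughly (GLib API, simplified):
#include <glib.h>

static gboolean onTimeout(gpointer data)
{
    /* do the work ... */
    return FALSE;    /* FALSE already removes the source */
}

static void example(void)
{
    guint tag = g_timeout_add(1000 /* ms */, onTimeout, NULL);
    (void)tag;
    /* Once onTimeout() returned FALSE, the tag is stale; calling
       g_source_remove(tag) then triggers the glib >= 2.39 warning. */
}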
libical 1.0 broke the ABI, leading to libical.so.1. The only relevant change
for SyncEvolution is the renumbering of ICAL_*_PROPERTY enum values. We can
adapt to that change at runtime, which allows us to compile once with
libical.so.0, then patch executables or use dynamic loading to run with the
more recent libical.so.1 if we add 1 to the known constants. Done without
changing all code via define tricks.
This new mode is enabled for --enable-evolution-compatibility or (limited to
ical) for --enable-evolution-compatibility=ical.
Testing on one platform can only be sped up further by parallelizing
it. Each action started by runtests.py may potentially run in parallel
to other actions, if it either does not need files in the home
directory (like checking out source) or can be run in its own, private
home directory.
The new --home-template parameter specifies the location of a home
directory that runtests.py can copy to create the private home
directories of each test. Each action is run in a fork of the main
runtests.py, so env and working directory changes are confined to that
fork and do not affect other actions.
When --home-template is given, runtests.py will also set up a new home
directory and point to it with HOME,
XDG_CACHE/CONFIG/DATA_HOME. Because test-dbus.py and testpim.py use a
non-standard layout of the XDG dirs without directories hidden by the
leading dot, runtests.py must move the standard directories to conform
with the other scripts' expectation.
testpim.py itself must be more flexible and allow running with a root
for the XDG dirs that is not called "temp-testpim". To allow parallel
tests, GNOME keyrings must be located in XDG_DATA_HOME, which is
supported since gnome-keyring 3.6. On older distros, parallel testing
does not work because gnome-keyring-daemon would always look in the
home directory as specified in /etc/passwd, which we cannot override.
testpim.py must not delete the keyrings when cleaning up the XDG dirs
for a test.
Locking Murphy resources and allocating jobs from GNU make jobserver
gets moved into a separate script which wraps the actual execution of
the action. Some change would have been necessary anyway (we cannot
connect to D-Bus and then fork) and the new approach is cleaner. It
ensures that cut-and-paste of the action command line into a shell
will only run with the necessary Murphy resource locked. Previously,
it might have conflicted with a running test.
As a new feature, test names as passed to resources.py can be mapped
to actual resource names via RESOURCE_<test name> env
variables. Useful for tests with different names which use the same
resources (currently DAViCal for the DAV server tests).
Use the new wrappercheck.sh option --wait-for-daemon-output to ensure
that the syncevo-http-server is really running. This solves both a
timing issue (tests might start too soon) and makes it more obvious when
the HTTP server is not working properly.
Diagnosing that was hard. Now we detect it and also write more debug
output from the HTTP server side into the syncevohttp.log.
When starting a daemon via wrappercheck, optionally wait for the
daemon to write a certain message to its log file. The message is
searched for with "grep -e", so regular expressions matching one line
are acceptable. Only works in combination with writing that log file.
This will be used to wait for syncevo-http-server to confirm that it
is running and ready to accept connections.
Don't assume that each server actually has an entry in the result
list. In case of a failure to write that, treat the server as failed
to highlight the problem instead of failing in the resultchecker.py
script.
On current Debian Testing, boost libraries are installed in /usr/lib/<arch>
directories where they are not found by the Boost m4 macros. This breaks the
boost-locale detection because it only tries linking against libs that it
finds in the file system first.
To fix this, at least try linking with -lboost-locale and use that if it
works.
At least on OpenSUSE, ldconfig must be run after installing
SyncEvolution to update the shared library information cache.
Do that automatically as part of the rpm's post and postun steps.
The syncevolution-bundle description was wrong. It incorrectly
used the description of a meta package.
Sometimes, on a more recent Linux distro, the helper is said to terminate
with SIGKILL although we only sent a SIGTERM. Not sure why that happens.
It randomly breaks tests, so let's ignore the unexpected signal. Everything
else seems to work correctly.
When timing out, SyncContext nevertheless sometimes resent a message despite
being close to the final timeout deadline. Relax the math further so that more
time must remain before the deadline when attempting a resend.
The earlier behavior randomly broke some of the resend tests.
When testLocaledPhone was run after testLocale, it used the localed installed
by the earlier test. We must ensure that daemons on the system bus get removed
after each test.
Picking a port dynamically was broken: if 9999 was in use, the test script
connected to the wrong process and then incorrectly continued, instead of
detecting that syncevo-http-server failed. Use the auto-alloc feature
of syncevo-http-server (port == 0) instead and get the actual port
from the output.
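The auto-allocation boils down to the standard bind-to-port-0 trick;
a sketch in plain sockets terms (not the actual syncevo-http-server
implementation):
#include <sys/socket.h>
#include <netinet/in.h>

static int listenOnFreePort(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(0);      /* 0 = kernel picks a free port */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));
    socklen_t len = sizeof(addr);
    getsockname(fd, (struct sockaddr *)&addr, &len);
    return ntohs(addr.sin_port);   /* the port to report in the output */
}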
Now that we redirect that output into a file, it makes sense to also include
that log in test failure reports.
When an assert triggers, the following code does not get executed.
clang's scan-tool could not detect that because CppUnit's templates
are not annotated with the "noreturn" attribute. We already use
wrappers around CppUnit, so we can fairly easily add a fake "noreturn"
call to exit() to the assertion failure path.
While at it, avoid unnecessary inlining and update the macros to
handle parameters with commas (the necessary brackets were missing).
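A hedged sketch of the trick (macro names invented; the real wrappers
around CppUnit differ):
#include <cppunit/TestAssert.h>
#include <cstdlib>

#ifdef __clang_analyzer__
/* tell scan-build that the failure path never continues */
# define SE_ANALYZER_NORETURN() exit(1)
#else
# define SE_ANALYZER_NORETURN() do {} while (false)
#endif

#define CT_ASSERT(cond) \
    do { \
        if (!(cond)) { \
            CPPUNIT_ASSERT((cond)); \
            SE_ANALYZER_NORETURN(); \
        } \
    } while (false)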
With this and the preceding changes, a build with most SyncEvolution features
enabled passes scan-build without errors. There are still errors in
libsynthesis, though. See https://bugs.freedesktop.org/show_bug.cgi?id=73393
clang's scan-tool fails to detect that the variable is bound
to a boost::lambda expression and indeed does get read later on.
Hide the write via ifdefs to suppress the warning when doing
static code analysis.
Control flow analysis from clang's own C++ compiler and clang's
scan-tool come to different results: the compiler fails to detect
that the variable will be initialized in all cases and thus requires
a redundant initialization to avoid an "uninitialized memory read"
error with -Wall, while scan-tool complains about the redundant write.
To satisfy both, avoid the initialization when doing static code analysis.
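The compromise looks roughly like this (simplified):
static int compute(bool a)
{
#ifdef __clang_analyzer__
    int result;        // scan-build proves that every path assigns it
#else
    int result = 0;    // redundant for the analyzer, needed for -Wall
#endif
    if (a) { result = 1; } else { result = 2; }
    return result;
}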
The intermediate "msg" variable caused a warning from clang's scan-build about
setting but not reading it. To get rid of the warning we are now writing into
the final buffer directly.
From scan-build: size argument is greater than the free space in the
destination buffer, for the line with strncat().
This might be a false positive in scan-build, the size looks right
at first glance. To be on the safe side and get rid of the warning,
allocate one more byte...
clang's scan-tool complains about keeping a local address stored
in a global structure, the argument description. This is not a real
problem because the structure is not going to be used after leaving
main(), but perhaps it is cleaner nevertheless to allocate the
structure on the stack.
Allow marking methods and functions as "does not return". This may
help static code analysis tools to determine that certain code paths
will not be taken.
The new runtest.py --cppcheck flag can be used in combination with
sources checked out earlier. The source checkout actions then
run cppcheck on each source tree and fail if cppcheck finds any
non-suppressed problems.
A wrapper around cppcheck must be used because --error-exitcode caused
non-zero exit codes also when errors were suppressed, at least in cppcheck
1.61.
We check the SyncEvolution source a bit more thoroughly than the rest because
we can fix or suppress issues inline, which might not be acceptable for
libsynthesis or activesyncd.
The remaining warnings are either not valid or refer to code which
is intentionally broken. Suppress these warnings.
SyncEvolution now passes scanning with cppcheck 1.61 without warnings.
The log file guard instance which should have added a "copy" part to log files
was deleted again before it could have the desired effect. Found by cppcheck
("Instance of 'SyncPrefix' object is destroyed immediately.").
cppcheck warned about this. This wasn't actually a problem (one member
was not used at all, the other was set later), but for the sake of
clean cppcheck scans let's fix it.
When configuring a local sync on the target side, the code in
SyncConfig::getSyncSourceNodes() created redundant ConfigNode instances. This
was pointed out by cppcheck as redundant variable assignments. The code which
created these additional instances may have had side-effects, but it doesn't
look like these were desired and/or relevant.
In this case it is fairly obvious that new_active and old_active
will both be set before use. There is no need for setting old_active
to FALSE. cppcheck warned about this.
cppcheck cannot know that one of the two if checks must succeed
and warns about the case when both are false. This shouldn't
happen, but it doesn't hurt to initialize the const char * pointer
to a default value.
cppcheck complained about "con" not being initialized in the constructor.
In general it is better to fully initialize all members, even if (as in this
case) they are not used outside of a second initialization step (open/close).
This warning pointed to a second potential problem: should SyncEvolution ever
fail to call close() after open(), resources would leak because all allocated
resources get freed in close(). For the sake of defensive programming it is
better to initialize in the constructor, then free in both close() and the
destructor.
That way we can initialize m_accessMode immediately in the constructor,
which keeps cppcheck happy and avoids uninitialized data in the instance
between creation and open(), which is better than before.
cppcheck correctly identified some places where the weaker empty()
call can be used instead of size() to determine whether an STL
data structure contains something.
None of them were really performance critical, but let's fix them
anyway for the sake of clean cppcheck scan results.
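The typical change, for illustration (container name invented):
#include <vector>

static bool haveSources(const std::vector<int> &sources)
{
    // before: return sources.size() > 0;
    return !sources.empty();   // the weaker check is always cheap
}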
cppcheck complains about sub-optimal performance (std::string should be passed
via const reference, not by value). It is not critical here, but for the sake
of getting clean scan results let's fix the issues...
One of the offending methods was never implemented, nor called. Removed.
The Collection class did not initialize all its members. Even if the code
never relied on them being initialized (not checked), it is better to be
safe and predictable, which means initializing all members in the constructor.
Renaming the backends now needs to be done before calling make.
This is used to include two versions of syncecal.so:
- one for EDS >= 3.6 < 3.10
- one for EDS >= 3.10 with the libecal-1.2 soname patched
The second flavor became necessary because EDS 3.10 accidentally
changed the soname. The API and ABI actually is the same, so
we don't need to compile twice.
The package meta data must include additional alternatives for
libecal and libebook when including extra backends. This must
be passed in via EXTRA_BACKENDS_ECAL/EBOOK_REQUIRES; the makefile
does not attempt to determine what those are.
kdebase-runtime became kde-runtime in Debian Wheezy. Accept both
as prerequisite of syncevolution-kde to allow installation on
newer distros without pulling in the transitional kdebase-runtime
package.
"dist" must complete before being able to install and test packages on
different platforms. By running "make distcheck" separately, we can
start with parallel testing sooner and separately enable/disable
distcheck testing.
When run with an existing resultdir, do not remove the resultdir
and append to output.txt. This is useful for splitting compilation and
testing into two separate phases where testing on other platforms
can be started in parallel once compilation is complete.
Depends on the enhanced resultchecker.py.
Compilation and actual testing will be split into two runtest.py invocations
to support parallelizing of testing. resultchecker.py must handle the
resulting output.txt, which will have multiple entries for each action:
first compilation followed by skipped testing, then skipped compilation
followed by testing. resultchecker.py must report the non-skipped
results and only fall back to "skipped" if none are found.
In addition, make it clear that the initial steps are about the source code by
renaming the step in the report with a "-source" suffix.
When testing pre-compiled binaries it may happen that we cannot
determine library versions with pkg-config because development
files are not installed. Ignore these errors and skip adding
library version numbers.
When already started with D-Bus env variables, do not kill those
daemons. When killing daemons, report it on stderr so that it can
be tracked down who killed which processes.
Compile jobs are not timing sensitive, so run them with lower priority than
test jobs. This becomes relevant when multiple runtests.py instances are
active.
Reordering the command line just cannot cope with the complex command line
setup for checking out source code. Disable the reordering for that command.
Both env variables might be set multiple times, once as part of the shell
prefix and again for the specific test. The second instance used to overwrite
the previous one unless one was very careful about embedding the environment's
current value of these variables.
The problem with that was that LD_LIBRARY_PATH had to be set in the
environment of runtest.py, where it may break running other commands outside
of the test shell.
Better be a bit smarter about the final command and merge the two values
into one.
When multiple different runtests.py instances are active, they must coordinate
access to shared resources like accounts on servers. We can use Murphy
(https://01.org/murphy/) for that.
If runtest.py is started with a D-Bus session address set in the environment,
it expects Murphy to be running there and will lock a resource named after
each operation before running the operation. Some operations (like "compile")
can be granted to each instance, but some (like "memotoo") must be exclusive.
Here's a complete Murphy lua config:
m = murphy.get()
-- try loading console plugin
m:try_load_plugin('console')
-- load the native resource plugin
if m:plugin_exists('resource-native') then
    m:load_plugin('resource-native')
    m:info("native resource plugin loaded")
else
    m:info("No native resource plugin found...")
end
-- load the dbus resource plugin
m:try_load_plugin('resource-dbus', {
    dbus_bus = "session",
    dbus_service = "org.Murphy",
    dbus_track = true,
    default_zone = "driver",
    default_class = "implicit"
})
m:info("dbus resource plugin loaded")
-- define application classes
application_class { name="implicit", priority=0, modal=false, share=true,
                    order="fifo" }
-- define zone attributes
zone.attributes {
}
-- define zones
zone {
    name = "driver"
}
-- define resource classes
resource.class {
    name = "audio_playback",
    shareable = true,
    attributes = {
        role = { mdb.string, "music", "rw" },
        pid = { mdb.string, "<unknown>", "rw" },
        policy = { mdb.string, "relaxed", "rw" }
    }
}
-- SyncEvolution resources: one per runtest.py
-- Some tests can run in parallel. Those resources are shareable.
for i,v in pairs {
    -- compiling the source on one platform
    "compile",
    "install",
    "dist",
    -- checking out source
    "libsynthesis",
    "syncevolution",
    "activesyncd",
    -- local tests
    "evolution",
    "dbus",
    "pim",
} do
    resource.class {
        name = v,
        shareable = true
    }
end
-- TODO (in runtests.py): some of these resources overlap
for i,v in pairs {
    -- tests involving unique peers
    "googlecalendar",
    "googlecontacts",
    "owndrive",
    "yahoo",
    "oracle",
    "davical",
    "apple",
    "googleeas",
    "exchange",
    "edsfile",
    "edseds",
    "edsxfile",
    "davfile",
    "edsdav",
    "mobical",
    "memotoo",
} do
    resource.class {
        name = v,
        shareable = false
    }
end
-- test for creating selections: don't remove, murphyd won't start without it
-- (E: Failed to enable resolver autoupdate.)
mdb.select {
    name = "audio_owner",
    table = "audio_playback_owner",
    columns = {"application_class"},
    condition = "zone_name = 'driver'",
}
The resultchecker.py script must not rely on being able to cd into
a specific directory and then having the chroot use that same directory,
because paths may be different. Instead always use absolute paths inside
the schroot commands.
Only the "client-test" executable and its test data were installed so far, in
/usr/bin and the doc dir, respectively. Now also test-dbus.py and testpim.py plus their
data files get installed, using a new /usr/lib/syncevolution/test directory
for all test files and commands.
"runtests.py --prebuilt" can take one or more .deb files including these
tests, will install them as the "compile" operation and then will use the
installed SyncEvolution, without overriding any paths. This relies on having
suitable rights for "dpkg" and /usr/lib/syncevolution/test; the way how that
is done in the nightly testing is via chroots where the entire file system
is writable by the normal test user.
Normal users without such write access to /usr/lib/syncevolution/test need
to copy that directory into their home directory first.
This adds hooks into doAutoSyncLocalSuccess() which are used by
testAutoSyncLocalMultiple() to start a session while waiting for auto-sync,
access the config which will get auto-synced, then continue waiting.
While at it, also measure the actual auto sync interval and roughly
compare against the configured one.
The actual timeout duration was too small by a factor of 1000 (ms
versus s mismatch). Did not affect existing tests which just had their
(anyway small) timeouts triggered too soon.
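Assuming the interval ends up in a call like g_timeout_add(), which
takes milliseconds, the fix is a factor-1000 sketch (names invented):
#include <glib.h>

static gboolean onTimeout(gpointer data);   // invented callback

static void scheduleAfterSeconds(guint seconds)
{
    // wrong: g_timeout_add(seconds, onTimeout, NULL) treats the
    // value as milliseconds, firing 1000x too early
    g_timeout_add(seconds * 1000, onTimeout, NULL);
}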
When enabling auto-sync for a config and then accessing or syncing the
config manually via the command line tool, the server would abort at
the time when the auto-sync was originally scheduled.
The reason is that rescheduling reset the timeout which caused the
rescheduling, only to be cleared when the rescheduling callback
returns. Then when it triggered next, an empty boost::function was
called. The fix is to track whether the Timeout instance still
refers to the same glib timeout and only clear the instance if that's
still the case.
A unit test will be committed separately.
The interval is measured from the start of last sync. The rationale for that
is that it makes the time when syncs run independent from the duration of each
sync, which is more predictable.
The implementation already works like that, it just was not clear from the
property description.
The ESourceRegistry singleton used to be handled by the lib, now it
only hosts a pointer to it created elsewhere. To avoid a false
dependency, the lib should not be linked against EDS.
A user reported via email that the Nokia 515 sends
'application/vnd.syncml+wbxml; charset=UTF-8' as the type of its messages.
This tripped up syncevo-http-server, leading to:
[ERROR] syncevo-dbus-server: /org/syncevolution/Server: message type 'application/vnd.syncml+wbxml; charset=UTF-8' not supported for starting a sync
We need to strip the '; charset=UTF-8' suffix also when checking for
WBXML.
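A sketch of the more tolerant check (helper invented; the real code
differs):
#include <string>

static bool isWBXML(const std::string &contentType)
{
    // compare only the MIME type, ignoring parameters like
    // "; charset=UTF-8" as sent by the Nokia 515
    std::string base = contentType.substr(0, contentType.find(';'));
    while (!base.empty() && base[base.size() - 1] == ' ') {
        base.resize(base.size() - 1);
    }
    return base == "application/vnd.syncml+wbxml";
}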
Somehow some policy crept in that said that only versions < .99.6 are
pre-releases. Not sure where that came from. The goal is to mark all
.99 releases as pre-release in the version string.
The earlier fix did not cover the case of compiling SyncEvolution
statically. In that case the *Register.cpp files do get compiled for
all backends. When nothing is enabled, signonRegister.cpp cannot
register any kind of provider.
Source backends can register an inactive source backend to document
that the backend exists and merely was not enabled during compilation;
there is no such support for providers.
When trying to add provideruoa.so compiled on Ubuntu Quantal to
the archive compiled on Ubuntu Lucid, undefined references to
symbols from a newer libstdc++ tripped up the symbol check.
This is actually okay for system libraries, the .so will simply
not load and/or work on older systems. Filter out such warnings,
starting with libstdc++. Other system libs can be added as needed.
When neither gSSO nor UOA were enabled, the provideruoa.so was
still enabled and failed to link because its LDFLAGS were not
set. It shouldn't have been enabled at all.
While we are at it, allow both to be enabled in the build rules,
even though the code doesn't support it (symbol clashes).
Add a new configuration flag (--enable-uoa) to enable building the
signon backend for Ubuntu Online Accounts. The work done for gSSO is
reused and just slightly adapted (using preprocessor conditions) to work
with UOA as well.
In the future, nightly testing will use chroots which copy their root
filesystem and all additional files, instead of bind mounting them as it is
done now. The advantage of that setup is that the test runs become independent
from each other and thus can be parallelized. The downside is that paths to
files created inside the chroot have to be accessed via a different path from
the outside.
runtests.py will be invoked with those outside paths and needs to translate
them to the shorter paths used inside the chroot when invoking commands that
run inside it. This affects the current directory and any paths in the
argument list.
Because of an uninitialized memory read introduced for DLT after
1.3.99.5, output from the syncevolution command line tool was not
shown randomly.
Other usages of the second constructor for MessageOptions may also
have been affected.
That this was not caught by nightly testing points towards a problem
which will have to be solved separately: test-dbus.py needs to run
syncevolution under valgrind.
Change the EDS backend to not batch by default and only enable it
for PIM Manager PBAP syncs.
This avoids unnecessary overhead when running a normal SyncML sync
with a non-SyncEvolution peer, because then the batched operation
would be flushed immediately by the Synthesis engine, and more
importantly, it fixes a segfault in
Client::Sync::eds_event_eds_contact::testLargeObject EDS<->DAV tests.
Enabling batching during general syncs obviously needs further work...
"Testing" was updated from Debian Wheezy to the current Debian Testing
with GNOME 3.8. This triggered some minor new leaks and changed some existing
ones.
When TEST_DBUS_DLT is set, start the DLT daemon and tell syncevo-dbus-server
to use it. This relies on dlt-daemon being in the PATH, which requires
some changes in setting that env variable via runtests.py.
unbuffer from expect-dev (on Debian) is needed to get dlt-daemon to
print each message immediately.
This only adds the infrastructure for testing DLT logging. The actual
tests still need to be added.
libsynthesis leaks some small amount of memory when it gets unloaded.
That's better than trying to free on unload, because it is not clear
how long the instances are needed.
The hyphen introduced with the alternate EDS backends wasn't expected
by the regex. Now simply assume that backend names are non-whitespace
characters.
In testSyncOutput, valgrind really chewed more than one minute
on syncevo-dbus-server during process shutdown. We need to give
it enough time, otherwise the test failed with "process had to be
killed".
The hack for the EDS < 3.6 deficiency where EDS adds an EXDATE must not
be used when running a client-test executable compiled for old EDS with
an EDS backend for new EDS. This caused incorrect test failures with
new EDS.
Therefore replace the compile-time check with a runtime check.
libical 1.0 changed the library name to libical.so.1, causing dependency
problems because syncevolution.org binaries required libical.so.0.
It turned out that the relevant parts of the API and ABI had not changed,
so simply allowing both versions of the library is sufficient.
In addition, a detection of libical.so.1 gets added, just in case.
Most glib warnings are too technical for normal users. Better restrict
their logging to "for developers", which keeps them out of stderr of normal
invocations.
Triggered by a new warning showing up with GNOME 3.8 which didn't seem
to cause any problems.
The code wasn't used and incorrectly made it look like SyncEvolution used an
enum which changed its value between libical.so.0 and libical.so.1. Better
remove the code...
obexd 0.48 is almost the same as obexd 0.47, except that it dropped
the SetFilter and SetFormat methods in favor of passing a Bluez 5-style
filter parameter to PullAll.
SyncEvolution now supports 4, in words, four different obexd
APIs. Sigh.
The PIM Manager should be able to start normally and the default sort
order should be used instead of the invalid one from the config.
Invalid address books are also tested, without checking for any
specific value from GetActiveAddressBooks().
Failure to set the sort order from pim-manager.ini should not
prevent the startup of the PIM Manager because the client
cannot really diagnose and fix the problem. It is better
to try again with the default sort order.
The content of pim-manager.ini should be monitored to find issues
like the one reported in FDO #70772. For that particular problem
to appear we also need to use SetActiveAddressBooks().
Removing a peer accidentally wrote the updated list of active address
books into the "sort" property of pim-manager.ini, which then
prevented starting the PIM Manager (a different problem which will
be addressed separately).
The normalization of phone numbers depends on the locale. This commit
adds a test for that and code which works without relying on EDS to
re-calculate those normalized numbers.
This is not the normal mode of operation and thus this can't be the
final solution. The right solution would be for EDS to notice the
locale change, re-check all precomputed phone numbers
(X-EVOLUTION-E164 parameter) and emit change notifications. Those then
would get picked up by folks and from there, update the PIM views.
The tests rely on running dbus-session.sh with
DBUS_SESSION_SH_SYSTEM_BUS set.
If started with DBUS_SESSION_SH_SYSTEM_BUS non-empty, dbus-session.sh
will create two private buses and use one as the system bus, the other
as session bus.
The purpose is to allow fake daemons on the system bus without having
to modify the clients. At the moment, those clients still need to be
told to contact a fake daemon on the session bus.
Listen to signals from localed D-Bus system service and update all
internal state which depends on the current locale. This state includes:
- pre-computed data in all loaded contacts
- filtering (for example, case sensitivity is locale dependent)
- the sort order
In the current implementation, the entire LocaleFactory gets replaced
when the environment changes. The new instance gets installed in
FullView, then the sort comparison gets created anew and the existing
setSortOrder() updates the main view, which includes re-computing all
pre-computed data.
As a last step, the search filter is re-created and filtering gets
updated in all active filter views.
There is a minor risk that unnecessary changes get emitted because
first filtered views react to modified contacts and/or reshuffling
them, then later their filter gets replaced.
The listening functionality is meant to be optional, which includes
running inside an environment which doesn't even have a system D-Bus,
like the nightly testing.
This is the client side for the org.freedesktop.locale1 "Locale"
property. It ensures that new values of that property are made
available locally.
Optionally it can also update the current process' environment and
send a simpler "locale environment changed" if it made some changes.
This needs to be enabled by connecting the setLocale() method to the
m_localeValues signal.
After testing more with EDS 3.8 on Debian Testing, random hangs
were observed inside the glib main loop while waiting for the
registry creation callback. It's uncertain why that happens,
and due to lack of time at the moment it is easier to switch
back to the synchronous method. Needs to be investigated later.
Accent-insensitive search ignores accents, using the same code as in
EDS. Transliterated search ignores foreign scripts by transliterating
search term and contact properties to Latin first. That one is using
ICU directly in the same way as EDS, but doesn't use the EDS
ETransliterator class to avoid extra string copying.
This commit changes the default behavior such that searching is by
default most permissive (case- and accent-insensitive, does
transliteration). Flags exist to restore more restrictive matching.
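A hedged sketch of the transliteration step with ICU (function name
invented; error handling omitted; not the actual SyncEvolution code):
#include <unicode/translit.h>
#include <memory>
#include <string>

static std::string toLatin(const std::string &utf8)
{
    UErrorCode status = U_ZERO_ERROR;
    std::unique_ptr<icu::Transliterator> trans(
        icu::Transliterator::createInstance("Any-Latin; Latin-ASCII",
                                            UTRANS_FORWARD, status));
    icu::UnicodeString str = icu::UnicodeString::fromUTF8(utf8);
    trans->transliterate(str);
    std::string result;
    str.toUTF8String(result);
    return result;   // e.g. "Müller" -> "Muller", Cyrillic -> Latin
}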
By default, syncevo-dbus-server logs to syslog. This can be changed
with the new parameter of --enable-dbus-service. For example, this
changes the logging to DLT (must be available, of course):
configure "--enable-dbus-service=--dlt --no-syslog"
Diagnostic Log and Trace (DLT) manages a sequence of log messages,
with remote controllable level of detail. SyncEvolution optionally
(can be chosen at compile time and again at runtime) uses DLT instead
of its own syncevolution-log.html files.
The nightly testing always built client-test. When compiling
for EDS 3.6, we don't actually run any tests on the build platform,
so in that case it was not immediately possible to build client-test
(no cppunit installed). We could build it now, but there's simply
no need for it, so save the time and skip it.
Adding link flags to the libsyncevolution.la dependencies only works
for .la files. This started to break on Debian Testing (?) because
a -L<something> flag became a dependency.
A proper fix would be to set libs and dependencies separately,
but for now it is easier to rely on GNU make and filter for the
desired .la files.
We unintentionally and unnecessarily included boost/signals.hpp instead of
boost/signals2.hpp, which started to trigger warnings with later versions of
boost.
This fixes compilation with EDS 3.6. libebook-contacts was needed
for a while when configuring address books directly. It is not needed
anymore because now we simply clone the system database.
The rewrite for non-recursive make made the cppunit utility mandatory.
This is too strict, it really isn't needed when both integration and
unit tests are disabled.
We still want the make variables set, though, if cppunit was found,
because then "make src/client-test" still works.
The backends must be compiled differently for EDS < 3.6 (using the old
API before EBookClient, ECalClient and ESource, ideally in
compatibility mode) and for EDS >= 3.6 (using the new API, with hard
linking against libebook-1.2.so.14, libecal-1.2.so.15,
libedataserver-1.2.so.17).
With these changes, a SyncEvolution binary compiled for the older EDS
API will be able to load and use backends compiled against the current
one. Both backends can be installed in the same
lib/syncevolution/backends dir under different names. The newer ones
must have an additional -<version> appendix before the .so suffix.
Then loading will attempt to use those backends first and if
successful, skip the older ones. This is necessary because loading
both into the same process will fail due to symbol clashes. If
unsuccessful (as on systems with only the older EDS), the dynamic
linker will fail and SyncEvolution proceeds with the backends for the
old API.
Packaging of binaries as .deb/.rpm/.tar.gz includes these additional
backends if given in the EXTRA_BACKENDS variable when invoking "make
deb rpm distbin".
The EDS backends for >= 3.6 failed to work properly in SyncEvolution
binaries compiled for EDS < 3.6, because the EDS compatibility
layer reported "EDS not found" to the backend.
A backend that gets compiled without compatibility hacks doesn't
need the runtime check, because merely loading already guarantees
that the necessary libs were found. Therefore make the check
a compile-time define when compatibility mode is disabled.
A backend compiled with compatibility mode enabled then fails to load
in a binary compiled without because libsyncevolution.so does not
export the necessary symbols. That's actually desirable, because the
backend would not work anyway.
Setting the SYNCEVOLUTION_EBOOK_QUERY env variable to a valid EBook
query string limits the results to contacts matching that
query. Useful only in combination with --print-items or --export. Only
implemented for EDS >= 3.6.
Previously, the current default country was used to turn phone numbers
without an explicit country code into full E164 numbers, which then
had to match the search term when doing a caller ID lookup.
This was inconsistent with EDS, where a weaker
EQUALS_NATIONAL_PHONE_NUMBER was done. The difference is that a
comparison between a number with country code matches one without if
the national number is the same, regardless of the current default
country. This is better because it reduces the influence of the hard
to guess default country on matching.
SyncEvolution also differed from EDS by doing a prefix comparison,
which in theory might have also ignored differences caused by
extensions. It is uncertain whether that was useful, so for the sake
of consistency, a full number comparison of the national number is now
done.
Another advantage of this change is the lower memory consumption and
faster comparison, because phone numbers are now stored as 4 + 8 byte
integers instead of strings of varying length.
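A sketch of such a comparison on top of libphonenumber (assuming that
library is what implements the parsing; names invented):
    #include <phonenumbers/phonenumberutil.h>
    #include <string>
    using i18n::phonenumbers::PhoneNumber;
    using i18n::phonenumbers::PhoneNumberUtil;
    // Two numbers match if their national significant numbers are equal.
    // The default country only matters while parsing a number which has
    // no explicit country code.
    static bool matchNationalNumber(const std::string &a,
                                    const std::string &b,
                                    const std::string &defaultCountry)
    {
        PhoneNumberUtil *util = PhoneNumberUtil::GetInstance();
        PhoneNumber pa, pb;
        if (util->Parse(a, defaultCountry, &pa) != PhoneNumberUtil::NO_PARSING_ERROR ||
            util->Parse(b, defaultCountry, &pb) != PhoneNumberUtil::NO_PARSING_ERROR) {
            return false;
        }
        std::string na, nb;
        util->GetNationalSignificantNumber(pa, &na);
        util->GetNationalSignificantNumber(pb, &nb);
        return na == nb;
    }
The parsed representation keeps the country code in a small integer and
the national number in a 64 bit integer, which is presumably where the
"4 + 8 byte" footprint mentioned above comes from.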
The array operator happens to work on some platforms, but not others
(see previous commit). Make it private without an implementation to
catch undesired usage of it on platforms where the code would
otherwise happen to work.
This fixes the following problem, seen with Boost 1.53.0 on altlinux
when compiling for EDS >= 3.6:
/usr/include/boost/smart_ptr/shared_ptr.hpp: In instantiation of 'typename boost::detail::sp_array_access<T>::type boost::shared_ptr<T>::operator[](std::ptrdiff_t) const [with T = char*; typename boost::detail::sp_array_access<T>::type = void; std::ptrdiff_t = long int]':
src/backends/evolution/EvolutionSyncSource.cpp:163:38: required from here
/usr/include/boost/smart_ptr/shared_ptr.hpp:663:22: error: return-statement with a value, in function returning 'void' [-fpermissive]
make[2]: *** [src/backends/evolution/src_backends_evolution_syncecal_la-EvolutionSyncSource.lo]
The "void" type above is wrong, so it looks like a missing type trait
for the pointer type used in the smart_ptr. PlainGStrArray already had
an at() method to work around such issues, so use it. Not sure why this
one usage of [] slipped through.
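Reduced to a sketch, the pattern looks like this (not the actual class):
    // Declaring the operator private and never defining it turns outside
    // use into a compile error and accidental inside use into a link
    // error, instead of behavior that differs between platforms.
    class PlainGStrArray {
     public:
        PlainGStrArray(char **array) : m_array(array) {}
        char * &at(int index) { return m_array[index]; }
     private:
        char * &operator[](int index); // intentionally not implemented
        char **m_array;
    };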
While running a sync with a binary compiled with -fPIE -pie, a crash
in strlen() occurred because a 64 bit string pointer coming from D-Bus
was incorrectly passed through a 32 bit unsigned variable.
These special compile flags merely caused the problem to occur
reliably, it may also have crashed under other circumstances.
Kudos to Tino Keitel for reporting the problem and identifying the
relation to the compile flags.
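The class of bug, reduced to a hedged illustration (the real code passed
the value through D-Bus glue code rather than an explicit cast):
    #include <stdint.h>
    #include <string.h>
    // With -fPIE -pie, mappings are more likely to end up above 4GB, so
    // truncating a pointer to 32 bits reliably produces an invalid
    // address and strlen() crashes.
    size_t brokenLength(const char *str)
    {
        uint32_t truncated = (uint32_t)(uintptr_t)str; // upper 32 bits lost
        return strlen((const char *)(uintptr_t)truncated);
    }
    size_t correctLength(const char *str)
    {
        return strlen(str); // keep pointers in pointer-sized types
    }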
Running a sync while the UI had no service selected caused a crash in
find_updated_source_progress() because the code dereferences the NULL
prog_data->data->current_service pointer. Affected both the GTK2 and
GTK3 UI.
Fix it by checking for NULL and not doing anything if NULL.
When compiling source files of client-test, use -g as default CXXFLAGS
instead of the "-g -O2" that autotools normally picks. That speeds up
compilation significantly (on some platforms, gcc can't deal with the
many templates in ClientTest.cpp well) and leads to more useful
executables (suitable for interactive debugging) even when the rest of
the code gets optimized.
Explicitly specifying CXXFLAGS still overrides this per-target
default.
This feature depends on GNU make. A configure check is in place
to disable it when not using GNU make.
It seems that sometimes setting up a session with GNOME keyring fails such
that all further communication leads to decoding problems.
There is an internal method to reset the session, but it cannot be called
directly. As a workaround, fake the death of the GNOME keyring daemon
and thus trigger a reconnect when retrying the GNOME keyring access.
This drops the support for libgnome-keyring < 2.20, because older
versions did not have the error->text conversion method which is now
used in revised error and log messages.
This code also adds a retry loop around reading/writing passwords.
This was meant to work around these intermittent Gkr errors:
Gkr: received an invalid, unencryptable, or non-utf8 secret
Gkr: call to daemon returned an invalid response: (null).(null)()
These lead to an "Error communicating with gnome-keyring-daemon" status
code from libgnome-keyring.
However, once the error has occurred in a process, it persists for at least
two seconds, possibly forever. Therefore the retry loop is not enabled.
This enables interoperability testing with owndrive.com, a hosted OwnCloud
provider. owndrive.com was chosen because a user picked it and support was
able to help with some problems. It's also a bit simpler than running OwnCloud
ourselves. But it may also be slower, so perhaps ultimately a local
installation will be needed.
This adds testing of Google Contacts syncing via CardDAV. It passes
only thanks to the simplified EDS contacts test cases, with one exception:
removing extensions is known to fail because the server keeps them even
when receiving an update without them.
Google CardDAV has one peculiarity: it renames new contacts during PUT without
returning the new path to the client. See also
http://lists.calconnect.org/pipermail/caldeveloper-l/2013-July/000524.html
SyncEvolution already had a workaround for that (PROPFIND on old path, extract
new path from response) which happened to work. This workaround was originally
added for Yahoo, which sometimes merges contacts into existing ones. In
contrast to Yahoo, Google really seems to create new items.
Without some server specific hacks, the client cannot tell what happened.
Because Google is currently supported and Yahoo is not, let's change the
hard-coded behavior to "renamed items are new".
Google CardDAV parser/encoder swallows white spaces, possibly because
of the differences between vCard 2.1 and 3.0. It also escapes the colon
although it shouldn't.
Google is looking into this, so I am not trying to work out a workaround and
merely remove the problematic fields (mostly NOTEs and one URL) to get the
tests to pass.
An explicit CHARSET parameter causes the Google CardDAV server to drop
the NOTE, even if the charset is UTF-8. Removing this known problem from
the test set because it shouldn't happen in practice.
The nightly testing configures some platforms such that
XDG_CONFIG/DATA/CACHE_HOME are inside the build dir. It also populates these
dirs with files (for example, GNOME Online Accounts) which must survive all
cleaning of these directories.
Long term it would be better to separate test files from build files,
but that's a task for some other time...
When running a command with dbus-session.sh, all additional
output should go to stderr. That keeps stdout available for output
from the command (like "syncevolution --print-databases") without
getting polluted by something else.
This extra output also showed up in the nightly tests reports.
When running a local sync, the syncURL/username/password are not meant
for the sync and cannot be used if they refer to an AuthProvider which
cannot return plain username/password.
In all other cases, this may or may not work, so at least try it instead
of hard-coding the IdentityProviderCredentials.
"username = goa:..." selects an account in GOA and retrieves the
OAuth2 token from that.
The implementation uses the GOA D-Bus API directly, because our C++
D-Bus bindings are easier to use and this avoids an additional library
dependency.
It is highly annoying that D-Bus method calls (synchronous and
asynchronous!) time out fairly quickly (a few minutes) when doing
interactive debugging of syncevo-dbus-server.
Traditionally, every single method call had to be made with
timeout=<some high value>, which made the test code unnecessarily
larger and often was forgotten.
Now all method calls via the session bus connection are done with a
high timeout if none was set by the caller. This is achieved by
wrapping the default (internal?) method send methods on the bus object
with versions which add that timeout. This should be safe, because
these methods are part of the public D-Bus Python API.
When clients like the GTK sync-ui stored a password, it was always
stored as plain text in the config.ini file by the
syncevo-dbus-server. The necessary code for redirecting the password
storage in a keyring (GNOME or KWallet) simply wasn't called in that
case.
The command line tool, even when using the D-Bus server to run the
operation, had the necessary code active and thus was not affected.
The client template is also used in cases where passwords are not
needed (local sync) and where passwords cannot be stored in a keyring
due to the missing syncURL/remoteDeviceID. Therefore don't set dummy
username/password values in the template.
When configuring a new peer and looking for databases, we need the
database password of an already existing source config, otherwise the
lookup will fail if that password is hidden in a keyring.
The command line tool in --daemon=no mode did not use the GNOME
keyring or KWallet even if the syncevo-dbus-server did, leading
to failing test cases when actually starting to use it by default
there.
Now all components use the same default: use safe password storage if
any was enabled during compilation, don't use if not.
This also makes SyncEvolution work without user intervention on
systems without a password storage.
Figuring out where credentials come from became harder. These debug
messages help. Perhaps they should even be logged as INFO messages
such that normal users can see them?
The code works with gSSO (https://01.org/gsso). With some tweaks to
the configure check and some ifdefs it probably could be made to work
with Ubuntu Online Accounts.
The code depends on an account accessible via libaccounts-glib which
has a provider and (optionally) services enabled for that
provider. It is not necessary that the account already has a signon
identity ID, the backend will create that for the provider (and thus
shared between all services) if necessary.
Therefore it is possible to use the ag-tool to create and enable the
account and services. Provider and service templates are in the next
commit.
If given an AuthProvider which can handle OAuth2, then OAuth2 is
used instead of plain username/password authentication.
Obtaining the OAuth2 token must be done at a point where we can still
abort the request. If obtaining the token fails, then this should be
considered a fatal error which aborts scanning for resources. Other
errors cause the current URL to be skipped while scanning continues.
This commit moves the "execute request" functionality back into the
Neon::Session class, because that is where most of the logic (retry
request?) and state is (access tokens which persist across requests).
The real username is only relevant when running a sync. When looking
at a config with a D-Bus client like the GTK UI, the username should
always be "id:<config>", to avoid accidentally removing the
indirection, while the password should be the real one, to allow the
user to edit like he normally would with passwords stored in a
keyring.
To achieve this, overriding the username must be suppressed when
resolving as part of the D-Bus config API. While at it, move the
entire "iterate over properties" into a common utility function in
PasswordConfigProperty.
Storing a password with just "user=foo" as lookup attributes is problematic
because it is too unspecific. Different services or configs with the same
user, but different passwords end up overwriting each other's passwords. In
practice, the config with "user=foo" even had the effect of removing the entry
for "user=foo server=bar".
The situation can be avoided by using the remotePeerId as fallback when the
syncURL is empty. There is a (minor?) risk that some configs were stored
in the past without that additional key and now won't be found anymore in the
keyring. Users need to re-set such passwords.
If an attempt is made to store a password with insufficient lookup attributes,
GNOME keyring will now reject the attempt.
When instantiating multiple SyncConfig instances, it is important that
they share filter nodes and the underlying tree, because the nodes
store copies of already retrieved credentials (lookup shall only be
done once!) and the trees represent the current content of the config
(which must be consistent when making changes).
Currently the new code is not thread-safe, but nor are nodes and trees,
so a lot more work would be needed to make this safe. Instead we avoid
concurrency.
SyncConfigTest::normalize() only passed because FileConfigTree accidentally
created the "peers" directory inside the peer. That will change, so don't rely
on that. Instead ensure that the config.ini file of the peers gets written
because it contains something.
Instantiating LogDirTest used to create a SyncContext and use that as logger
for the entire duration of testing inside client-test, even when not running
LogDirTest tests at all. This is undesirable and together with caching of the
config tree while in use, broke some other tests (EvolutionCalendarTest)
because obsolete DB names were shared.
It is better to create the context during setUp() and remove it in tearDown().
The previous approach made FileConfigTree more complex than necessary.
Having an abstract ConfigTree::getRootPath() with undefined behavior
is bad design.
The code had undesirable side effects: inside a peer config,
another "peers" directory was created because FileConfigTree didn't
know whether creating that dir was required or not.
Now all of the complexity is in SyncConfig, which arguably knows
better what the tree stands for and how to use it.
In practice, the methods are always called for a specific SyncConfig.
Passing that allows removing several other parameters and, more
importantly, also grants access to the config and through that other
configs. This will be needed for the indirect credential lookup.
"username", "proxyUsername" and "databaseUser" used to be simply a
string containing the name of the respective user or (in the case of
the ActiveSync backend) the account ID in gconf.
Now it is also possible to provide credentials (username + password)
indirectly: when any of these properties is set to "id:<config name>",
then the "username/password" properties in that config are used
instead. This is useful in particular with WebDAV, where credentials
had to be repeated several times (target config, in each database when
used as part of SyncML) or when using a service which requires several
configs (Google via SyncML and CalDAV).
For user names which contain colons, the new "user:<user name>" format
must be used. Strings without colons are assumed to be normal user
names.
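For illustration, a hypothetical pair of configs (all names invented):
    # target config for some WebDAV service:
    username = id:shared-login
    # config "shared-login" holds the actual credentials; the "user:"
    # prefix is needed here because the user name contains colons:
    username = user:john:doe@example.com
    password = ...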
This commit changes the SyncConfig APIs for this extension. More work
is needed to make the indirect lookup via "id" functional.
Passwords are cached after the initial check as temporary property
values. The explicit string members are obsolete and can be removed
together with the code using them.
The leak started showing up after switching to GNOME 3.8, independent
of any change in SyncEvolution. Looks like a bug in GNOME. I tried
to debug it, but gave up because it seems to be timing dependent such
that the error path was not triggered when running without valgrind.
Using asynchronous method calls did not eliminate the default timeout,
as expected. In particular, SyncPeer() still timed out when syncing many
contacts.
Instead set up all necessary parameters (= callbacks and now also a
long timeout) in a hash and pass that to all D-Bus method calls. It's
less code duplication, too.
Using the underscore in the UID has been wrong all along; it only
happened to work because UID sanity checking was missing. After adding
it, the example broke.
Now simply remove the underscore. It makes the UID less readable, but it
doesn't have to be, and ensures that file names and database names
contain the UID as-is.
The "D-Bus testing: don't depend on server output during startup, truely quiet
TEST_DBUS_QUIET" change broke TestFileNotify.testSession3: because the test
started after syncevo-dbus-server obtained the bus name and before the server
could finish its startup (which test-dbus.py previously waited for by waiting
for the server's output), the executable was touched too soon and the server
often didn't notice that it had to shut down.
This inherent race condition can't be fixed, but in reality it should be a lot
rarer (not happening at all?) than in testing. Therefore fix testing by having
test-dbus.py wait for a response from syncevo-dbus-server (which implies that
it is done with its startup) before starting the test.
This actually applies to all three ways of starting syncevo-dbus-server (with
gdb, with and without logging), so now all of them use the same code.
Only saw this once in nightly testing and couldn't reproduce it:
$ make -j 16
perl /data/runtests/work/sources/syncevolution/src/syncevo/readme2c.pl
/data/runtests/work/sources/syncevolution/README.rst
>src/syncevo/CmdlineHelp.c
/usr/bin/xsltproc -o src/dbus/interfaces/syncevo-server-doc.xml
/data/runtests/work/sources/syncevolution/src/dbus/interfaces/spec-to-docbook.xsl
/data/runtests/work/sources/syncevolution/src/dbus/interfaces/syncevo-server-full.xml
/usr/bin/xsltproc -o src/dbus/interfaces/syncevo-connection-doc.xml
/data/runtests/work/sources/syncevolution/src/dbus/interfaces/spec-to-docbook.xsl
/data/runtests/work/sources/syncevolution/src/dbus/interfaces/syncevo-connection-full.xml
/usr/bin/xsltproc -o src/dbus/interfaces/syncevo-session-doc.xml
/data/runtests/work/sources/syncevolution/src/dbus/interfaces/spec-to-docbook.xsl
/data/runtests/work/sources/syncevolution/src/dbus/interfaces/syncevo-session-full.xml
/usr/bin/glib-genmarshal
/data/runtests/work/sources/syncevolution/src/dbus/glib/syncevo-marshal.list
--header --prefix=syncevo_marshal > src/dbus/glib/syncevo-marshal.h
runtime error
xsltApplyStylesheet: saving to src/dbus/interfaces/syncevo-session-doc.xml may
not be possible
/usr/bin/xsltproc -o src/dbus/glib/syncevo-server.xml
/data/runtests/work/sources/syncevolution/src/dbus/interfaces/spec-strip-docs.xsl
/data/runtests/work/sources/syncevolution/src/dbus/interfaces/syncevo-server-full.xml
runtime error
xsltApplyStylesheet: saving to src/dbus/interfaces/syncevo-server-doc.xml may
not be possible
make: *** [src/dbus/interfaces/syncevo-server-doc.xml] Error 9
make: *** Deleting file `src/dbus/interfaces/syncevo-server-doc.xml'
make: *** Waiting for unfinished jobs....
make: *** [src/dbus/interfaces/syncevo-session-doc.xml] Error 9
make: *** Deleting file `src/dbus/interfaces/syncevo-session-doc.xml'
Looks like multiple xsltproc commands ran in parallel and then stepped on each
other's toes while creating the src/dbus/interfaces directory, which does not
exist after an out-of-tree configure.
To address the issue, serialize creating that directory by having make create
it as a prerequisite.
Since the "EDS contacts: avoid unnecessary DB writes during slow sync due to
FILE-AS" change, eds_contact::testItems failed because the "bday;special UTC
offset" (specific to Exchange testing) test user had no FILE-AS set, while the
one eventually stored in EDS had (because of the slow sync patch). Using EDS
3.8 probably would have triggered the same issue.
Fixing this by adding the FILE-AS property to the reference data. This is
a hard requirement now for tests to pass with EDS.
While there are sessions pending or active, the server should not shut down.
It did that while executing a long-running PIM Manager SyncPeer() operation,
by default after 10 minutes.
This was not a problem elsewhere because other operations are associated with
a client, whose presence also prevents shutdowns. Perhaps PIM Manager should
also track the caller and treat it like a client.
Like everything else that waits for a certain event on the main loop,
SYNCEVO_GLIB_CALL_SYNC() should also better use GRunWhile(). This is
necessary to be usable in threads.
obexd 0.48 is almost the same as obexd 0.47, except that it dropped
the SetFilter and SetFormat methods in favor of passing a Bluez 5-style
filter parameter to PullAll.
SyncEvolution now supports 4, in words, four different obexd
APIs. Sigh.
The code works with gSSO (https://01.org/gsso). With some tweaks to
the configure check and some ifdefs it probably could be made to work
with Ubuntu Online Accounts.
The code depends on an account accessible via libaccounts-glib which
has a provider and (optionally) services enabled for that
provider. It is not necessary that the account already has a signon
identity ID, the backend will create that for the provider (and thus
shared between all services) if necessary.
Therefore it is possible to use the ag-tool to create and enable the
account and services. Provider and service templates are in the next
commit.
If given an AuthProvider which can handle OAuth2, then OAuth2 is
used instead of plain username/password authentication.
Obtaining the OAuth2 token must be done at a point where we can still
abort the request. If obtaining the token fails, then this should be
considered a fatal error which aborts scanning for resources. Other
errors cause the current URL to be skipped while scanning continues.
This commit moves the "execute request" functionality back into the
Neon::Session class, because that is where most of the logic (retry
request?) and state is (access tokens which persist across requests).
The real username is only relevant when running a sync. When looking
at a config with a D-Bus client like the GTK UI, the username should
always be "id:<config>", to avoid accidentally removing the
indirection, while the password should be the real one, to allow the
user to edit like he normally would with passwords stored in a
keyring.
To achieve this, overriding the username must be suppressed when
resolving as part of the D-Bus config API. While at it, move the
entire "iterate over properties" into a common utility function in
PasswordConfigProperty.
A SHARED_LAYOUT config tree caches config nodes. Allow a second config
to use those same nodes as an already existing config. This will be
useful in combination with indirect password lookup, because then the
credentials can be stored as temporary property values and be reused
when used multiple times in a process (for example, by CardDAV and by
CalDAV).
In practice, the methods are always called for a specific SyncConfig.
Passing that allows removing several other parameters and, more
importantly, also grants access to the config and through that other
configs. This will be needed for the indirect credential lookup.
"username", "proxyUsername" and "databaseUser" used to be simply a
string containing the name of the respective user or (in the case of
the ActiveSync backend) the account ID in gconf.
Now it is also possible to provide credentials (username + password)
indirectly: when any of these properties is set to "id:<config name>",
then the "username/password" properties in that config are used
instead. This is useful in particular with WebDAV, where credentials
had to be repeated several times (target config, in each database when
used as part of SyncML) or when using a service which requires several
configs (Google via SyncML and CalDAV).
For user names which contain colons, the new "user:<user name>" format
must be used. Strings without colons are assumed to be normal user
names.
This commit changes the SyncConfig APIs for this extension. More work
is needed to make the indirect lookup via "id" functional.
Passwords are cached after the initial check as temporary property
values. The explicit string members are obsolete and can be removed
together with the code using them.
Like everything else that waits for a certain event on the main loop,
SYNCEVO_GLIB_CALL_SYNC() should also better use GRunWhile(). This is
necessary to be usable in threads.
When clients like the GTK sync-ui stored a password, it was always
stored as plain text in the config.ini file by the
syncevo-dbus-server. The necessary code for redirecting the password
storage in a keyring (GNOME or KWallet) simply wasn't called in that
case.
The command line tool, even when using the D-Bus server to run the
operation, had the necessary code active and thus was not affected.
uint16 happened to work when compiling with a recent GNOME stack, but
without that uint16 is not defined. The right approach is to use
stdint.h and uint16_t.
We used to kill it when it showed the first sign of life via D-Bus log output
- any output! Depending on timing, it may or may not have been able to send
the "target side ready" INFO message. If it did, our strict output check
failed.
Fix that by waiting for that message, which should be the only INFO message
and thus the only one which will appear in the output text, before killing the
process.
Both maintaining the map items inside the Synthesis engine and storing
them in .ini hash config nodes inside SyncEvolution are fairly heavy
operations which are not needed at all during an ephemeral sync (= no
meta data stored, done by the PIM Manager when triggering a pure PBAP
sync).
Using the new Synthesis CA_ResumeSupported DB capability it is
possible to suppress these operations without having to fiddle with
the Synthesis DB API that SyncEvolution provides. To make it possible
at the DB layer to detect that the meta data is not needed, the
ConfigNode passed to it must be marked as volatile.
This change sped up a sync with 10000 unmodified, known items from 38s
to 23s.
The synthesisID value is required for each Synthesis source progress
event, which can be fairly frequent (more than one per item). Instead
of going down to the underlying .ini config node each time, cache the
value in the SyncSourceConfig layer.
Syncing was slowed down by forwarding all log messages from the local
sync helper to its parent and from the D-Bus helper to
syncevo-dbus-server. Quite often, the log messages then were simply
discarded by the recipient. To speed up syncing, better filter at the
source.
The syncevo-dbus-helper is told which messages are relevant by
forwarding the syncevo-dbus-server "D-Bus log level" command line
setting to the helper process as part of its argv parameters.
The syncevo-local-sync helper applies its own log level now also to the
messages sent to the parent. This ensures that messages stored in the
client log also show up in the parent log (unless the parent has more
restrictive settings, which is uncommon) and that INFO/SHOW messages
still reach the user.
For each incoming change, one INFO line with "received x[/out of y]"
was printed, immediately followed by another line with total counts
"added x, updated y, removed z". For each outgoing change, a "sent
x[/out of y]" was printed.
In addition, these changes were forwarded to the D-Bus server where a
"percent complete" was calculated and broadcast to clients. All of
that caused a very high overhead for every single change, even if the
actual logging was off. The syncevo-dbus-server was constantly
consuming CPU time during a sync when it should have been mostly idle.
To avoid this overhead, the updated received/sent numbers that come
from the Synthesis engine are now cached and only processed when done
with a SyncML message or some other event happens (whatever happens
first).
To keep the implementation simple, the "added x, updated y, removed z"
information is ignored completely and no longer appears in the output.
As a result, syncevo-dbus-server is now almost completely idle during
a running sync with no log output. Such a sync involving 10000 contacts
was sped up from 37s to 26s total runtime.
When asked to run quietly via a non-empty TEST_DBUS_QUIET,
test-dbus.py still enabled INFO messages to determine that the server
is ready. Waiting for bus name (as already done when starting with a
debugger) avoids that and allows us to disable all LogOutput signals.
If a client gave up waiting for the server's response and resent its message
while the server was still processing the message, syncing failed with
"protocol error: already processing a message" raised by the
syncevo-dbus-server because it wasn't prepared to handle that situation.
The right place to handle this is inside the syncevo-http-server, because it
depends on the protocol (HTTP in this case) whether resending is valid or
not. It handles that now by tracking the message that is currently in
processing and matching it against each new message. If it matches, the new
request replaces the obsolete one without sending the message again to
syncevo-dbus-server. When syncevo-dbus-server replies to the old message, the
reply is used to finish the newer request.
This situation is covered by
Client::Sync::eds_event_eds_contact::testManyDeletes with 100 items and
client and server running under valgrind. The test failed earlier and works
now.
Set SYNCEVOLUTION_DBUS_HELPER_VGDB=1, add --vgdb-error=1 --vgdb=yes
to VALGRIND_ARGS, run test, wait for vgdb message in valgrind*.out files,
attach as instructed.
With --vgdb-error=0, all processes block during startup, waiting for
the debugger to attach.
The previous attempt with concurrent reading while listing IDs did not
work: that listing must complete before the SyncML client contacts the
server. What works is transferring and parsing after the engine starts
to ask for the actual data.
For that we need to list IDs in advance. We use GetSize() for that.
If contacts get deleted while we read, getting the data for the
contacts at the end of the range will fail with 404, which is
understood by the Synthesis engine and leads to ignoring the ID, as
intended.
If contacts get added while we read, we will ignore them even if they
happen to be in the result of PullAll. The next sync will include
them.
When doing a PBAP sync, PIM manager asks the D-Bus sync helper to set
its SYNCEVOLUTION_PBAP_SYNC to "incremental". If the env variable
is already set, it does not get overwritten, which allows overriding
this default.
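A minimal sketch of that logic (presumably the helper does something
equivalent):
    #include <stdlib.h>
    // setenv() with overwrite=0 keeps any value that the user already
    // exported, so setting SYNCEVOLUTION_PBAP_SYNC manually overrides
    // the "incremental" default.
    static void setPBAPSyncDefault()
    {
        setenv("SYNCEVOLUTION_PBAP_SYNC", "incremental", 0);
    }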
Depending on the SYNCEVOLUTION_PBAP_SYNC env variable, syncing reads
all properties as configured ("all"), excludes photos ("text") or
first text, then all ("incremental").
When excluding photos, only known properties get requested. This
avoids issues with phones which reject the request when enabling
properties via the bit flags. This also helps with
"databaseFormat=^PHOTO".
When excluding photos, the vcard merge script as used by EDS ensures
that existing photo data is preserved. This only works during a slow
sync (merge script not called otherwise, okay for PBAP because it
always syncs in slow sync) and EDS (other backends do not use the
merge script, okay at the moment because PIM Manager is hard-coded to
use EDS).
The PBAP backend must be aware of the PBAP sync mode and request a
second cycle, which again must be a slow sync. This only works because
the sync engine is aware of the special mode and sets a new session
variable "keepPhotoData". It would be better to have the PBAP backend
send CTCap with PHOTO marked as not supported for text-only syncs and
enabled when sending PHOTO data, but that is considerably harder to
implement (CTCap cannot be adjusted at runtime).
beginSync() may only ask for a slow sync when not already called
for one. That's what the command line tool does when accessing
items. It fails when getting the 508 status.
The original goal of overlapping syncing with download has not been
achieved yet. It turned out that all item IDs get requested before
syncing starts, which thus depends on downloading all items in the current
implementation. Can be fixed by making up IDs based on the number of
existing items (see GetSize() in PBAP) and then downloading later when
the data is needed.
If PHOTO and/or GEO were the only modified properties during a slow
sync, the updated item was not written into local storage because
they were marked as compare="never" = "not relevant".
For PHOTO this was intentional in the sample config, with the
rationale that local storages often don't store the data exactly as
requested. When that happens, comparing the data would lead to
unnecessary writes. But EDS and probably all other local SyncEvolution
storages (KDE, file) store the photo exactly as requested, so not
considering changes had the undesirable effect of not always writing
new photo data.
For GEO, ignoring it was accidental.
A special merge script handles EDS file:// photo URIs. When the
losing item has the data in a file and the winning item has binary
data, the data in the file may still be up-to-date, so before allowing
MERGEFIELDS() to overwrite the file reference with binary data and
thus forcing EDS to write a new file, check the content. If it
matches, use the file reference in both items.
Performance is improved by requesting multiple contacts at once and
overlapping reading with processing. On a fast system (SSD, CPU fast
enough to not be the limiting factor), testpim.py's testSync takes 8
seconds for a "match" sync where 1000 contacts get loaded and compared
against the same set of contacts. Read-ahead with only 1 contact per
query speeds that up to 6.7s due to overlapping IO and
processing. Read-ahead with the default 50 contacts per query takes
5.5s. It does not get much faster with larger queries.
While returning items from one cache populated with a single
e_book_client_get_contacts() call, another query is started to overlap
processing and loading.
To achieve efficient read-ahead, the backend relies on the hint given
to it via setReadAheadOrder(). As soon as it detects that a contact is
not the next one according to that order, it switches back to reading
one contact at a time. This happens during the write phase of a sync
where the Synthesis engine needs to read, update, and write back
changes based on updates sent by the peer.
Cache statistics show that this works well for --print-items, --export
and slow syncs.
Writing into the database must invalidate the corresponding cached
contact. Otherwise the backup operation after a sync may end up
reading stale data.
Trying to predict in the SyncSource which items will be needed is
hard. It depends what the item is retrieved for (sync or
backup/printing) and on the sync mode (during a sync).
Instead of guessing, better have the user of the source tell the
source in advance what it will read. In most cases this is cheap
because it does not involve copying of item luids ("read all items",
"read new or modified items"). The source then can use its own internal
mechanisms to figure out what that means.
Only extracting specific items specified on the command line and the
backup operation must explicitly pass an ordered list of luids. For
the sake of simplicity, do that in a std::vector even though it
involves copying.
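A sketch of such a hint interface, with names loosely following the
description above (the real API may differ):
    #include <string>
    #include <vector>
    // The caller announces its access pattern up front; the backend can
    // then size and order its read-ahead cache accordingly.
    class ReadAheadSource {
     public:
        enum ReadAheadOrder {
            READ_NONE,           // unordered, read one item at a time
            READ_ALL_ITEMS,      // complete listing, e.g. --export
            READ_CHANGED_ITEMS,  // new or modified items during a sync
            READ_SELECTED_ITEMS  // explicit, ordered list of luids
        };
        virtual void setReadAheadOrder(ReadAheadOrder order,
                                       const std::vector<std::string> &luids =
                                           std::vector<std::string>()) = 0;
        virtual ~ReadAheadSource() {}
    };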
The
    while (<something>) g_main_context_iterate(NULL, true);
pattern only works in the main thread. As soon as multiple
threads are allowed to process events, race conditions occur,
as described before.
But the pattern is useful, so support it in all threads by
shifting the check into the main thread, which will then notify
other threads via a condition variable when something changes.
gcc complained about the "protected" m_mutex access. Apparently
it did not realize that the access was to a base class. An explicit
get() solves that. Another way to get the pointer is a type cast.
Derive from SyncSource and SyncSourceSession directly instead of going
through TrackingSyncSource. This allows removing several dummy methods
that we have to implement for TrackingSyncSource and allows
reporting existing items incrementally, instead of having to load all
of them at once for the old listAllItems().
Contacts are now already reported to the engine while their transfer
still runs. That depends on monitoring the temporary file, remapping
the larger file and continuing parsing where the previous parsing
stopped.
This only works with obexd 0.47 and obexd from Bluez 5, because it
depends on the temporary file interface. The older PullAll did not
return any data until it had downloaded everything.
Because it isn't known when the contact data will be needed, the backend
still maintains the mapping from ID to vCard data for all contacts seen
in the current session. Because that memory is backed by a temporary file
system, unused memory can be swapped out (and in) well by the OS.
If the file is in a ram-based temp file system, then it may also not
matter at all that the file gets mapped multiple times.
Instead of reading all item IDs, then iterating over them, process
each new ID as soon as it is available. With sources that support
incremental reading (only the PBAP source at the moment) that provides
output sooner and is a bit more memory efficient.
remove() is useful when we want to continue using the file but also want
to ensure that it gets deleted when we crash.
moreData() can be used to determine if the file has grown since the
last time that map() was called. In other words, it supports
processing data while obexd still writes into the file.
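A rough sketch of the underlying mechanism (simplified, without the
error handling and cleanup of the real class):
    #include <sys/mman.h>
    #include <sys/stat.h>
    // obexd keeps appending to the temporary file; map the grown file
    // again and let the parser continue at its previous offset.
    class MappedTmpFile {
        int m_fd;
        void *m_mapping;
        size_t m_size;
     public:
        MappedTmpFile(int fd) : m_fd(fd), m_mapping(0), m_size(0) {}
        bool moreData() const {
            struct stat st;
            return fstat(m_fd, &st) == 0 && (size_t)st.st_size > m_size;
        }
        const char *remap() {
            struct stat st;
            if (fstat(m_fd, &st) != 0 || st.st_size == 0) return 0;
            if (m_mapping) munmap(m_mapping, m_size);
            m_size = st.st_size;
            void *m = mmap(0, m_size, PROT_READ, MAP_SHARED, m_fd, 0);
            if (m == MAP_FAILED) { m_mapping = 0; return 0; }
            m_mapping = m;
            return (const char *)m;
        }
    };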
The previous commit "PBAP: fix support for obexd >= 0.47 and < Bluez 5"
made the backend work with obexd 0.48 and broke it with 0.47. That's because
there was another API change between 0.47 and 0.48, which wasn't known
at the time of that commit.
SyncEvolution now works with 0.47 and does not work with 0.48. This choice
was made because 0.47 supports the file-based data transfer (same as in Bluez 5
and thus useful for testing when Bluez 5 is not available) and 0.47 still
compiles against older Bluez versions (which makes it easier to use than 0.48).
Several of the recent changes are meant for exchanging many items.
Test that by running syncs locally; that way we also cover
both sides (client and server).
More recent GNOME Python bindings are provided by gobject
introspection. The traditional gobject/glib modules no
longer exist.
The API is similar enough that we just need to adapt importing: if
importing the normal modules fails, try importing from gi.repository
instead.
EDS 3.8 sets X-EVOLUTION-FILE-AS to "last, first" when a contact is
created without it. This leads again to unnecessary DB updates because
the incoming item that the engine works with doesn't have that field
set.
To mitigate that issue, set FILE_AS (renamed to make the field name
valid in a script) like EDS would do during writing.
The downside is that all incoming items now have FILE_AS set, which
will overwrite a locally stored copy of that property even if the peer
did not store X-EVOLUTION-FILE-AS. Previously, as tested by
testExtension, the local value was preserved. There is no good solution
that works for both use cases, so allow X-EVOLUTION-FILE-AS to get lost
and relax the test.
Traditionally, contacts were modified shortly before writing into EDS
to match with Evolution expectations (must have N, only one CELL TEL,
VOICE flag must be set). During a slow sync, the engine compares the
modified contacts with the unmodified, incoming ones. This led to
mismatches and/or merge operations which end up not changing anything
in the DB because the only difference would be removed again before
writing.
To avoid this, define datatypes which include the EDS modifications in
their <incomingscript>. Then the engine works with an item as it would
be written to EDS and correctly determines that the incoming and DB
items are identical.
Found in testpim.py's testSync when applied to the test cases
generated by the Openismus test case generator.
If the sources are used in a mode which doesn't need revision strings,
then skip the retrieval of items from the EDS daemons when we would
normally do it to get the revision string.
Recording the revision of items during a caching sync is unnecessary
because the next sync will not use the information. For item
modifications, the information was not even recorded.
Now a "don't need changes" mode can be set in SyncSource, which is
done for caching and command line item operations. This is better than
second-guessing the mode in which a source is used.
SyncSourceRevision checks that flag and skips updating the
luid->revision mapping if set.
Individual backends can also check the flag and skip calculating the
revision to begin with.
Only works when compiled for EDS >= 3.6. Uses the new ITEM_AGAIN in
SyncSourceSerialize. Combines multiple add/update operations into one
asynchronous batch operation which can overlap with network IO if
the engine supports it.
To take full advantage of batch operations, we must enable out-of-order
execution of commands. Otherwise the engine will never issue more than
one change to us. Do it for SyncEvolution as peer.
TODO: do it for all peers or disable batched processing when it is not
enabled.
SyncSourceSerialize uses ITEM_AGAIN plus a callback in
InsertItemResult to indicate that and how an insert (aka add or
update) operation must be continued.
TrackingSyncSource merely passes that result through until it finally
completes.
Direct, raw access to items as done by the command line and restore
operations is not done asynchronously. The backend does not need to
know how it is used, the SyncSourceSerialize wrapper around the
backend will flush and wait.
Asynchronous deleting is not supported. It would require throwing an
exception or an API change (better - exceptions are for errors!)
because removeItem() has no return code which could be extended.
The wrapper around the actual operation checks if the operation
returned an error or result code (traditional behavior). If not, it
expects a ContinueOperation instance, remembers it and calls it when
the same operation gets called again for the same item.
For add/insert, "same item" is detected based on the KeyH address,
which must not change. For delete, the item local ID is used.
Pre- and post-signals are called exactly once, before the first call
and after the last call of the item.
ContinueOperation is a simple boost::function pointer for now. The
Synthesis engine itself is not able to force completion of the
operation, it just polls. This can lead to many empty messages with
just an Alert inside, thus triggering the "endless loop" protection,
which aborts the sync.
We overcome this limitation in the SyncEvolution layer above the
Synthesis engine: first, we flush pending operations before starting
network IO. This is a good place to batch together all pending
operations. Second, to overcome the "endless loop" problem, we force
waiting for completion if the last message already was empty. If
that happened, we are done with items and should start sending our
responses.
Binding a function which returns the traditional TSyError still works
because it gets copied transparently into the boost::variant that the
wrapper expects, so no other code in SyncSource or backends needs to
be adapted. Enabling the use of LOCERR_AGAIN in the utility classes
and backends will follow in the next patches.
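A sketch of the wrapper logic with simplified, partly invented types
(the real code works with the Synthesis KeyH and status codes):
    #include <map>
    #include <boost/function.hpp>
    #include <boost/variant.hpp>
    typedef int TSyError;                   // simplified status type
    static const TSyError LOCERR_AGAIN = 1; // value invented for the sketch
    typedef boost::function<TSyError ()> ContinueOperation;
    typedef boost::variant<TSyError, ContinueOperation> InsertResult;
    class OperationWrapper {
        std::map<void *, ContinueOperation> m_pending; // keyed by key address
        boost::function<InsertResult (void *)> m_insert; // real operation
     public:
        TSyError insertItem(void *key) {
            std::map<void *, ContinueOperation>::iterator it = m_pending.find(key);
            if (it != m_pending.end()) {
                TSyError res = it->second(); // continue pending operation
                if (res != LOCERR_AGAIN) {
                    m_pending.erase(it);     // done, stop polling
                }
                return res;
            }
            InsertResult res = m_insert(key);
            if (ContinueOperation *cont = boost::get<ContinueOperation>(&res)) {
                m_pending[key] = *cont;      // not finished yet, poll again
                return LOCERR_AGAIN;
            }
            return boost::get<TSyError>(res); // finished immediately
        }
    };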
Check conditions each time they change as part of view updates. This
would have helped to find the invalid check earlier and also catches
temporary violations of the invariants.
The assertEqual() expected 'quiescence' to be seen, but runUntil() was not
asked to run until that was seen. Therefore tests failed randomly depending on
timing: they passed if the quiescence signal was processed in the same event
loop run as the change that was waited for and failed if it arrived later.
Fixed by waiting for 'quiescence' explicitly. It should arrive last.
When "Abraham" falls out of the view, it is possible that we have temporarily
no contacts in the view, because first "Abraham" gets reviewed and then
"Benjamin" gets added. The check "at least one contact" therefore was too
strict.
So far, a "check" callback was passed to runUntil() which checked the state
after each main loop run and at regular time intervals. The downside was that
an incorrect state change inside a single main loop run (for example, removing
a contact and adding one in testMerge) was not detected.
To increase coverage of state changes, the check is now also invoked after
each method call in the ViewAgent. If a check fails, it gets recorded as an
error in the view and then will get noticed outside of the event loop by the
check that is passed to runUntil().
These checks need to be installed before running the command line which
changes the state, not just as part of runUntil(), because running the command
line also processes glib events.
This insufficient coverage was found when changes in timing caused testMerge
to fail: the previous check for "always exactly one contact" turned out to be
too strict and merely passed when remove + add were processed together. In
practice, the contact's ID changes because folks ties the merged contact to
a different primary store than the unmerged, original contact.
TESTPIM_TEST_SYNC_TESTCASES can be set to the name of a file
containing contacts in vCard 3.0 format with an empty line as
separator. In this mode, testSync() measures the duration of cleaning
the local cache, importing all test cases, matching them against the
cache (which must not write into the cache!) and removing the cached
contacts.
Typically the peer configs get created from scratch, in particular
when testing with testpim.py. In that case the log level cannot be set
in advance and doing it via the D-Bus API is also not supported.
Therefore, for debugging, use SYNCEVOLUTION_LOGLEVEL=<level> to create
peers with a specific log level.
The process name has become essential to distinguish messages
from local sync parent and child, because the parent will include
the child's output in its own log.
We cannot pass a process name to the Synthesis logging mechanism, so
use the logging prefix instead. Relies on a patch to libsynthesis to
enable the logging of that prefix.
The localized DisplayName strings must be removed explicitly. To find them,
list all properties in the main section and then remove if the prefix
matches.
Automatically calls g_strfreev(). Accessing entries in the array must
be done with at(), because overloading the [] operator led to an "operator
ambiguous" warning from gcc on Debian Stable on x86.
CondTimedWaitGLib() would not have returned immediately for
milliSecondsToWait == 0, as expected by the Synthesis engine.
Not sure whether it mattered in practice, though.
Running the sync and/or the host machine became slower, leading to
timeouts and general slowness in the nightly testing. Test with fewer
items to stay within the allotted time.
Escaping the ( in the regular expression cannot be done when setting
it, because running the command will parse and re-generate the command
line. That code must be made aware that ( requires special care.
doFilter() gets extended to take (<full name>, <vcard>) tuples in
addition to the full name alone. Then this is used to create one large
vCard that is suitable for testing (field content unique, all fields
set, etc.) with various filters. All field tests are covered with at
least one positive and one negative case.
The operation is a runtime parameter of different classes, whereas
extracting the right values to compare via the operation is hard-coded
at compile time. This is a rather arbitrary compromise between code
duplication, simplicity and performance (which, in fact, was not
measured at all).
The code for selecting case-sensitivity and the normalization before
the string operations is shared with the older 'any-contains'
operation.
The new TestContacts.testFilterLogic uses the same infrastructure as
the language tests and covers some combinations of 'and' and 'or'.
In some cases, Python lists had to be avoided in favor of tuples,
because Python cannot automatically map the lists to an 'av' array of
variants, while a tuple enumerating its types can be sent and
gets accepted by PIM Manager.
This changes the signature of the filter parameter in Search(),
RefineSearch() and ReplaceSearch() from 'as' (array of strings) to
'av' (array of variants). Allowed entries in the variant are arrays
containing strings and/or other such arrays (recursive!).
A single string as value is not supported, which is the reason why
'av' instead of just plain 'v' makes sense (mismatches can already be
found in the sender, potentially at compile time). It also helps
Python choose the right type when asked to send an empty list. When
just using 'v', Python cannot decide automatically.
Error messages include a backtrace of all terms that the current,
faulty term was included in. That helps to locate the error in a
potentially larger filter.
The scope of supported searches has not changed (yet).
The signature of the new type is just a 'v' for variant. Recursion is
only supported in combination with std::vector. This is done to keep
implementation simpler. std::vector was chosen over other collections
because it is easier to work with in method calls, the main purpose
of the new type.
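A sketch of such a recursive type with boost (names invented):
    #include <string>
    #include <vector>
    #include <boost/variant.hpp>
    // A term is either a string or an array of further terms; used
    // inside a std::vector this corresponds to the D-Bus signature 'av'.
    typedef boost::make_recursive_variant<
        std::string,
        std::vector<boost::recursive_variant_>
    >::type FilterTerm;
    // ['any-contains', 'Joe'] nested inside the outer filter array
    std::vector<FilterTerm> exampleFilter()
    {
        std::vector<FilterTerm> sub;
        sub.push_back(std::string("any-contains"));
        sub.push_back(std::string("Joe"));
        std::vector<FilterTerm> filter;
        filter.push_back(sub);
        return filter;
    }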
The conversion from D-Bus messages accepts anything which can be mapped
into the variant: arrays, tuples, plain types, and of course variants
containing those. This helps with backward compatibility (one can
turn an interface which took a fixed type like std::vector<std::string>
into something with a recursive variant) and with Python, because
Python sends arrays and tuples without converting into variants
when the app uses those simpler types.
One caveat exists with sending an empty list in Python: when using 'v'
as signature in the interface, Python doesn't know how to send the
empty list without assistance by the programmer (as in dbus.Array([],
signature='s')). If possible, avoid the plain 'v' in an interface in
favor of something like 'av' (= std::vector<boost::variant>).
There is a difference in EDS between removing the database definition
from the ESourceRegistry (which makes the data inaccessible via EDS)
and removing the actual database. EDS itself only removes the definition
and leaves the data around to be garbage-collected eventually. This is
not what we want for the PIM Manager API; the API makes a stronger
guarantee that data is really gone.
Fixed by introducing a new mode flag for the deleteDatabase() method
and deleting the directory of the source directly in the EDS backend,
if requested by the caller.
The syncevolution command line tool will use the default mode and thus
keep the data around, while the PIM Manager forces the removal of
data.
Instead of hard-coding a specific "Backend Summary Setup" in
SyncEvolution, copy the config of the system database. That way
special flags (like the desired "Backend Summary Setup" for local
address books) can be set on a system-wide basis and without having to
modify or configure SyncEvolution.
Because EDS has no APIs to clone an ESource or turn a .source file
into a new ESource, SyncEvolution has to resort to manipulating and
creating the keyfile directly.
When an asynchronous method call failed with an exception, the
DBusResult got destructed first before handling the exception and then
sent a generic, uninformative error back to the caller. A test
demonstrating that for the PIM Manager SetPeer() with an invalid UID
will be committed separately.
To fix this, the DBusResult must know whether it is really responsible
for replying to the method message. The method call handlers
themselves do not know whether any of their parameters is a
DBusResult, so we have to activate the automatic reply indirectly, via
a guard instance with the same lifetime as the handler.
When the guard destructs, it can check whether the handler has
replied, because the handler then clears the message pointers. If it
hasn't, the DBusResult becomes the owner of the message. For this to
work, an extra try/catch block is necessary. Otherwise exception
handling would still occur too late, at a time when the guard is gone
and the DBusResult has sent the generic error.
Some other classes had to be adapted to allow for this extra level of
indirection. While doing so, another bug was found.
Retrieving the interface and member used when calling a method
returned the path instead, due to a cut-and-paste error. Fixed. Not
used in practice.
Having to look for XDG config files in two different locations was
confusing. The XDG_DATA_HOME=temp-testpim/local/cache was also wrong,
because "cache" is for XDG_CACHE_HOME.
Now XDG_CONFIG/CACHE/DATA_HOME must be set to
temp-testpim/[config|cache|data], which is consistent with the usage
of private XDG dirs in test-dbus.py, and the directories are shared by
EDS and PIM Manager.
EDS keeps running throughout all tests. This must be taken into
account when cleaning temp-testpim. evolution-source-registry does not
cope with removal of its "sources" directory, so we have to keep that,
although we are removing its content. evolution-addressbook-factory
may continue to have an ESource open although its client *and* the
source are gone (should be fixed in EDS by tracking clients *and*
source removals!).
When that happens and we reuse a DB UUID,
evolution-addressbook-factory continues to use the deleted instance,
leading to IO errors. To avoid that inside a test run, we use
different UID prefixes for each test, by embedding the test name. To
avoid it between test runs, kill evolution-addressbook-factory
first. When killing evolution-source-registry, use -9, because
otherwise it shuts down too slowly, which may lead to test failures
when it talks to the closing instance.
For the system address book we can't change the UID between tests, so
we must not delete it.
It is useful to let tests add (or more likely, overwrite) files to an
XDG directory shared between tests. Those tests must be aware that
there may be other files in the XDG dir from other tests.
Programs expect that XDG dirs exist. We cannot really guarantee that
in the test wrapper scripts, because the entire directory containing
the XDG dirs may get wiped. So (re-)create the dirs shortly before they
are needed, which is by the programs launched via dbus-session.sh.
As with glib before, GeeCollCXX now also makes the parameter for declaring
pointer ownership mandatory. The ensuing code analysis showed that in two
cases involving gee_multi_map_get() the transfer of ownership was not handled
correctly, leading to memory leaks.
Traditionally, glib messages were caught via LogRedirect.
Later, they got redirected by installing a handler. That change
disabled the message filtering for known harmless error messages
in LogRedirect, which caused messages from folks to show up
in TestCmdline D-Bus tests randomly, depending on the timing.
Reestablish the filtering by checking it in the glib log callback.
To minimize changes, keep the error registry in LogRedirect.
It must be thread-safe now.
The D-Bus method reply may occur when the calling instance of
LocalTransportAgent is already gone. Therefore bind to a weak
pointer instead of this.
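The pattern, sketched with simplified names:
    #include <boost/bind.hpp>
    #include <boost/enable_shared_from_this.hpp>
    #include <boost/function.hpp>
    #include <boost/shared_ptr.hpp>
    #include <boost/weak_ptr.hpp>
    class Agent : public boost::enable_shared_from_this<Agent> {
     public:
        void processReply() { /* handle the D-Bus method reply */ }
        // Binding a weak_ptr instead of "this" lets the callback turn
        // into a no-op once the instance is gone.
        boost::function<void ()> makeReplyCallback()
        {
            return boost::bind(&Agent::replyCallback,
                               boost::weak_ptr<Agent>(shared_from_this()));
        }
     private:
        static void replyCallback(const boost::weak_ptr<Agent> &self)
        {
            if (boost::shared_ptr<Agent> agent = self.lock()) {
                agent->processReply();
            } // otherwise: the caller is already gone, ignore the reply
        }
    };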
Seen when testing against Apple Calendar Server in the nightly
testing. It was a bit uncertain from the logs why it happened,
but it is clear that it can happen and thus the patch makes
sense even though it might not address the real problem.
It can happen that the background thread terminates before libsynthesis checks
for termination. In that case our helper code does not get called and thus
cannot log the message that the test was checking for. Instead check for
proper continuation of the file source open.
Explicitly free the engine before releasing our lock on shutdown
signals. This prevents getting killed by a signal while we are still
freeing resources, which can take some time when a background thread
is involved (seen in some tests covering that).
The D-Bus TestHTTP tests occasionally failed because a message was resent
due to rounding issues. Be a bit (= 0.1 seconds) more conservative about
resending to avoid this.
Sometimes an extra "quiescent" signal was seen. It came from the
FolksAggregator before reporting any contacts. Not exactly sure
why that happens, but because it is not the responsibility of this
test to detect that, let's ignore it here.
We have to hard code this instead of always using it because ICU does
not properly fall back to non-phonebook defaults. Without phonebook as
collation, sorting was not done correctly in Germany.
Now also pay attention to case and punctuation. Previously these were
ignored for performance reasons. The reasoning that case doesn't
matter was wrong, it is useful as a tie breaker (which is what the
higher level does).
Full interleaving of Pinyin transliterations of Chinese names with
Western names can be done by doing an explicit Pinyin transliteration
as part of computing the sort keys.
This is done using ICU's Transliteration("Han-Latin"), which we have
to call directly because boost::locale does not expose that API.
We hard-code this behavior for all "zh" languages (as identified by
boost::locale), because by default, ICU would sort Pinyin separately
from Western names when using the "pinyin" collation.
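A sketch of that direct ICU call (error handling trimmed):
    #include <unicode/translit.h>
    #include <unicode/unistr.h>
    #include <unicode/utrans.h>
    #include <string>
    // Transliterate Han characters to Pinyin before computing sort keys,
    // so that Chinese and Western names interleave when sorting.
    std::string pinyinForSortKey(const std::string &utf8)
    {
        UErrorCode status = U_ZERO_ERROR;
        icu::Transliterator *trans =
            icu::Transliterator::createInstance("Han-Latin",
                                                UTRANS_FORWARD, status);
        icu::UnicodeString str = icu::UnicodeString::fromUTF8(utf8);
        if (U_SUCCESS(status) && trans) {
            trans->transliterate(str);
        }
        std::string result;
        str.toUTF8String(result);
        delete trans;
        return result;
    }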
The SyncPeer() result is derived from the sync statistics. To have
them available, the "sync done" signal must include the SyncReport.
Start and end of a sync could already be detected; "modified" signals
while a sync runs depends on a new signal inside the SyncContext when
switching from one cycle to the next and at the end of the last one.
Following the boost::intrusive_ptr example and making "add_ref =
true" the default in our CXX GLib and GObject wrappers led to some
memory leaks because it didn't enforce thinking about whether the
plain pointer is already owned by us or not.
It is better to use a mandatory enum value, ADD_REF and TRANSFER_REF,
and force explicit construction. Doing that revealed that the
assignment operator was implemented as constructing a CXX instance
with increased ref count and/or that in some places, a real leak was
caused by increasing the ref count unnecessarily.
Running under valgrind gave a false sense of security. Some of the
real leaks only showed up randomly in tests.
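A condensed sketch of the idea; the names are simplified compared to
the real wrappers:

    #include <glib-object.h>

    // Callers must state explicitly who owns the reference behind a
    // plain pointer; there is no default, so it cannot be forgotten.
    enum RefOwnership {
        ADD_REF,      // pointer stays owned elsewhere, take an extra ref
        TRANSFER_REF  // take over the reference held by the plain pointer
    };

    template<class T> class GObjectCXX {
        T *m_ptr;
     public:
        GObjectCXX(T *ptr, RefOwnership ownership) : m_ptr(ptr) {
            if (ptr && ownership == ADD_REF) {
                g_object_ref(ptr);
            }
        }
        ~GObjectCXX() { if (m_ptr) g_object_unref(m_ptr); }
    };

A call returning a new reference then reads like
GObjectCXX<GObject> obj(create_foo(), TRANSFER_REF), and leaving out
the second parameter no longer compiles.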
This mimics the behavior of boost smart pointers and is useful for
code where variables can be of either type, because "= NULL" is
not supported by our boost::intrusive_ptr wrapper.
Funambol reacts with a 407 "retry later" very quickly when clients
do slow syncs. Avoid that, because it pretty much prevents getting
any of the tests to run.
Avoid referencing pathProps->second when the set of paths that
PROPFINDs returns is empty. Apparently this can happen in combination
with Calypso.
The stack backtrace sent via email looked like this:
Program received signal SIGSEGV, Segmentation fault.
0x4031a1a0 in std::_Rb_tree<std::string, std::pair<std::string const, std::string>, std::_Select1st<std::pair<std::string const, std::string> >, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >::find(std::string const&) const () from /usr/lib/libsyncevolution.so.0
0x4031a1a0 <_ZNKSt8_Rb_treeISsSt4pairIKSsSsESt10_Select1stIS2_ESt4lessISsESaIS2_EE4findERS1_+60>: ldr r4, [r0, #-12]
(gdb) bt
from /usr/lib/syncevolution/backends/syncdav.so
from /usr/lib/syncevolution/backends/syncdav.so
from /usr/lib/libsyncevolution.so.0
Include the pid of the child process in all messages relating to it,
to help in cases where the same program is started multiple times.
Log something in the parent when destroying the fake API instance and
thus canceling the fake method call from the child, because that will
trigger a "lost connection" message in the child which was hard to
correlate with events in the server without those additional log
messages in the parent.
Use type checking to determine whether a timeout was done via
glib or signals. Without removal of a glib timeout, a periodic
timeout keeps firing in other tests.
Ensure that server is currently waiting for background thread, then
abort via Session.Abort(). Needs to be detected by the backend and
then lead to normal session shutdown.
Sessions created via the Server.Connect() API were announced via
signals, but not activated. Therefore querying their status or
aborting them via D-Bus was not possible. Found while writing the
TestHTTP.testAbortThread D-Bus test.
Running EDS on the server side is a relevant use case and may
have its own share of problems, in particular when the server
runs the EDS code in a background thread. Better test that with
libebook and libecal...
Since the introduction of SuspendFlags, the only remaining user
of the virtual aspect of checkForSuspend/checkForAbort() was the
testing code. By using the suspend/abort request mechanism in
SuspendFlags, it becomes possible to move checkForSuspend/Abort()
into SuspendFlags itself.
This will be useful for using them outside of a SyncContext member.
test-dbus.py now knows how to start syncevo-http-server. Therefore
it can test normal HTTP-based syncing as well as several scenarios
which fail or succeed with a slow server depending on the server's
ability to send SyncML messages while still initializing the storage.
To make the server slow, env variables are checked by the file
backend. It may matter whether open() or listAll() is slow, so test
both.
The tests expecting the 2 minute default must check whether the
feature is enabled at all in the binary that they are testing. If not,
the test cannot run. All other tests work, albeit somewhat unsafely
because they force the engine to run multithreaded when the engine was
compiled without mutex locking of global data structures.
We need to keep the glib event loop running while the main thread
waits for the background thread to finish. Otherwise code using
glib events (like EDS) may get stuck (observed with Sleep()
when using g_timeout_add() and synchronous calls in EDS 3.6).
We also need to watch for abort requests. When aborted, we give up
waiting for the thread and tell the engine to proceed. This will
eventually return control to us, where we can shut down.
When libsynthesis shuts down, it'll wait forever for the background
thread's shutdown. In this case we must wait and hope that the background
thread completes or aborts by itself.
HTTP SyncML clients give up after a certain timeout (SyncEvolution
after RetryDuration = 5 minutes by default, Nokia e51 after 15
minutes) when the server fails to respond.
This can happen with SyncEvolution as server when it uses a slow
storage with many items, for example via WebDAV. In the case of slow
session startup, multithreading is now used to run the storage
initializing in parallel to sending regular "keep-alive" SyncML
replies to the client.
By default, these replies are sent every 2 minutes. This can be
configured with another extension of the SyncMLVersion property:
SyncMLVersion = REQUESTMAXTIME=5m
Other modes do not use multithreading by default, but it can be
enabled by setting REQUESTMAXTIME explicitly. It can be disabled
by setting the time to zero.
The new feature depends on a libsynthesis with multithreading enabled
and glib >= 2.32.0, which is necessary to make SyncEvolution itself
thread-safe. With an older glib, multithreading is disabled, but can
be enabled as a stop-gap measure by setting REQUESTMAXTIME explicitly.
Only one thread may handle events in the default context at any point
in time. If a second thread calls g_main_context_iteration() or
g_main_loop_run(), it blocks until the other thread releases ownership
of the context. In that case, the first thread may wake up because of
an event that the second thread waits for, in which case the second
thread may never wake up. See
https://mail.gnome.org/archives/gtk-list/2013-April/msg00040.html
This means that SyncEvolution can no longer rely on these functions
outside of the main thread. This affects Sleep() and the EDS backend.
As an interim solution, take over permanent ownership of the default
context in the main thread. This prevents fights over the ownership
when the main thread enters and leaves the main loop
repeatedly. Utility code using the main context must check for
ownership first and fall back to some other means when not the owner.
The assumption for the fallback is that the main thread will drive the
event loop, so polling with small delays for the expected status
change (like "view complete" in the EDS backend) is going to succeed
eventually.
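A sketch of that check, assuming a hypothetical waitForCondition()
helper:

    #include <glib.h>
    #include <unistd.h>

    void waitForCondition(bool (*done)(void *), void *data)
    {
        GMainContext *context = g_main_context_default();
        if (g_main_context_is_owner(context)) {
            // main thread: safe to drive the event loop while waiting
            while (!done(data)) {
                g_main_context_iteration(context, TRUE);
            }
        } else {
            // other thread: poll with small delays and rely on the
            // main thread to keep processing events
            while (!done(data)) {
                usleep(100 * 1000);
            }
        }
    }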
A better solution would be to have one thread running the event loop
permanently and push all event handling into that thread. There is C++
utility code for such things in:
http://cxx-gtk-utils.sourceforge.net/2.0/index.html
See in particular the TaskManager class and its
make_task_when()/make_task_compose()/make_task_when_full() functions
for executing asynchronous results via a glib main loop, also the
Future::when() method and a number of other similar things in the
library.
Refactored the code into a new utility base class for use in other
tests.
Replace pipes with temporary files, using the same base name as the
traditional .syncevo.log and .dbus.log. They are numbered (because the
command line might be run multiple times per test) and use .out, .err,
or .outerr as suffix depending on what kind of output they
contain. The advantage is that the output gets recorded permanently.
This helps when waiting for command completion times out: in that case,
the content of the output file(s) gets added to the exception.
The subprocess handle returned by startCmdline() is extended with
information about output redirection that is then used by
finishCmdline(). This makes one parameter of finishCmdline()
redundant.
It means "the remote was initiated", not "we were initiated by the
remote side". Somewhat unfortunate choice of a variable name, but
leave it for now...
Use the monotonic system time. Timespec has sub-second resolution and
by using the monotonic time, we are independent of time corrections.
If retryDuration is smaller than retryInterval, then time out after
retryDuration instead of waiting once for retryInterval and then
checking retryDuration.
When the request timed out *exactly* after retryDuration or the next
resend would happen after the retryDuration, then give up immediately.
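In rough code, with illustrative member names (Timespec is the
sub-second monotonic time type mentioned above):

    // elapsed time since the message was sent, on the monotonic clock
    Timespec elapsed = Timespec::monotonic() - m_sendStart;

    if (elapsed >= m_retryDuration ||
        elapsed + m_retryInterval >= m_retryDuration) {
        // timed out (including exactly at retryDuration), or the next
        // resend would only happen outside the retry window
        giveUp();
    } else {
        waitAndResend(m_retryInterval);   // schedule the next resend
    }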
When the helper shuts down normally after the parent is done with it,
there was a race between handling the signal and the loss of connection,
leading to random "return 1" errors. This showed up in nightly testing
in particular in the test-dbus.py testConfigure.
The loss of connection at that point is not an error, so don't treat it
as one and simply return 0.
During normal operation, the helper's messages were never printed to any
output stream. They only went out via D-Bus. This was surprising when
debugging syncevo-dbus-server and its interaction with syncevo-dbus-helper.
It's better to let the parent print the helper's output if the helper
doesn't do it itself, so that console output is the same with and without
SYNCEVOLUTION_DEBUG.
When a sync in syncevo-dbus-helper was aborted, the process did not
wait in main() for the server to acknowledge the D-Bus method
response. If the process quit too early, the parent did not get the
method response, leading to an "internal error" instead of "aborted"
as result of the sync.
The reason is that the acknowledgement was meant to be done via
SIGTERM, but that had already been sent and thus syncevo-dbus-helper
did not wait. Solved by using SIGURG for the acknowledgement.
In addition, the parent must not close its connection when sending
signals to the child, because the child may still need to send its
sync report and/or log output signals.
D-Bus objects in a process exist independently from a specific D-Bus
connection. They are only identified by their path. When syncevo-dbus-server
forked multiple syncevo-dbus-helper instances, it indirectly (in ForkExec)
created multiple callback objects for the child's "watch" functionality (a
pending method call that the parent never replies to). These method calls were
associated with a random (oldest?) session instead of the current one.
This causes problems once an older session terminates and the callback object
destructs, because then the wrong children get a failure response, which they
treat as "connection lost = parent has terminated". Found while trying to fix
shutdown race conditions.
The solution is to generate a unique counter for each child and communicate
that counter to the child via a new env variable.
It can be useful to use additional signals like SIGURG or SIGIO
(typically not used elsewhere) for simple interprocess
communication. SuspendFlags now supports that by letting the process
catch more than just SIGINT and SIGTERM and recording which signals
were received.
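A self-contained sketch of the mechanism (this is not the actual
SuspendFlags code): one handler records each caught signal in a bitmask
which the main code can poll, making SIGURG or SIGIO usable for simple
IPC:

    #include <csignal>
    #include <cstring>

    static volatile sig_atomic_t receivedSignals;

    extern "C" void recordSignal(int sig)
    {
        receivedSignals |= 1 << sig;
    }

    void catchSignals()
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = recordSignal;
        sigaction(SIGINT, &sa, NULL);
        sigaction(SIGTERM, &sa, NULL);
        sigaction(SIGURG, &sa, NULL); // extra signal, used for IPC
        sigaction(SIGIO, &sa, NULL);  // dito
    }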
activate() calls can happen when already active, for example when
syncevo-dbus-helper instantiates SyncContext, which activates signal
handling for the duration of the sync. This was not handled well
before, leaking previously allocated FDs and not restoring signal
handlers correctly. Because the process quit soon anyway, this did not
matter in practice.
Now the code explicitly checks for this and returns the existing Guard
in all following activate() calls, without changing the signal
handling.
Logging must be thread-safe, because the glib log callback may be
called from arbitrary threads. This becomes more important with EDS
3.8, because it shifts the execution of synchronous calls into
threads.
Thread-safe logging will also be required for running the Synthesis
engine multithreaded, to overlap SyncML client communication with
preparing the sources.
To achieve this, the core Logging module protects its global data with
a recursive mutex. A recursive mutex is used because logging calls
themselves may be recursive, so ensuring single-lock semantics would be
hard.
Ref-counted boost pointers are used to track usage of Logger
instances. This allows removal of an instance from the logging stack
while it may still be in use. Destruction then will be delayed until
the last user of the instance drops it. The instance itself must be
prepared to handle this.
The Logging mutex is available to users of the Logging module. Code
which holds the logging mutex should not lock any other mutex, to
avoid deadlocks. The new code is a bit fuzzy on that, because it calls
other modules (glib, Synthesis engine) while holding the mutex. If
that becomes a problem, then the mutex can be unlocked, at the risk of
leading to reordered log messages in different channels (see
ServerLogger).
Different approaches are used to make all loggers follow the new
rules.
Loggers like the one in the local transport child, which use a parent
logger plus an additional ref-counted class like the D-Bus helper, keep
a weak reference to the helper and lock it before use. If it is gone
already, the second logging part is skipped. This is the recommended
approach.
In cases where introducing ref-counting for the second class would
have been too intrusive (Server and SessionHelper), a fake
boost::shared_ptr without a destructor is used as an intermediate step
towards the recommended approach. To avoid race conditions while the
instance these fake pointers refer to destructs, an explicit
"remove()" method is necessary which must hold the Logging
mutex. Using the potentially removed pointer must do the same. Such
fake ref-counted Loggers cannot be used as parent logger of other
loggers, because then remove() would not be able to drop the last
reference to the fake boost::shared_ptr.
Loggers with fake boost::shared_ptr must keep a strong reference,
because no-one else does. The goal is to turn this into weak
references eventually.
LogDir must protect concurrent access to m_report and the Synthesis
engine.
The LogRedirectLogger assumes that it is still the active logger while
disabling itself. The remove() callback method will always be invoked
before removing a logger from the stack.
The method became obsolete when introducing fork+exec for local
syncing. Before that, the method was used to remove unsafe loggers in
the child's process. Now exec() does that for us.
Changing the process' name, even temporarily, is not thread-safe
and needs to be avoided. Instead pass the additional parameter
explicitly via the MessageOptions container.
Logger::formatLines() and LogStdout::write() are passed the process
name as "const std::string *" pointer, with Logger::getProcessName()
acting as fallback for NULL. Calling that is delayed as long as possible,
to avoid making the string copy when not necessary.
The previous implementation of formatLines() used a std::string
reference, but only used the content to determine whether to include
the current process name - probably not what was intended, but harmless
because either the empty string or the current name were passed.
The number of parameters for messagev has grown unreasonably, and more
parameters will be needed soon (explicit process name). Better store
the parameters in a struct and pass around a pointer to such a struct
instance. The code becomes shorter and new parameters can be passed
through intermediate functions without modifying them.
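Schematically, with invented member names, the change replaces the long
parameter list with one container:

    #include <string>
    #include <cstdarg>

    // All per-message options travel together; new fields (like the
    // explicit process name) can be added without touching the
    // intermediate functions which merely pass the struct through.
    struct MessageOptions {
        int m_level;                       // severity
        const std::string *m_prefix;       // optional prefix, may be NULL
        const char *m_file;                // source location of the message
        int m_line;
        const std::string *m_processName;  // NULL -> use current process name
    };

    void messagev(const MessageOptions &options,
                  const char *format,
                  va_list args);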
Passing an explicit Logger instance to the SE_LOG* macros was hardly
ever used and only made the macros more complex.
The only usage of it was in some backends, which then added a prefix
string automatically. The same effect can be achieved by passing
getDisplayName(). Exception handling no longer needs a Logger instance,
the prefix alone is enough.
The other intended usage, avoiding global variables for logging by
passing a logger known to the caller, was not possible at all.
To make prefix handling more flexible, it is now passed as a "const
std::string *" instead of a "const char *".
The new ThreadSupport.h provides C++ recursive and non-recursive
mutexes for static and dynamic allocation, with RAII for unlocking
them automatically.
The current implementation depends on glib >= 2.32.0. Without it, the
C++ classes are dummies which do no real locking. On systems without
glib or a glib older than 2.32.0 (Debian Etch, Ubuntu Lucid 10.04),
thread support does not work because it depends on the revised glib
threading API from 2.32.0.
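A sketch of the RAII style provided there; the real class names may
differ:

    #include <glib.h>

    class RecMutex {
        GRecMutex m_mutex;
     public:
        RecMutex() { g_rec_mutex_init(&m_mutex); }
        ~RecMutex() { g_rec_mutex_clear(&m_mutex); }
        void lock() { g_rec_mutex_lock(&m_mutex); }
        void unlock() { g_rec_mutex_unlock(&m_mutex); }

        // Unlocks automatically when leaving the scope, also when
        // an exception is thrown.
        class Guard {
            RecMutex &m_rec;
         public:
            Guard(RecMutex &rec) : m_rec(rec) { m_rec.lock(); }
            ~Guard() { m_rec.unlock(); }
        };
    };

    static RecMutex logMutex;

    void logSomething()
    {
        RecMutex::Guard guard(logMutex); // may nest in the same thread
        // ... touch global logging state safely ...
    }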
The call cannot succeed with an empty UID, so there's no point
even trying it. Worse, EDS 3.8 aborts the process when given
an empty UID:
[ERROR 00:00:00] GLib: g_variant_builder_end: assertion `!GVSB(builder)->uniform_item_types || GVSB(builder)->prev_item_type != NULL || g_variant_type_is_definite (GVSB(builder)->type)' failed
[ERROR 00:00:00] GLib: g_variant_get_type: assertion `value != NULL' failed
[ERROR 00:00:00] GLib: g_variant_type_is_subtype_of: assertion `g_variant_type_check (type)' failed
[ERROR 00:00:00] GLib: g_variant_get_type_string: assertion `value != NULL' failed
[ERROR 00:00:00] GLib: g_variant_new: expected GVariant of type `a(ss)' but received value has type `(null)'
See https://bugzilla.gnome.org/show_bug.cgi?id=697705
For interactive debugging it is better to have an infinite timeout of
the D-Bus method call. The overall test will still time out as quickly
as it did before.
When running EDS under valgrindcheck.sh in parallel to running
syncevo-dbus-server under valgrindcheck.sh, then cleaning up after
syncevo-dbus-server must not kill EDS.
To achieve this, look at all log files created for us and extract the
pid from each file.
Also fix sending INT/TERM vs. KILL signals to these processes.
When running test-dbus.py or testpim.py with SYNCEVOLUTION_DEBUG
set, they now skip the redirection of syncevo-dbus-server output
into a file. Useful when doing interactive debugging without running
under a debugger.
When starting a D-Bus session for test-dbus.py and testpim.py, valgrind
gets injected via an env variable and the older check of the program to
be started did not match. Need to check the environment in addition to
the command line, otherwise EDS gets started by the D-Bus daemon and
we never see its output nor does it get checked by valgrind.
The problem has been fixed in the meantime (00f566 "sqlite addressbook: fix
memory corruption in get_revision") and no longer occurs in a release we test
against.
In EDS 3.6.4, opening fails a lot more often with E_CLIENT_ERROR_BUSY.
We must retry until we succeed.
Seen in particular with testpim.py testFilterStartup. Clients are expected to
try the open call again. EDS >= 3.8 does that automatically.
Manually start EDS also when testing with testpim.py, to see the
output of the factory processes and allow running them from places
where the D-Bus daemon would not find them.
The sorting got broken when the number of contacts exceeded the 3
digits reserved via the %03d formatting specifier. Now allow four
digits = up to 9999 contacts.
Binding _done() to self.start reads the *current* value of self.start
when _done() gets called. We need to bind to an immutable local value
instead to achieve the desired effect.
The destructor must be virtual, otherwise destructors in derived
classes are not getting called when destructing via a downcasted
pointer. Should be relevant, but did not show up in valgrind?!
Klocwork complained about not checking the result. Depending on the
situation we now give up or use some kind of fallback (for example,
when logging).
SyncEvolution itself is not multithreaded, but utility libraries
might be, so better avoid the localtime() function and its global
state. May also be relevant when function A gets a tm * pointer,
calls function B which calls localtime() again, then A tries to
use its original value.
When realloc() returns NULL, the already allocated buffer remains
available. Previously we discarded (and leaked!) it. Now we track
whether that memory already contains parts of the next log message
and use that when realloc fails. The datagram will then get discarded
without reading the rest.
When reading from a stream, realloc will only be called once to
get the initial buffer. When that fails, we give up.
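The resulting pattern looks roughly like this (an illustrative helper,
not the actual code):

    #include <cstdlib>

    // Grow the datagram buffer without losing the old one when
    // realloc() fails.
    bool growBuffer(char *&buffer, size_t &size, size_t newSize,
                    bool &overflow)
    {
        char *larger = static_cast<char *>(realloc(buffer, newSize));
        if (larger) {
            buffer = larger;
            size = newSize;
            return true;
        }
        // buffer is still valid and holds the data read so far; mark
        // the datagram as truncated so that the rest gets skipped
        overflow = true;
        return false;
    }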
Klocwork warned about using a NULL starttime pointer in the case that
the transport is unknown. Not sure whether this can actually happen at the
moment; fix it anyway.
All of the jobs were allocated dynamically without freeing them
explicitly, instead relying on auto-deletion (enabled by default).
This behavior is surprising for SyncEvolution developers and Klocwork,
which expects explicit delete calls for each new. Better use
traditional RAII.
The advantage also is that each job gets deleted right away, instead
of waiting for another round of event processing, which may or may not
happen soon.
Qt has never been happy with the old code, reporting:
QDBusConnection: session D-Bus connection created before QCoreApplication. Application may misbehave.
This error message became visible again after some recent logging
changes and broke tests, so better avoid it by testing for D-Bus
with a private connection.
Klocwork correctly reported that the underlying implementation in
libsynthesis must not be called with NULL pointers. Therefore always
allow it to set the return values, even if our caller was not
interested.
Klocwork failed to detect that the result of getenv() was checked.
Keep Klocwork happy by assigning the result to a temporary variable
and checking that.
The old code triggered a Klocwork warning about the missing self check
in the copy operator. This was not a real problem because of the
temporary instance from which m_error got taken, but that was not
obvious even to a human.
Better use standard coding practices:
- skip the operation when assigning to self
- reuse the set() operation
- return reference instead of copy (probably a bug!)
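In standard form (class name hypothetical; m_error is the member
mentioned above, and the details of set() are assumed):

    GErrorCXX &GErrorCXX::operator=(const GErrorCXX &other)
    {
        if (this != &other) {    // skip the operation when assigning to self
            set(other.m_error);  // reuse the existing set() operation
        }
        return *this;            // reference, not a copy
    }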
The code tried to transfer ownership of the GDBusMethodInfo and
GDBusSignalInfo arrays to the refcounted GDBusInterfaceInfo instance,
but that does not work because g_dbus_interface_info_unref() does not
free those arrays.
Let's simplify this so that all memory, including the GDBusInterfaceInfo
instance, is owned by DBusObjectHelper and kept around as long as the
object is active.
The stack-allocated GDBusInterfaceVTable happened to work in practice,
but the API description does not say whether
g_dbus_connection_register_object() makes a copy, so better keep that
around longer, too.
Finally, clean up checking for "is activated" to use only
m_connId. m_activated was redundant and the method and signal
pointers are now always non-NULL.
When running syncevo-dbus-server under valgrindcheck.sh,
the following happened occasionally:
- syncevo-dbus-server main thread quits, some threads keep running
=> ps shows the process as <defunct> with ppid = 1 = init
- valgrindcheck.sh notices that the process is done,
reports status and quits
- test-dbus.py fails to wait for the syncevo-dbus-server
process (because it is not the parent) and assumes that
the process is gone
At this point there is a lingering process which occupies the
well-known D-Bus name (= all further tests fail) and which
prevents unmounting the chroot.
It's unknown how the syncevo-dbus-server gets into that state.
Could be valgrind 3.7.0 or the kernel 3.4.28-2.20-xen.
As a workaround, let test-dbus.py collect the pids of all processes that it
couldn't wait for and send them SIGKILLs until that returns with "not
found".
Without the explicit TYPE=INTERNET, email addresses sent to a Nokia
e51 were not shown by the phone and even got lost eventually (when
syncing back?).
This commit ensures that the type is set for all emails sent to any
Nokia phone, because there may be other phones which need it and
phones which don't, shouldn't mind. This was spot-checked with an N97
mini, which works fine with and without the INTERNET type.
This behavior can be disabled again for specific Nokia phones by
adding a remote rule which sets the addInternetEmail session variable
to FALSE again.
Non-Nokia phones can enable the feature in a similar way, by setting
the variable to TRUE.
Compilation takes a lot longer when using -O2 and the result
is less debugger friendly, so filter out the flag. This is
relevant when building a release where the flag is set for the
rest of the compilation.
Complaining about "ERROR SUMMARY" when using valgrind is a false
positive. Ignore that text by making it lower case before searching. Other
attempts based on regex matching somehow failed (UTF-8 encoding error?!).
When starting the server fails, an exception gets thrown when
trying to determine its pid. This used to abort the whole
test script without recording much information. In particular
the server's output was not included.
Now the exception is caught, recorded as error and testing continues
with the next test.
This did not fix the root cause (a stuck process occupied the
D-Bus name) but at least it helped to identify the problem.
With all the interaction between Client::Sync, Client::Source and
CLIENT_TEST_SERVER it can be hard to figure out which test case file
gets used. Log that explicitly.
Because of Google issue with detached recurrences without parent
(http://code.google.com/p/google-caldav-issues/issues/detail?id=58)
and the SyncEvolution workaround (replacing RECURRENCE-ID with
X-SYNCEVOLUTION-RECURRENCE-ID) only one detached recurrence per UID
can be stored.
Removing the second modified recurrence from the test cases for
Google.
Somehow the second modified recurrence made its way back into the
test data while rewriting the generic eds_event.ics.
Some peers have problems with meta data (CtCap, old Nokia phones) and
the sync mode extensions required for advertising the restart
capability (Oracle Beehive).
Because the problem occurs when SyncEvolution contacts the peers
before it gets the device information from the peer, dynamic rules
based on the peer identifiers cannot be used. Instead the local config
must already disable these extra features in advance.
The "SyncMLVersion" property gets extended for this. Instead of just
"SyncMLVersion = 1.0" (as before) it now becomes possible to say
"SyncMLVersion = 1.0, noctcap, norestart".
"noctcap" disables sending CtCap. "norestart" disables the sync mode
extensions and thus doing multiple sync cycles in the same session
(used between SyncEvolution instances in some cases to get client and
server into sync in one session).
Both keywords are case-insensitive. There's no error checking for
typos, so beware!
The "SyncMLVersion" property was chosen because it was already in use
for configuring SyncML compatibility aspects and adding a new property
would have been harder.
In the previous attempt (commit a0375e) setting the <syncmodeextensions>
option was only done on the client side, thus breaking multi-cycle
syncing with SyncEvolution as server. On the server side the option
shouldn't be needed (the server never sends the extensions unless
the client did first), but for symmetry and testing reasons it makes
sense to also offer the options there.
Previously, the database field was interpreted as a Collection ID. This adds
logic to allow the database to be interpreted as a folder path. The logic is:
1) If the database is an empty string, pass it through (this is the most
common case as it is interpreted as "use the default folder for the
source type").
2) If the database matches a Collection ID, use the ID (this is the same as
the previous behaviour).
3) If the database matches a folder path name, with an optional leading "/",
use the Collection ID for the matching folder.
4) Otherwise, force a FolderSync to get the latest folder changes from the
server and repeat steps 2 and 3
5) If still no match, throw an error.
Steps 2 and 3 are in the new function lookupFolder. The remaining logic has
been added to the open function. Note that the result is that m_folder (and
hence getFolder()) are always either empty or a Collection ID -- that is as
before so the sync logic itself is unchanged.
The command line swallowed errors thrown by the backend while listing
databases. Instead it just showed "<backend name>: backend failed". The goal
was to not distract users who accidentally access a non-functional backend.
But the result is that operations like --configure or --print-databases could
fail without giving the user any hint about the root cause of the issue.
Now the error explanation in all its gory details is included.
For example, not having activesyncd running leads to:
[INFO] eas_contact: backend failed: fetching folder list:
GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name
org.meego.activesyncd was not provided by any .service files
And running activesyncd without the necessary gconf keys shows up as:
[INFO] eas_contact: backend failed: fetching folder list:
GDBus.Error:org.meego.activesyncd.Error.AccountNotFound: Failed to find
account [syncevolution@lists.intel.com]
Both of these happened when trying to use the "list database" ActiveSync
patches in the nightly test setup...
A new method, findCollections, fetches the folder list from the server and
creates two maps:
m_collections - store all the information about each collection (folder),
indexed by server collection ID
m_folderPaths - map full folder paths to collection IDs
getDatabases uses this data to return the folder path, collection ID and a
flag indicating if the folder is the default for that type.
Note 1: getDatabases always asks activesyncd to update the folder list from the
server in order to return up to date information.
Note 2: this depends on a new libeasclient routine:
eas_sync_handler_get_folder_list
A quick-and-dirty solution for enabling phone number summaries when
creating contact databases in the PIM Manager: let the EDS backend
recognize the special UIDs used by the PIM Manager and then hard-code
the minimal set of summary fields and indexed fields which allow
executing the E_CONTACT_TEL, E_BOOK_QUERY_EQUALS_NATIONAL_PHONE_NUMBER
query quickly.
A proper solution would use a new EDS function for parsing ESource
defaults from a string and then providing these defaults to the
backend from the PIM Manager.
Also note that configuring the EDS system address book must be covered
elsewhere, because it wouldn't be correct for SyncEvolution, as only
one of many clients, to change the configuration of that.
To enable the special support, add the following section to
share/evolution-data-server-3.6/rw-sources/system-address-book.source:
[Backend Summary Setup]
SummaryFields=phone
IndexedFields=phone,phone
This patch adds new function calls to code shared by syncecal and syncebook,
so we have to add libebook-contacts to both to avoid link errors.
This reverts commit 2735273ec60b289c5ec2c49f3eacb9d7d04d5ea1.
With this patch, setting up ActiveSync fails with Google as server.
Needs further investigation.
Also note the explicit g_object_unref() and EASFolderUnref - these
should get replaced with SE_GOBJECT_TYPE smart pointers.
This reverts commit 7327b23a4dd31abdc9596916743892402bcffe0c.
Depends on 273527 "ActiveSync: added getDatabases support for fetching
folder list" which has to be reverted.
See https://bugzilla.gnome.org/show_bug.cgi?id=694730
==18552== 25 bytes in 1 blocks are definitely lost in loss record 2,353 of 6,616
==18552==    at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==18552==    by 0xADC3D40: g_malloc (gmem.c:159)
==18552==    by 0xADDA3BB: g_strdup (gstrfuncs.c:364)
==18552==    by 0x1146400B: get_string_cb (e-book-backend-sqlitedb.c:255)
==18552==    by 0x854F1CE: sqlite3_exec (in /usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6)
==18552==    by 0x114665A2: book_backend_sql_exec_real (e-book-backend-sqlitedb.c:301)
==18552==    by 0x1146666F: book_backend_sql_exec (e-book-backend-sqlitedb.c:391)
==18552==    by 0x11469FC6: e_book_backend_sqlitedb_get_revision (e-book-backend-sqlitedb.c:3995)
==18552==    by 0x1F90480E: e_book_backend_file_open (e-book-backend-file.c:656)
==18552==    by 0x1146C229: book_backend_open (e-book-backend-sync.c:397)
==18552==    by 0x11473F26: operation_thread (e-data-book.c:292)
==18552==    by 0xADE2181: g_thread_pool_thread_proxy (gthreadpool.c:309)
==18552==    by 0xADE1964: g_thread_proxy (gthread.c:797)
==18552==    by 0x8E34B4F: start_thread (pthread_create.c:304)
==18552==    by 0xB90F70C: clone (clone.S:112)
The new ReplaceSearch is more flexible than RefineSearch. It can
handle both tightening the search and relaxing it. The downside of it
is the more expensive implementation (must check all contacts again,
then find minimal set of change signals to update view).
Previously, a search which had no filter set at all at the beginning
could not be refined. This limitation of the implementation gets
removed by always using a FilteredView, even if the initial filter is
empty.
When the parent of a FilteredView emits signals, it is not necessarily
always in a consistent state and the FilteredView must not invoke
methods in the parent. Stressing the FilteredView by using it in tests
originally written for the FullView showed that filling up a view
violated that rule and led to contacts being present multiple
times. Fixed by delaying the reading from the parent into either an
idle callback or the parent's quiescence signal processing, whichever
comes first.
Added priorities and proper tracking of "activated" state. The latter
depends on having the Timeout instance valid when the callback
returns, therefore the Server must delay the deletion of the instance
until the callback is done executing.
Priorities are needed to order different tasks in the PIM manager
correctly.
The "activated" state is something that the PIM manager needs to track
reliably, which had to be done by every callback. If a callback forgot
to do that, in the worst case this might have removed an entirely
unrelated source when the tag got reused in the meantime.
Previously, the database field was interpreted as a Collection ID. This adds
logic to allow the database to be interpreted as a folder path. The logic is:
1) If the database is an empty string, pass it through (this is the most
common case as it is interpreted as "use the default folder for the
source type").
2) If the database matches a Collection ID, use the ID (this is the same as
the previous behaviour).
3) If the database matches a folder path name, with an optional leading "/",
use the Collection ID for the matching folder.
4) Otherwise, force a FolderSync to get the latest folder changes from the
server and repeat steps 2 and 3
5) If still no match, throw an error.
Steps 2 and 3 are in the new function lookupFolder. The remaining logic has
been added to the open function. Note that the result is that m_folder (and
hence getFolder()) are always either empty or a Collection ID -- that is as
before so the sync logic itself is unchanged.
A new method, findCollections, fetches the folder list from the server and
creates two maps:
m_collections - store all the information about each collection (folder),
indexed by server collection ID
m_folderPaths - map full folder paths to collection IDs
getDatabases uses this data to return the folder path, collection ID and a
flag indicating if the folder is the default for that type.
Note 1: getDatabases always asks activesyncd to update the folder list from the
server in order to return up to date information.
Note 2: this depends on a new libeasclient routine:
eas_sync_handler_get_folder_list
If phone number search is possible, then the direct search in EDS now
uses the more accurate E_BOOK_QUERY_EQUALS_NATIONAL_PHONE_NUMBER
comparison, with the E164 formatted caller ID as value to compare
against. That value includes the current country code.
For testing purposes, setting SYNCEVOLUTION_PIM_EDS_SUBSTRING forces
the usage of the traditional suffix match. This is used to test both
the old and new flavor of testFilterStartupRefine.
FilterStartupRefineSmart is the one which uses phone number matching.
When available, the pre-computed E164 number from EDS will be used
instead of doing one libphonenumber parser run for each telephone number
while reading. Benchmarking showed that this parsing was the number
one hotspot, so this is a considerable improvement.
Google became even more strict about checking REV. Tests which
reused a UID after deleting the original item started to fail sometime
since middle of December 2012.
To fix this, tests must differentiate reuse of an item by adding a suffix
("-A", "-B", etc.). If CLIENT_TEST_UNIQUE_UID has a value >= 2, that suffix
will be used when mangling the input item. Otherwise the suffix is ignored
and nothing changes.
For testing Google, CLIENT_TEST_UNIQUE_UID=2 is used.
The backends had SYNCEVOLUTION_LIBS in their _DEPENDENCIES entries,
which is wrong because SYNCEVOLUTION_LIBS must include -lrt (which
can't be a dependency).
Fixed by depending on libsyncevolution.la directly.
The library might be used, for example when the PIM Manager is part of
the server, so better add it. Avoids hard linker errors on some
systems when it is used and not specified, which is better than adding
one extraneous library that --as-needed can take care of.
The code which removed a photo from a contact did it by creating a
g_file_icon() with a NULL GFile. This is invalid and caused warnings
from glib. Eventually it must have passed NULL to folks, which is what
the new code does directly.
Don't let libphonenumber write to stdout. Instead redirect into
SyncEvolution logging and manage severity level. For example,
previously parsing errors of phone numbers were logged as [ERROR] by
libphonenumber. Now that appears as "phonenumber error: ...".
The previous macros assumed to be used inside the SyncEvo namespace
and had problems when used in a context where "Logger" was resolved to
something other than SyncEvo::Logger. Making that more explicit solved
that problem.
Adds the possibility to check the server's standard output
similar to its D-Bus log output. Both can now also be set
before invoking runTest() because that method no longer
sets the members.
Use e_book_client_connect_direct_sync(), the official API, when
available. Support for e_book_client_new_direct() is still in the
code; it can be removed once the 3.6 openismus-work branch adopts the
official API.
The new Bluez 5 API is the third supported API for doing PBAP
transfers. It gets checked first, then the PBAP backend falls back to
new-style obexd (file based, similar to Bluez 5, but not quite the
same) and finally old-style obexd (data transfer via D-Bus).
In contrast to previous APIs, Bluez 5 does not report the reason for a
failed PBAP transfer. SyncEvolution then throws a generic "transfer
failed" error with "reason unknown" as message.
When using more than 10 contacts (TESTPIM_TEST_ACTIVE_NUM), logging
and polling are tuned towards causing less work in testpim.py itself.
TESTPIM_TEST_ACTIVE_RESPONSE can be used to give a maximum response
time in seconds, which gets checked by watchdog by calling
GetAllPeers().
This is relevant in two cases, data loaded before folks starts reading
from EDS (TESTPIM_TEST_ACTIVE_LOAD set) and folks getting change
notifications as SyncEvolution imports data into EDS (not set).
Related to FDO #60851. Currently SyncEvolution+Folks do not pass the
performance test.
The server must remain responsive at all times. The new Watchdog
class tests that by calling GetAllPeers() at regular intervals. It
can only be used in tests which do not block the event processing
in the Python script itself.
Once the watchdog runs, it must be removed when a test stops. The new
cleanup test member can be used to add cleanup code at runtime which
will be called when tearing down the test.
The D-Bus server has to respond to the entire request at once, which
makes it unresponsive. Needs to be avoided in the client, because
doing it on the server side is too complicated. The server would have
to start gathering results, but those results may become invalid again
in response to processing events before it can send them out.
Related to BGO #60851.
The \n in the testRead vCard was interpreted by Python instead of
being written into the file. Must mark the string as raw. Did not
affect the test itself, but the libebook vCard parser reported
warnings.
Track which changes need to be stored and how that gets processed. Was
useful for finding that modifying groups failed when the groups hadn't
actually changed.
Setting TESTPIM_TEST_ACTIVE_NUM to a value larger than 10 enables a
mode with less logging and more efficient event handling in
runUntil. Useful for performance testing, because otherwise testpim.py
itself consumes considerable CPU cycles.
Log messages now contain time stamps. A NullLogging class mimics the
Logging interface and can be used instead of that to suppress logging.
As a side effect of turning the log() method into a wrapper, the D-Bus
signal is now called "log2", which makes it possible to search for it
case-insensitively in emacs without finding the LogOutput signal.
The dbus-monitor output can be very large. Handle that a bit better by
compressing the file with a gzip pipe. Experimental and a bit broken:
output is not flushed properly when killing dbus-monitor + gzip.
A different approach is taken when runUntil() is called with
may_block=True: it lets the main loop run for 0.5 seconds and then
returns to do the status checking. No logging is done for each check.
This is meant for long-running operations where the 0.5 second latency
doesn't matter and too frequent checking and logging cause overhead.
The default code path checks every 0.1 second and recognizes
progress (defined as "there was an event to be processed"), which then
causes debug logging about waiting.
Mark pending reads by adding the start time. Avoid requesting data
again which is still pending. Relevant when dealing with many contacts
in testActive, helps performance by avoiding redundant operations.
Sending DEBUG messages via the LogOutput signal is expensive and often
not necessary. Ideally LogOutput should only send data that someone is
interested in. The problem is that the SyncEvolution D-Bus API has no
way of specifying that and some clients, like dbus-monitor, would not be
able to use it.
Therefore this commit makes the default level of detail configurable
when syncevo-dbus-server is started, via a separate --dbus-verbosity
option.
This gets applied to output generated from syncevo-dbus-server itself
as well as output from sync sessions.
test-dbus.py exposes that via a new TEST_DBUS_QUIET option. The
default is to include DEBUG output. When TEST_DBUS_QUIET is set to a
non-empty string, only ERROR and INFO messages are sent.
That SetPeer() allows modifying and creating a config leads to race
conditions when multiple clients want to create a config. The new
CreateConfig() avoids that by atomically checking that a config does
not exist yet and creating it.
SetPeer() is still available for backwards compatibility. It continues
to be used for modifying an existing config in TestContacts.testSync
to check the effect of the logging settings.
Exposed as "location" -> (lat, long) in the D-Bus bindings.
Reading, writing and updating are supported. Depends on a folks
release which has the patch which adds FolksLocationDetails.
Default EDS test data now includes GEO. Several peers do not support
it, which gets ignored by synccompare.
Was lost earlier during syncing. Must be defined in field list and
vCard profile. Still not supported by PIM Manager, because folks doesn't
support it (see FDO #60373).
Constructing the GValues created additional references instead
of taking over ownership as intended. For refcounted GObject,
the solution is to pass the instance and let GValue take the
reference.
For other cases, ownership of a new instance is transferred to the
GValue using the new constructors with a boolean flag.
The existing constructors with native data pointers
always copied data or increased the reference counter.
It is useful to store the result of creation methods
directly in a GValue, so allow that with a second
variation of the constructors which explicitly take
an "addRef" parameter.
The test must be activated by compiling manager.cpp with
PIM_MANAGER_TEST_THREADING and then runs start() in a new
thread which gets created during startup.
The first attempt was buggy: the idle callback is always active and
prevents sleeping => 100% CPU load. It also seems to prevent proper
shutdown somehow.
This version is much simpler. It removes the task queue and the
manager's idle callback. Instead the non-main thread adds an idle
callback itself which gets run once. To have the main thread notice
the new idle callback, g_main_context_wakeup() is used, as before.
The reason for not using this approach before was that the glib
documentation only mentioned GAsyncQueue as one of its structures
which are thread-safe. A look at the code shows that GMainContext and
GMainLoop are also thread-safe.
This adds the infrastructure for shifting the work of the D-Bus
methods into the main thread if called by another thread, which may
happen when we add other bindings for them.
Work is shifted to the main thread via a GAsyncQueue +
g_main_context_wakeup(). The calling thread then blocks waiting on a
condition variable until signaled by the main thread. Results are
stored for the calling thread as part of the operation that it
provides. Exceptions in the main thread are caught+serialized as
string and deserialized+rethrown in the calling thread - a bit crude,
but should work and reuses existing code.
Google CalDAV currently sends invalid XML back when asked to include
CardDAV properties in a PROPFIND. This gets rejected in the XML
parser, which prevents syncing calendar data:
Neon error code 1: XML parse error at line 55: undeclared namespace prefix
The incorrect XML is this:
<D:propstat>
<D:status>HTTP/1.1 404 Not Found</D:status>
<D:prop>
...
<caldav:max-attendees-per-instance/>
<ns1:addressbook-home-set xmlns:ns1="urn:ietf:params:xml:ns:carddav"/>
==> <ns1:principal-address/>
<ns1:addressbook-description/>
<ns1:supported-address-data/>
<ns1:max-resource-size/>
</D:prop>
</D:propstat>
This was introduced on the server side sometime after December 12th
2012 (tests run at that time showed a different response) and does not
affect SyncEvolution 1.2 because it did not yet ask for CardDAV
properties.
The workaround on the client side is to ask for only the properties
which are really needed.
Don't try to list non-existent directories; that causes an uncaught
Python exception. Happens when using testpim.py for the first time
in a clean directory.
The error message for an unexpected slow sync still mentioned
the old and obsolete "refresh-from-client/server" sync modes.
Better mention "refresh-from-local/remote".
This reverts commit 3a71a2bf53 and
commit a0375e0160.
Need to back out the workaround for broken peers because it breaks
SyncEvolution<->SyncEvolution syncing.
Allow the runtest.py caller to choose which kind of distcheck
is done. When "dist" has parameters, then those are used
as distcheck configure flags instead of trying out a set
of default ones.
When compiled with PIM Manager, syncevo-dbus-server still should
shut down automatically when idle. Only when the PIM Manager was
started and has (or will have) a unified address book does the
process keep running.
gcc warns about a "long int" vs. "unsigned int" mismatch when
compiling for x86. All of that is in debugging output printfs.
Cast the index_t into long int to avoid that.
Some gcc versions did not like the usage of a smart pointer in a
boolean and expression (something about ambiguous conversion). Fixed
by making the conversion explicit, as NULL check.
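The fix has this shape (names invented for illustration):

    #include <boost/shared_ptr.hpp>

    struct SyncSource;
    boost::shared_ptr<SyncSource> m_source;
    bool m_enabled;

    // before (ambiguous conversion for some gcc versions):
    //     bool use = m_source && m_enabled;
    // after (explicit NULL check):
    bool use = m_source.get() != NULL && m_enabled;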
Check for e_book_client_new_direct() in configure and use it if
found. This enables direct reading from the EDS file backend in the
direct searching of EDS address books.
Useful for moving the session directories to a temporary file system.
They are essentially only useful for debugging when used as part of
the PIM Manager.
- "logdir" - a directory in which directories are created with
debug information about sync session
- "maxsessions" - number of sessions that are allowed to exist
after a sync (>= 0): 0 is special and means unlimited,
1 for just the latest, etc.;
old sessions are pruned heuristically (for example,
keep sessions where something changed instead of
some where nothing changed), so there is no hard
guarantee that the last n sessions are present.
Avoid writing config file changes to disk by enabling a new
"ephemeral" mode for syncing via the PIM Manager. This mode is enabled
via the sync mode parameter of the D-Bus API and from there passed
through to the SyncConfig and local sync helper. It cannot be enabled
in the config files (yet?).
In this mode, config file changes are not flushed and instead get discarded
directly in the config nodes. This prevents writing to .ini files in
~/.config.
The "synthesis" binfile client files are still written, but they get
redirected into the session directory, which can (and should) be set
to a temp file system and get deleted again quickly.
Data dumps are turned off now in the configs created by the PIM
Manager.
As mentioned in FDO #56334, automatically starting folks
to get a unified address book together as soon as possible
makes sense. A new command line option now allows that:
-p, --start-pim
Activate the PIM Manager (= unified address book) immediately.
The D-Bus API was locked into the Manager class as private methods.
Make them public so that CreateContactManager() can call start(); the
other methods are public now for the sake of consistency and because
they might be useful elsewhere.
The new default is to log error messages to syslog.
Command line options can increase (or reduce) verbosity
and choose whether stdout and/or syslog shall be active:
-d, --duration=seconds/'unlimited'
Shut down automatically when idle for this duration
-v, --verbosity=level
Choose amount of output, 0 = no output, 1 = errors,
2 = info, 3 = debug; default is 1.
-o, --stdout
Enable printing to stdout (result of operations) and
stderr (errors/info/debug).
-s, --no-syslog
Disable printing to syslog.
To get the same behavior as before when debugging via the Python
scripts, "--no-syslog --stdout --verbosity=3" is passed to
syncevo-dbus-server when started via Python.
m_startTime and m_processName were not used, can be removed. Instead
add tracking of the parent logger and forward messages to that, to
allow combining LoggerSyslog with some other logger (for example,
stdout or redirection).
Together with storing the sort order persistently, this allows
restarting the daemon and have it create the same unified address book
again.
The set is stored as comma, space or tab separated list of words in
the value of the "active" property in the
~/.config/syncevolution/pim-manager.ini file.
Switch to the new EDSRegistryLoader from libsyncevolution. This will
allow sharing the ESourceRegistry with the EDS backend.
We use the asynchronous loading method, to avoid blocking the
server. This requires turning the search() method into an asynchronous
method. The actual code runs once the ESourceRegistry is available,
whether it is needed or not (keeps the logic simpler and minimizes
code changes).
To avoid reindenting the old code, the try/catch block and error
checking for the callback was added in a separate intermediate
function which then calls the old code.
Add a class, EDSRegistryLoader, which owns the creation of
ESourceRegistry and notifies other components synchronously or
asynchronously when the instance is ready.
Using the UID as part of file names gets more problematic when
allowing colons. Remove that character from the API and enforce
the format in the source code.
Bypass folks while it still loads contacts and search for a phone
number directly in EDS. This is necessary to ensure prompt responses
for the caller ID lookup.
Done with a StreamingView which translates EContacts into
FolksIndividuals with the help of folks-eds = edsf.
Combining these intermediate results and switching to the final
results is done by a new MergeView class.
A quiescence signal is emitted after receiving the EDS results and
after folks is done. The first signal will be skipped when folks
finishes first. The second signal will always be sent, even if
switching to folks did not change anything.
Together with an artificial delay before folks is considered done,
that approach makes testing more reliable.
The PIM code and the EDS backend both need access to common classes
(like the smart pointers) and eventually will share one instance of
the ESourceRegistry.
Log "quiescence" signal. Avoid starting the initial search if "search
== None". Both will be needed for startup search testing.
While at it, use the "quiescence" signal to continue after the initial
search. Much faster than always waiting 5 seconds.
Move IndividualView, FullView and FilteredView into separate source
files. They are large enough to justify that and it'll avoid header
file dependency cycles.
Introduce a new View base class, which contains the "start"
functionality and a new name for debug messages.
The new StreamingView is based on it and will be used for reporting
results in an EDS query.
Avoid passing NULL IndividualCompare pointer to views. Instead make
the CompareFormattedName class available as the default compare
method.
While it is likely that there is a GError when adding a persona fails,
better check for it explicitly. Otherwise the code would crash with a
NULL persona pointer.
Some finish functions must return more values than just the return
value of the finish function. They do that via return
parameters. Support that by switching first on the arity of the
finish function and then on the types of the parameters and return
value.
Up to five parameters for the finish function are now supported. To
minimize the number of different combinations, only two variations
are supported once return values are involved:
- non-void return value, GError * as last parameter
- non-void return value, no GError
The special semantic of GError is preserved (owned by callback
wrapper). All other results must be freed by the C++ callback.
To have a sane ordering of results, return value and GError were
swapped, leading to the following ordering:
- return value of the _finish call, if non-void
- all return parameters of the _finish call, in the order
in which they appear there
- a GError is passed as "const GError *", with NULL if the
_finish function did not set an error; it does not have to
be freed.
This affects only one place, the handling of
folks_persona_store_add_persona_from_details in the PIM module.
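That finish function returns a FolksPersona and can set a GError, so
the matching C++ callback, following the ordering above, looks like
this (hypothetical illustration):

    // Return value first, GError last. The GError stays owned by the
    // callback wrapper; the returned reference must be released here.
    void addedPersona(FolksPersona *persona, const GError *gerror)
    {
        if (gerror) {
            return;            // persona is NULL in the error case
        }
        // ... use persona ...
        g_object_unref(persona);
    }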
A 'limit' search term with a number as parameter (formatted as string)
can be added to a 'phone' or 'any-contains' search term to truncate the
search results after a certain number of contacts. Example:
Search([['any-contains', 'Joe'], ['limit', '10']])
=> return the first 10 Joes.
As with any other search, the resulting view will be updated if
contact data changes.
The limit must not be changed in a RefineSearch(). A 'limit' term may
(but doesn't have to) be given. If it is given, its value must match
the value set when creating the search. This limitation simplifies the
implementation and its testing. The limitation could be removed if
there is sufficient demand.
Fix the order of changes to internal data structures and emitting
change signals so that the changes are completed first. That way, if
the change signals are hooked up to classes which call the view back,
those calls can be processed correctly.
The code for removing a contact did this incorrectly: it skipped
updating the indices when the removed contact wasn't included in the
filtered view, although indices in the parent changed.
Due to not mapping the local index in the view to the parent's index,
refining only worked in views where parent and child had the same
index for the contacts in the search view.
Make the tests more complex to trigger this bug and fix it by adding
the required indirection.
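Schematically, the added indirection is a per-entry record of the
parent index (all names invented):

    // Each contact exposed by the filtered view remembers its index
    // in the parent view.
    std::vector<int> m_local2parent;   // local index -> parent index

    void FilteredView::refine(const Filter &filter)
    {
        // walk backwards so removals do not shift pending indices
        for (int local = (int)m_local2parent.size() - 1; local >= 0; --local) {
            const IndividualData &data =
                m_parent.getContact(m_local2parent[local]);
            if (!filter.matches(data)) {
                removeLocal(local);   // also erases m_local2parent[local]
            }
        }
    }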
Also include the system address book in testing. May matter because
folks considers it the primary address book and thus has special
cases involving it.
This test creates three address books with (by default) 100 contacts
each and then selects different permutations of them. This found a bug
in folks where disabling address books had no effect (see
https://bugzilla.gnome.org/show_bug.cgi?id=689146, patch is pending
review).
Looking up by phone number spends most of its cycles in normalizing
the phone numbers in the unified address book. Instead of doing that
work over and over again during the search, do it once while loading.
This has implications for several classes:
- IndividualData must be extended to hold the pre-computed values;
what kind of data can be stored there is currently hard-coded
in the LocaleFactory base class, to keep it simple and minimize
memory fragmentation.
- Internal contact changes and access must be based on IndividualData,
not FolksIndividual.
- FullView must know which LocaleFactory will pre-compute the data.
- Changing contacts may have no effect on sorting, but may lead
to different pre-computed data.
Looking up a phone number only once does not gain from this change, it
even gets slower (more memory intensive, less cache locality). Only
searching multiple times becomes faster.
Ultimately it would be best to store the normalized strings together
with the telephone number inside EDS when the contact gets
created.
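A minimal sketch of the pre-computation, assuming Google's
libphonenumber C++ API (the helper name and the choice of E164 are
illustrative):
  #include <phonenumbers/phonenumberutil.h>
  #include <string>
  using i18n::phonenumbers::PhoneNumber;
  using i18n::phonenumbers::PhoneNumberUtil;
  // normalize once while loading a contact, not once per search
  std::string normalizedTel(const std::string &tel, const std::string &region)
  {
      const PhoneNumberUtil *util = PhoneNumberUtil::GetInstance();
      PhoneNumber number;
      if (util->Parse(tel, region, &number) == PhoneNumberUtil::NO_PARSING_ERROR) {
          std::string e164;
          util->Format(number, PhoneNumberUtil::E164, &e164);
          return e164; // stored in IndividualData for all later searches
      }
      return ""; // unparsable numbers are not indexed
  }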
Caching normalized telephone numbers will lead to two special cases:
- contact changed directly in EDS
- contact changed through folks
Cover both cases with extensions to testFilterLive and
testContactWrite.
The unit tests did not work because creating FolksIndividual
instances with attributes is not supported unless they come from
a specific folks backend.
Removing them in preparation of changing the API of
IndividualAggregator (pass LocaleFactory for pre-computing values).
folks and EDS do not support writing properties in parallel
(https://bugzilla.gnome.org/show_bug.cgi?id=652659). Must serialize
setting of modified properties.
Once that is done, we can also write photos ("is modified" check was
missing, now done with some code refactoring) and notes (which somehow
always end up being treated as "modified").
Set empty values if empty in D-Bus and non-empty in folks. Without
that step, properties that were removed via D-Bus would not get
removed inside folks.
We really must create empty sets. A NULL pointer is not accepted by
folks. Somehow it leads to internal "timeout" errors inside folks.
It can happen that Folks has non-empty roles details, but the
contained roles do not contain values that SyncEvolution exposes via
D-Bus. When checking whether "roles" must be added to the top-level
dictionary, the "non default" check must do a deep dive into the
content of the set, instead of just checking for non-empty (as for
other sets).
The problem is in the C++ D-Bus binding. If the method that gets bound
to D-Bus returns a value, that value is ignored in the signature:
int foo() => no out parameter
It works when the method is declared as having a retval:
void foo (int &result) => integer out parameter
This problem exists for both the libdbus and the GIO D-Bus
bindings. In SyncEvolution it affects methods like GetVersions().
The fix consists of always returning a type for the R return type in
the libdbus bindings. getReturn() returned "" because the type was not
a retval. In the GIO binding, the R type was not handled at all. Also
move some common code into a helper function.
This fixes the hotspot during populating the FullView content: moving
contacts around required copying IndividualData and thus copying
complex C++ structs and strings. Storing pointers and moving those
avoids that, with no lack of convenience thanks to boost::ptr_vector.
Reordering also becomes faster, because the intermediate copy only
needs to be of the pointers instead of the full content.
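A minimal sketch of the storage change, assuming boost::ptr_vector
(the struct is reduced to a placeholder):
  #include <boost/ptr_container/ptr_vector.hpp>
  #include <string>
  struct IndividualData {
      std::string m_fullName; // stands in for the expensive-to-copy members
  };
  void example()
  {
      boost::ptr_vector<IndividualData> view;
      view.push_back(new IndividualData); // the vector owns the heap instance
      // reordering moves only pointers; no IndividualData is ever copied
  }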
The new "checkpoints" split up the whole script run into pieces which
are timed separately, with duration printed to stdout. In addition,
tools like "perf" can be started for the duration of one phase.
Benchmarking is different from permanently reading and monitoring
changes. --read-all enables a single read operation once the initial
set of new contact IDs are known.
The signal was not sent reliably. Now it is guaranteed to be sent once
when a search has finished sending its initial results, and not
sooner.
This makes it possible to rely on the "Quiescence" signal when
checking whether the current data contains some contact or not.
When the unified address book (= FullView) was not running yet at the
time when a client wanted to search it, the unified address book was
not started and thus the search never returned results.
Will be tested as part of the upcoming testFilterQuiescence.
Running at full verbosity is much too slow when there are many
contacts. Therefore adding:
--verbosity=VERBOSITY
Determine what is printed. 0 = only minimal progress
messages. 1 = also summary of view notifications. 2 =
also a one-line entry per contact. 3 = also a dump of
the contact. 4 = also debug output for the script
itself.
The --debug command line flag enables printing of the debug output
sent by SyncEvolution over D-Bus. To capture it in a timely manner,
the running method calls must be done asynchronously with a running
main loop.
This also solves the timeout problem more nicely and is closer to how
a real application should be written.
Use contact IDs in ReadContacts(). Finding the contacts in the view is
done with an (almost) brute force search: a linear search looks at all
contacts and compares against the requested ID. The only enhancement is
that searching starts at the index of the previous contact, which makes
it a bit more efficient when reading consecutively.
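A sketch of that lookup (simplified to a plain vector of IDs; the real
code works on view entries):
  #include <string>
  #include <vector>
  // Linear search which starts at the position of the previous hit, so
  // consecutive reads advance by roughly one step per call.
  int findContact(const std::vector<std::string> &ids,
                  const std::string &id,
                  size_t &start)
  {
      for (size_t i = 0; i < ids.size(); i++) {
          size_t candidate = (start + i) % ids.size();
          if (ids[candidate] == id) {
              start = candidate;
              return static_cast<int>(candidate);
          }
      }
      return -1; // unknown ID
  }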
As a first step, notifications and contact data include the new string
ID. It is a direct copy of the FolksIndividual ID. The next step is
to adapt the ReadContacts() implementation.
The change primarily affects notification buffering in the
Manager. Not only does it have to remember ranges, but also the
corresponding IDs.
The Python test now stores either an ID or a dictionary at each view
entry. This required changing all comparisons against None, the
previous value for "no data". The assertions in ContactsView now use
the unittest assertions because they produce nicer error output, and
these errors get logged via Python (to make them show up in the D-Bus
log together with the other logging of the notification callbacks).
As before, the ContactsView class is picked up as a test class with no
test members. Request that unittest skips it (will still show up in
testpim.py output, though).
This simplifies writing clients which want to read all new or modified
contacts. Previously, with just a range being used in ReadContacts(),
the client had to be prepared to re-issue reads when changes to the
range happened on the server before the request was processed.
The ViewAgent notifications all take IDs. For added contacts the need
is obvious. For modified contacts it comes from the situations where
one contact replaces another. This was already possible before, so the
comment about "temporarily appear at two different positions" merely
documents existing behavior. Removal notifications include the ID
primarily for consistency with the other notifications. It was also
already useful for debugging.
The actual implementation is the same as for vector, with D-Bus array
on the D-Bus side. Now the common code is in a template base class and
referenced by the traits for vector, list and deque.
No attempt is made to automatically pick that trait for other classes
which also could be treated like an ordered collection of items.
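Conceptually the sharing looks like this (names illustrative, not the
actual GDBus C++ traits):
  #include <deque>
  #include <list>
  #include <vector>
  template <class T> struct Traits; // primary traits, one per C++ type
  // shared implementation: marshal any ordered collection as a D-Bus array
  template <class Seq> struct SequenceTraits {
      // append()/get() would iterate over Seq and reuse
      // Traits<typename Seq::value_type> for the elements
  };
  template <class V> struct Traits<std::vector<V> > : SequenceTraits<std::vector<V> > {};
  template <class V> struct Traits<std::list<V> > : SequenceTraits<std::list<V> > {};
  template <class V> struct Traits<std::deque<V> > : SequenceTraits<std::deque<V> > {};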
The filtered view did not check whether a parent's removed contact was
really part of the view before sending a removal signal for it. Was
not (and still isn't) found by unit tests because the expected signals
were sent and the tests don't wait for the superfluous additional
removal signals.
Some peers have problems with meta data (CtCap, old Nokia phones) and
the sync mode extensions required for advertising the restart
capability (Oracle Beehive).
Because the problem occurs when SyncEvolution contacts the peers
before it gets the device information from the peer, dynamic rules
based on the peer identifiers cannot be used. Instead the local config
must already disable these extra features in advance.
The "SyncMLVersion" property gets extended for this. Instead of just
"SyncMLVersion = 1.0" (as before) it now becomes possible to say
"SyncMLVersion = 1.0, noctcap, norestart".
"noctcap" disables sending CtCap. "norestart" disables the sync mode
extensions and thus doing multiple sync cycles in the same session
(used between SyncEvolution instances in some cases to get client and
server into sync in one session).
Both keywords are case-insensitive. There's no error checking for
typos, so beware!
The "SyncMLVersion" property was chosen because it was already in use
for configuring SyncML compatibility aspects and adding a new property
would have been harder.
--create-database was broken in combination with the final code in EDS
3.6 because it passed NULL for the UID to e_source_new_with_uid(),
which is considered an error by the implementation of that
method. Must use e_source_new() if we don't have a UID.
gee_collection_iterator_get() always transfers ownership. The caller
must g_free() (for strings) or unref (for GObjects) the returned value.
The revised GeeCollCXX does this automatically, with GObject being the default
(relies on the GObject intrusive pointer template classes) and "gchar *" a
special case in the template specialization of a utility traits class.
The usage is almost as before. However, because of the reliance on
SE_GOBJECT_TYPE, that macro must have been used for all types that are
to be iterated over. Requires moving some definitions into header files
where they can be shared.
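The ownership handling can be pictured like this (an illustrative
sketch, not the real GeeCollCXX code):
  #include <glib-object.h>
  // default: values from gee_collection_iterator_get() are GObjects
  template <class T> struct OwnershipTraits {
      static void release(T *value) { if (value) g_object_unref(value); }
  };
  // special case: "gchar *" values must be g_free()d instead
  template <> struct OwnershipTraits<gchar> {
      static void release(gchar *value) { g_free(value); }
  };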
The main motivation is that it allows other template classes to define a smart
tracking pointer for an arbitrary GObject-derived type. Adding a second flavor
of the class where the constructor with a plain pointer takes over ("steals")
the reference becomes easy.
The caveat is that the template is still unusable without the SE_GOBJECT_TYPE
macro for the type, because without the macro there is no suitable
intrusive_ptr_add_ref/release.
Using a template instead of expanding source code might also be slightly
more efficient regarding compile time. It allows setting breakpoints.
It would be nice to have a default implementation of that which the C++
compiler picks if and only if the pointer is derived from GObject.
Unfortunately that relationship is opaque to the compiler (struct starts
with GObject, but that does not enable any special relationship at the
semantic level). Perhaps play tricks based on the common conventions for naming
that first instance ("parent", "parent_instance", "parent_object")?
The change is almost transparent for users of SE_GOBJECT_TYPE,
except for one thing: the defined CXX name is now a typedef instead
of a class. This prevents simple forward declarations, thus folks.h
had to include individual-traits.h.
The callback was allocated with new for the duration of the operation
and must be deleted. Was done via try/catch in some cases, but not all.
The better solution is to use RAII and store the instance in an auto_ptr
as soon as that is possible again.
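A minimal sketch of the RAII pattern (the Callback type is a
stand-in):
  #include <memory>
  struct Callback {
      void operator()() { /* invoke the stored completion handler */ }
  };
  static void operationDone(void *data)
  {
      // take ownership immediately; deleted on every exit path
      std::auto_ptr<Callback> cb(static_cast<Callback *>(data));
      (*cb)(); // may throw; the auto_ptr still deletes the callback
  }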
The comparison fails in nightly testing and causes unnecessary
property changes, which in turn causes the complete test to fail.
See https://bugzilla.gnome.org/show_bug.cgi?id=652659
Disable saving notes to get the test to pass.
Fetched the code and its history from the 1.3.1 archives at:
http://people.debian.org/~ovek/maemo/ and
http://people.debian.org/~ovek/harmattan/
Merged almost everything, except for Maemo/Harmattan specific build files:
autogen-maemo.sh builddeb buildsrc debian
The following changes were also removed, because they are either local
workarounds or merge artifacts which probably also don't belong into
the Maemo/Harmattan branch:
diff --git a/configure.ac b/configure.ac
index cb66617..2c4403c 100644
--- a/configure.ac
+++ b/configure.ac
@@ -44,7 +44,7 @@ if test "$enable_release_mode" = "yes"; then
AC_DEFINE(SYNCEVOLUTION_STABLE_RELEASE, 1, [binary is meant for end-users])
fi
-AM_INIT_AUTOMAKE([1.11.1 tar-ustar silent-rules subdir-objects -Wno-portability])
+AM_INIT_AUTOMAKE([subdir-objects -Wno-portability])
AM_PROG_CC_C_O
diff --git a/src/backends/webdav/CalDAVSource.cpp b/src/backends/webdav/CalDAVSource.cpp
index decd170..7d338ac 100644
--- a/src/backends/webdav/CalDAVSource.cpp
+++ b/src/backends/webdav/CalDAVSource.cpp
@@ -1282,6 +1282,7 @@ void CalDAVSource::Event::fixIncomingCalendar(icalcomponent *calendar)
// time.
bool ridInUTC = false;
const icaltimezone *zone = NULL;
+ icalcomponent *parent = NULL;
for (icalcomponent *comp = icalcomponent_get_first_component(calendar, ICAL_VEVENT_COMPONENT);
comp;
@@ -1295,6 +1296,7 @@ void CalDAVSource::Event::fixIncomingCalendar(icalcomponent *calendar)
// is parent event? -> remember time zone unless it is UTC
static const struct icaltimetype null = { 0 };
if (!memcmp(&rid, &null, sizeof(null))) {
+ parent = comp;
struct icaltimetype dtstart = icalcomponent_get_dtstart(comp);
if (!icaltime_is_utc(dtstart)) {
zone = icaltime_get_timezone(dtstart);
diff --git a/src/backends/webdav/CalDAVSource.h b/src/backends/webdav/CalDAVSource.h
index 517ac2f..fa7c2ca 100644
--- a/src/backends/webdav/CalDAVSource.h
+++ b/src/backends/webdav/CalDAVSource.h
@@ -45,6 +45,10 @@ class CalDAVSource : public WebDAVSource,
virtual void removeMergedItem(const std::string &luid);
virtual void flushItem(const string &uid);
virtual std::string getSubDescription(const string &uid, const string &subid);
+ virtual void updateSynthesisInfo(SynthesisInfo &info,
+ XMLConfigFragments &fragments) {
+ info.m_backendRule = "HAVE-SYNCEVOLUTION-EXDATE-DETACHED";
+ }
// implementation of SyncSourceLogging callback
virtual std::string getDescription(const string &luid);
Making SySync_ConsolePrintf a real instance inside SyncEvolution leads
to link errors in other configurations. It really has to be extern. Added
a comment to the master branch to make that more obvious:
-extern "C" { // without curly braces, g++ 4.2 thinks the variable is extern
- int (*SySync_ConsolePrintf)(FILE *stream, const char *format, ...);
-}
+// This is just the declaration. The actual function pointer instance
+// is inside libsynthesis, which, for historic purposes, doesn't define
+// it in its header files (yet).
+extern "C" int (*SySync_ConsolePrintf)(FILE *stream, const char *format, ...);
Syncing configures a phone, caches its data and can remove the peer
again.
Searching shows and modifies the current state (sorting and active
address books) and can run a search view.
folks and/or SyncEvolution leak some small amounts of memory,
as reported by valgrind during testing. Ignore these analyzed
problems for now, fix them later.
g++ 4.5 on Debian Wheezy fails to recognize that the boolean variable
gets initialized before use. Keep that compiler happy by explicitly
initializing the variables it complains about.
Renaming "listing" to "acessing" broke testPrintDatabases (because
it needs to suppress that error for KDE, which is not running
during D-Bus testing) and introduced a typo.
These operations manipulate the system address book. To avoid
unexpected data loss when a developer runs the script in his
normal environment, check that testpim.py gets run like this:
XDG_CONFIG_HOME=`pwd`/temp-testpim/config \
XDG_DATA_HOME=`pwd`/temp-testpim/local/cache \
PATH=<SyncEvolution install path>/bin:<SyncEvolution install path>/libexec:$PATH
<path to SyncEvolution source>/test/dbus-session.sh \
<path to SyncEvolution source>/src/dbus/server/pim/testpim.py
This will use temp-testpim in addition to temp-testdbus in the current
directory for temporary files.
Adding depends on conversion from D-Bus into a GHashSet of persona
details.
Modifying uses the same GHashSet as intermediate representation, but
then has to look up the Persona and modify its properties. Currently
each property write triggers a write to EDS, which is inefficient and
worse, deadlocks EDS when many properties are set (see
https://bugzilla.gnome.org/show_bug.cgi?id=652659). As a partial
solution, SyncEvolution checks that a property is really different
before setting it. This is good enough to get a few properties changed
without running into the deadlock. But ultimately this needs to be
improved in folks and fixed in EDS.
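The guard itself is simple (a sketch; setProperty() stands for the
folks property setter, only g_strcmp0() is real glib API):
  #include <glib.h>
  void updateIfChanged(const gchar *current, const gchar *wanted)
  {
      // g_strcmp0() treats NULL safely, so unset properties compare fine
      if (g_strcmp0(current, wanted) != 0) {
          // only a real difference triggers a write and thus EDS I/O:
          // setProperty(persona, wanted);
      }
  }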
The nickname cannot be set or modified. Depends on a folks change, see
https://bugzilla.gnome.org/show_bug.cgi?id=686695
Removing a contact shares the "lookup persona" code with modifying.
All three operations can only be done once the EDS backend is loaded
and the system address book is ready. The actual execution therefore
has to be delayed, which requires making these operations
asynchronous in IndividualAggregator.
Grant access to stored Result callbacks, needed because PIM manager
needs to report failures and can do that more efficiently when keeping
a copy of just the error callback, instead of adding yet another
boost::bind on top of Result.
Also add factory methods which wrap the D-Bus result object in a
Result instance of the right type. Also needed for PIM manager.
Only tested as part of fixed database content. The only difference
(and potential bug) in live views is that the refined filter must be
remembered by the FilterView (which it does, see the source code).
Instead of always stopping, first check that there are no views in use
by a client. This can be determined based on the use count of the full
view.
For testing this, an internal, undocumented Manager.IsRunning() D-Bus
method gets added.
While writing the corresponding test, it became obvious that setting the sort
order had the undesired side effect of starting up folks. Aggregation still
didn't run (that gets started by ViewResource), but even just having folks
running is too much for a syncevo-dbus-server that is just used for syncing.
The "sort" property in the ini-style ~/.config/config/pim-manager.ini
file stores the string persistently. XDG_CONFIG_ROOT and HOME are
checked to find the file.
Changing the sort order only has an effect if the new order is valid.
Most of the tests are executed with "sort =" (empty), which picks the
simpler builtin sorting from FullView, as before. Other tests check
that the initial content of the .ini file is used and that it gets
modified by SetSortOrder().
Lookup and search are different: the former is based on a valid
number, the latter on user input.
A lookup can compare normalized numbers including the country code, to
ensure that the lookup is exact and does not mismatch numbers from
different countries. Heuristics like suffix matching do not do this
correctly in all cases.
A search is based on user input, which might contain just some digits
in the middle of the phone number. The search ignores formatting in
both input and address book.
In both cases, alpha characters are treated as aliases for their
corresponding digit on a keypad.
The implementation uses Google's libphonenumber to implement phone
number normalization and boost::locale to get the ISO-3166 country
code of the current locale.
Here are some examples of how the implementation works (from the tests),
based on de_DE.UTF-8 locale = DE = +49:
'any-contains 1234', 'any-contains 23', 'any-contains 12/34' find 1234.
'any-contains 5678' finds 56/78.
'any-contains 366227', 'any-contains +1-800-foobar' find +1-800-FOOBAR.
'phone +1800366227' matches +1-800-FOOBAR.
'phone +49897888', 'phone 0897888' both match 089/7888-99 and not
+189788899.
'phone +49897888000' does not match 089/7888-99.
The testFilterExisting covers the semantic of "any-contains",
in particular case insensitivity when applied to non-ASCII
characters and matching different properties of the individuals.
The testFilterLive covers adding, updating and removing contacts while
a view is active.
Covers the "any-contains" search, using boost::locale to get a
case-independent representation when asked for case-insensitive
searching. boost::icontains doesn't work as well because it can only
ignore the case of ASCII characters when dealing with UTF-8.
Telephone search still needs to be implemented.
At least in one case, folks emitted a removed + changed signal
when a contact was modified in EDS. Therefore implement the
merging of these two different change signals into one at the
D-Bus level.
Using a list of lists allows composing more complex searches.
In SyncEvolution, some basic searches are going to be implemented:
- caller ID lookup ("phone")
- text search ("any-contains")
Avoid printing the message in runUntil() every 0.1 second.
When running syncevo-dbus-server in a debugger, print to stdout so
that the developer can follow the progress of the Python test.
This is based on the API description of the different sort modes. It
does not make assumptions about how the view changes in reaction to
reordering, except that the final result must have the right indices
marked as changed.
The test data uses accents, which must be ignored when using the
de_DE.UTF-8 locale that is set in the environment before starting the
D-Bus server.
Clarify language, change fallback for missing full name to
concatenation of name components starting with first name and ending
with suffix. This is how Google Contacts sorts.
Change the --enable-dbus-service-pim parameter into one which takes a
parameter that specifies how locale-aware sorting and searching is to
be implemented. The default implementation uses boost::locale. It is
expected to get replaced or augmented by OEMs which want to implement
more complex sorting or searching (like ignoring Tussenvoegsel in the
Netherlands).
The LocaleFactory instance takes the current locale from the
environment. Making it and its users aware of locale changes at
runtime might be needed at some point but is not part of the API at
the moment.
The Manager class uses the factory to handle sorting and searching
requests coming in via D-Bus. Right now, that is not functional yet
because the boost::locale implementation is just a stub. It only
compiles and links.
FullView::setSortOrder is now functional.
Clean up view code a bit:
- All views delay populating their content until the caller asks for
it. For the FullView this will only happen once, so the caller must be
able to handle an already populated view, which was missing
in ViewResource. Still need a test for this.
- Use init(<smart pointer>) consistently.
The set of active EDS databases is set after backends are loaded and
before the aggregator is asked to start. This ensures that we never
load data from inactive databases.
The patch depends on a functional implementation of the
folks_backend_set_persona_stores() method. The
folks_individual_aggregator_new_with_backend_store() is used although
it is not necessary at the moment, because there is only a single
instance of the FolksBackendStore class.
Tests cover running with no active address book and merging data from
two address books. The testMerge test case has several properties.
The one which causes folks to link it is X-JABBER and EMAIL (with
the patch for https://bugzilla.gnome.org/show_bug.cgi?id=685401).
Allow checks to change their result without changes inside the event
loop. Previously, the checks were only invoked if there were
events inside the event loop.
Necessary for checking events for a certain period of time.
Imports one vCard into EDS, then checks that reading it via folks and
PIM Manager yields the expected result. Uses pretty much all fields
supported by EDS. Not all of these need to be represented in the PIM
Manager API, see spec.
unittest.assertEqual() works for standard Python to dbus type
comparisons, but the output for mismatches is very unreadable. The
custom version which is now used everywhere reduces the dbus types to
their Python counterparts, which makes the error output much more
useful (diff with Python 2.7).
Also added an optional sortLists parameter which, if true, normalizes
the lists in the values. This is necessary for D-Bus APIs which return
lists without guaranteeing a specific order of the elements.
Previously, strings had to be passed into GDBus as std::string.
Now "const char *" also works. A NULL pointer is allowed and
is turned into an empty string.
Generalization of the code which defined GMainLoopCXX: assumes that
there are _ref/_unref() methods and defines an intrusive boost pointer
for the type.
Add two tests, one with just a single contact which gets added,
modified and removed and one which works with three different
contacts. The second test covers code paths like merging change events
before sending via D-Bus and re-ordering contacts when they get
renamed.
Changing the sort order and searching still need to be implemented and
tested.
Currently the tests only pass when none of the other EDS address books
contain data, because the aggregated view is not limited to the test
address books.
Avoid race condition after starting gdb by simply checking for the
presence of syncevo-dbus-server on the session bus. Previously the
output was checked, which failed due to race conditions and/or
unexpected output when setting breakpoints (not investigated further).
Once the test failed, print the failure or error immediately instead
of in the summary after gdb quit. This is information which might be
relevant for debugging the running syncevo-dbus-server.
After gdb quit, don't attempt to read the syncevo-dbus-server output
file (doesn't exist) and don't dump the D-Bus output to the screen
(usually too long to read without less, which is not possible when
doing interactive debugging). Instead point to the log file.
When TEST_DBUS_GDB=1 is set, syncevo-dbus-server gets run under gdb.
That gdb is passed the modified environment with a (potentially)
replaced HOME. In that case gdb needs an explicit -x parameter,
otherwise it won't find the user's .gdbinit.
The Timeout class now can also invoke the callback when the process is
idle. Pass a negative timeout to trigger that behavior. Most of the
code is similar to waiting for a certain amount of time, which is why
adding that functionality here makes sense despite the slightly
misleading "Timeout" class name.
It was observed that raising an exception in the signal handler did
not propagate to the calling Python code when it was currently in
loop.get_context().iteration().
This commit adds debug logging for timeouts and some utility methods
which avoid the problem by adding logging to the iteration()
call. Python then re-raises the exception in the D-Bus logging code.
This looks like a deficiency (bug?!) of the Python bindings for
iteration() which is merely worked around.
However, the utility methods may also be useful by their own right,
for example to avoid the awkward loop.run() + loop.quit() pairs.
Adding, updating (via GObject "notify") and removing contacts works.
FullView needs access to the FolksIndividualAggregator for
this.
The order of instantiating was changed so that Manager creates the
FullView as part of starting, instead of having an idle FullView as
envisioned earlier. Seems easier that way.
The CompareFormattedName class (previously only used for unit testing)
is used for sorting in ascending "last, first" by default. It is
convenient for testing, because the sort criteria are plain strings
which can be dumped in a debugger (in contrast to boost::locale
transformed strings). Later this default will have to be replaced with
a true locale aware sorting.
Local changes are delayed and merged as much as reasonably possible in
the Manager before sending them out via D-Bus, to avoid excessive
traffic.
This is done until the underlying view is stable
("quiesent"). Detecting that when reading from libfolks is difficult
(see GNOME feature request #684766). The current approach is to flush
pending changes when the process goes idle. In practice this prevents
merging multiple "add" changes, because the process goes idle between
imports from EDS - probably need further work, either in libfolks or
SyncEvolution (use a timeout after all, despite the extra latency?).
Interaction between delaying change signals and reading contacts must
be handled carefully. The client must be up-to-date when it receives
the results of the read request, and "modified" signals must be
enabled again for the data that was sent.
Normally invalidating the same contacts multiple times is prevented:
because folks sends "modified" signals pretty often (GNOME bug), the
view remembers already modified FolksIndividual instances until the
process is idle and all major changes to the instance are (hopefully)
done.
There is a slight race condition (client reads contact that was
already modified, SyncEvolution processes "modified" signal later) but
because the result is only that the client is asked to re-read the
contact, that is not a problem.
DBusObject_t is just an alias for std::string, so functionally there's
no change. Adding getObject() which returns a const reference to it
allows the caller to use the value in D-Bus traits where the
difference between a plain string and an object path matters.
The tests were originally added to test compilation of code written
against the proposed APIs. After implementing them it turned out that
creating FolksIndividual instances and setting properties is not
supported by libfolks
(https://bugzilla.gnome.org/show_bug.cgi?id=684557), and thus testing
like this is impossible.
Keeping the tests anyway, without running them, because perhaps such
testing becomes possible at some point (mock FolksIndividual?!) and
the tests still cover compilation against the APIs.
The test covers aborting a running sync as well as a pending one; the
two go through different code paths.
The asserts only work in unittest from Python >= 2.7. For testing the
PIM Manager that is okay, because it needs fairly recent software
elsewhere, too. Only test-dbus.py itself must still work with older
Python.
org._01.pim.contacts.Manager.Aborted may be useful if the caller of
SyncPeer() wants to distinguish between sync failures (BadStatus) and
intentionally aborted syncs.
If TEST_DBUS_PBAP_PHONE is set to the Bluetooth MAC address (like
A0:4E:04:1E:AD:30) of a phone, PBAP syncing with that phone is tested.
The phone must be paired, connected and support SyncML in addition to
PBAP. The test hard-codes Nokia SyncML settings to keep it simple and
because Nokia phones are most likely to be used.
X- extensions added by phones as part of the local->SyncML->PBAP
roundtrip are ignored. This can hide issues. For example, the Nokia
N97 Mini incorrectly sends X-CLASS:private in vCard 3.0 mode (should
use CLASS), which is faithfully copied into the local side as-is,
although it might be better to recognize and translate it. Not a big
deal for X-CLASS, but might be for other properties.
Allow testing of peer syncing with the SyncEvolution file backend as
target. "database" must be set to a directory name which holds
individual vCard 3.0 files.
Start testing the new org._01.pim.contacts D-Bus API, as implemented
by SyncEvolution with EDS as storage.
Currently EDS has a race condition where an ESource becomes
unremovable when reusing the UID. Therefore the tests temporarily
avoid doing that.
The sleeps are necessary to prevent problems caused by creating and
removing an ESource before the file monitoring code in
evolution-source-registry can process the file events.
Turn the 'test' directory into a Python package (by adding
__init__.py) and add 'testdbus' as a symlink (because 'import
test-dbus' is invalid due to the hyphen).
The entire test-dbus.py could have been renamed, but that would entail
changes elsewhere (nightly testing) and make following the history in
the repository harder.
Looking up the method via eval() no longer worked when test-dbus.py
was used as submodule. Better look up the function in the current
instance of the class via the method name, which itself can be
extracted from the current test id.
The same process now owns two different names on the same connection.
This is transparent for clients; it would be possible to implement
the second service in a different process.
Matches the code which creates the configs. Has to recreate the
properties based on the information found in the SyncEvolution
configuration.
Runs outside of a Session because it works with an atomic snapshot of
the SyncEvolution configs (no other code can modify it while this code
reads it).
The peer properties are stored as normal SyncEvolution
configurations. Every peer gets its own context, which contains both
the local source definition for the peer and sync plus target configs.
The special name space prefix "pim-manager-" is used to identify them and
avoid potential conflicts with normal SyncEvolution configurations.
Local EDS databases are created with a fixed UID and name that also
have that prefix. One could go one step further than it is currently
done and set these databases to "disabled" in the ESourceRegistry,
which would hide them in Evolution.
Creating the databases via the synchronous SyncEvolution SyncSource
API is cheating a bit - strictly speaking, syncevo-dbus-server should
not call methods which can block, to keep it responsive. The long term
goal for syncevo-dbus-server is to move all SyncSource instantiation
and usage into separate processes, to avoid tainting the D-Bus service
with the mess that loading arbitrary third-party libraries can cause.
Config changes and syncing must be serialized, to avoid race
conditions. The Server class tracks Sessions and only activates one at
a time. This patch hooks into that concept and executes the different
operations once the requested session is the active one.
The PIM Manager doesn't have to keep track of the pending D-Bus calls.
That is done implicitly as part of binding an operation to the Session
instance via the "active" signal. What it does have to track are pending
sessions, because it is the owner of those. The shared_ptr in
m_pending are the only strong reference to these sessions.
Removing them (as done in stopSync()) automatically destroys the
session and the pending D-Bus call, which (via the default behavior of
DBusResult in GDBus GIO) will send a generic "method failed" error to
the caller.
If the caller needs to distinguish between "operation cancelled" and
"fatal error", then this indirect tracking of the DBusResult must be
changed so that removing a session can call the failed() method with a
suitable error (to be defined in the API).
The Server and Session classes get extended so that it becomes
possible to create sessions which do not belong to a D-Bus client and
to query the result of a sync directly.
It is possible to sync source A in context @X with source B in the
same context @X. The PIM manager is doing that with one context per
peer and databases that are guaranteed to not conflict with anything
else.
Therefore allow such configs. A better check is left for later, if it
really should be needed at all.
The key trick is the mapping between FolksIndividual and D-Bus. The
patch verifies that this can be done via D-Bus traits, without
fleshing out more than the very basics (sending full name).
Adds a subset of the D-Bus API to syncevo-dbus-server. Views are
resources that are attached to the client and thus get destroyed
automatically when the client disconnects from D-Bus.
In addition, failures for calls to the client's ViewAgent delete the
view. In this case, the client stays connected, because it might only
have destroyed one agent with others still active.
The choice of std::vector in getContacts() comes from the goal of
using this function more or less directly in a D-Bus implementation.
It works efficiently because the size can be computed in advance.
A more general API, like for example one which stores via some append
iterator, would also have been possible but probably isn't worth
the effort.
The classes on top of libfolks are designed such that they can
be tested without involving libfolks. Right now the goal is only
to get the API right; most of the actual functionality is missing.
The terminology (added/modified/removed) is aligned with libebook
terminology and the PIM Manager API description.
Initial step towards using SyncEvolution, PBAP and libfolks in the
context of IVI (in-vehicle-infotainment): D-Bus API definition for the
org._01.pim.contact API, --enable-dbus-service-pim, find libs, compile
into syncevo-dbus-server and client-test.
The only functional code at this time is the unit testing of libfolks,
GValueCXX and libgee.
Creating a database is only possible with a chosen name. The UID is
chosen automatically by the storage. A new property would be needed
to also specify the UID.
There are no command line tests for the new functionality because
support for it in backends is limited.
Depends on the EDS 3.6 ESourceRegistry API. Needs the very latest EDS
with the following commit. There's no configure check yet because EDS
3.6 is not released yet.
commit 6df76009318eac9dbe3dd49165394d389102764e
Author: Matthew Barnes <mbarnes@redhat.com>
Date: Tue Sep 11 22:56:08 2012 -0400
Bug 683785 - Add e_source_new_with_uid()
Variation of e_source_new() which allows a predetermined UID to be
specified for a scratch source. This changes the "uid" property from
read-only to read/write + construct-only, and eliminates the need for
EServerSideSource to override the property.
The API supports creating a database with a specific name and UID. To
avoid ambiguities, deleting is done via the UID.
Looking up the UID for a "database" property value is meant to be
supported by opening the SyncSource and then getting the current
database that was opened.
This allows implementing a delete operation that uses the same
database lookup method as syncing.
All of the new methods have stubs which reject the operation or tell
the caller that no information is available. This
was done to avoid having to adapt all derived SyncSources. Support for
the new functionality is optional.
Looks up a value in a map and returns the found value or a default. This
mimics Python's dict.get().
Because it is not possible in C++ to return None as default, the
returned value is wrapped in an InitState to tell the caller whether
the value was found.
Depends on the unified InitState template.
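A simplified sketch of the helper, returning a pair instead of the
real InitState wrapper:
  #include <map>
  #include <utility>
  template <class K, class V>
  std::pair<V, bool> getWithDefault(const std::map<K, V> &m,
                                    const K &key,
                                    const V &def = V())
  {
      typename std::map<K, V>::const_iterator it = m.find(key);
      if (it != m.end()) {
          return std::make_pair(it->second, true); // found
      }
      return std::make_pair(def, false); // default, marked "not set"
  }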
The new InitState template works for both classes and generic types.
The difference is in the kind of base class that it inherits from: the
one for classes "is" an instance of the wrapped type, the other "has"
such an instance. boost::is_class is used to pick the right utility
class, something that had to be done manually previously.
While touching the classes, the explicit initialization with "const
char *" was replace with a more general template constructor.
The main advantage is that other template code, like the D-Bus traits,
no longer needs to distinguish between InitState and
InitStateClass. The InitState implementation did not become
smaller. At least it avoids code duplication (m_wasSet handled in one
place).
Using a DBusClient* to make an actual call does not change the object.
The call is independent of the instance used to make it (see also the
previous commit about handling method results for asynchronous calls
after destroying the call instance). Therefore it makes sense to mark
the methods as const.
Now DBusResult itself tracks whether a reply was sent. If not, it
reports a generic "processing the method call failed" error. If that
error is not good enough for the method implementation, then it must
call failed() itself, as before.
Remove global variables and allow additional names to be owned via
DBusConnectionPtr::ownName().
g_bus_own_name_on_connection() is given shared access to a ref-counted
OwnNameAsyncData (instead of full ownership) because that way it is
guaranteed that the instance is still around while waiting for name
acquisition in undelay().
Primarily this is meant to be used for iterating over collections
in libfolks, which come straight from libgee.
The header file assumes that the corresponding gee header files are
available. A user of it must link against libgee.
The wrapper enhances type safety (can only assign values of
the right type, enforced at compile time) and automates resource
tracking (g_value_unset() called by the destructor). Generic,
boxed values need special treatment because their GType
is not a constant.
The header file assumes that the corresponding glib header files
are available. A user of it must link against libgobject.
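An illustrative sketch of the idea for one fixed type (the real
GValueCXX is generic):
  #include <glib-object.h>
  class GValueIntCXX {
      GValue m_value; // zero-initialized below, as g_value_init() requires
  public:
      GValueIntCXX() : m_value() { g_value_init(&m_value, G_TYPE_INT); }
      ~GValueIntCXX() { g_value_unset(&m_value); } // automatic cleanup
      void set(gint v) { g_value_set_int(&m_value, v); } // only ints compile
      gint get() const { return g_value_get_int(&m_value); }
  private:
      GValueIntCXX(const GValueIntCXX &); // non-copyable: owns the GValue
      GValueIntCXX &operator=(const GValueIntCXX &);
  };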
The synchronous version used to deadlock frequently, the asynchronous
one doesn't, so use that instead. The problem was also worked around
later in EDS (commit below), but let's stick to the asynchronous
version anyway to be closer to Evolution.
commit 6c4c1c6a1f40a6873be954a0b9d770e31726bd50
Author: Matthew Barnes <mbarnes@redhat.com>
Date: Fri Sep 7 07:40:09 2012 -0400
ESourceRegistry: Work around GType deadlock.
Work around http://bugzilla.gnome.org/show_bug.cgi?id=683519
until GObject's type initialization deadlock issue is fixed.
Apparently only the synchronous instantiation is affected.
Instead of manually written callback functions, use the C++ helper
code to get C++ code invoked from inside glib. Reduces the code
size and ensures that no exceptions escape into glib C code.
gcc failed because the type of the function is not a pointer, and it
does not allow that as template parameter (in contrast to
clang). Explicitly turn the function name into a pointer.
Some asynchronous operations are guaranteed to not fail, in which case
the finish method will have two parameters (object + handle), or just
one (handle).
This patch adds support for that by checking the type of the first
parameter to distinguish between these additional cases. It switches
to function_traits to determine function attributes. Using
boost::function also works, but was a bit hacky.
e_source_registry_new_finish() does not take a GObject pointer
as first parameter. Added another flavor of GAsyncReadyCXX
for _finish() calls with only two instead of three parameters.
A common pattern when the result is needed immediately is to write a
callback function which just stores the result and then iterate the
main loop until the asynchronous method completes. The new
SYNCEVO_GLIB_CALL_SYNC() macro provides such storage methods in the
GAsyncReadyDoneCXX helper classes and waits until the method is done.
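The pattern which the macro automates looks roughly like this, shown
here with g_bus_get() as an arbitrary asynchronous call (a sketch, not
the actual macro implementation):
  #include <gio/gio.h>
  struct Pending {
      GAsyncResult *result;
      Pending() : result(NULL) {}
  };
  static void storeResult(GObject *, GAsyncResult *res, gpointer data)
  {
      Pending *p = static_cast<Pending *>(data);
      p->result = static_cast<GAsyncResult *>(g_object_ref(res)); // keep alive
  }
  GDBusConnection *getBusSync(GError **error)
  {
      Pending pending;
      g_bus_get(G_BUS_TYPE_SESSION, NULL, storeResult, &pending);
      while (!pending.result) {
          g_main_context_iteration(NULL, TRUE); // wait for completion
      }
      GDBusConnection *connection = g_bus_get_finish(pending.result, error);
      g_object_unref(pending.result);
      return connection;
  }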
The connectSignal() utility code can be used instead of manually
written static callback functions. The direct C callbacks are provided
by the header file for up to nine parameters and invoke a
boost::function (or any class which can be turned into a
boost::function, like the result of boost::bind).
The provided C callbacks ensure that exceptions do not escape into the
calling glib. Any exception making it that far aborts the
program. This is often missing from manually written callbacks.
The trick is to allocate a boost::function which is owned by glib and
gets destroyed with another callback when no longer needed.
Zimbra uses \N, which is valid. Added a very simplistic conversion
to \n, which is good enough for our known test cases where \\N will
not occur.
Removing VALUE=DATE from BDAY was necessary for Zimbra (?).
Both Zimbra and Google Contacts don't preserve the X-EVOLUTION-UI-SLOT
parameter.
As with SyncML, FN is not preserved by Google when using CardDAV.
In contrast to SyncML, Google's CardDAV supports additional features,
like setting a description for URL, TEL, ADR, etc. via vCard grouping
and X-ABLabel (an Apple Address Book app extension). For now ignore
that while testing. Long term support for this in SyncEvolution would
be good.
With Google Contact + CardDAV the auto-discovery failed after
finding the default address, without reporting that result.
The reason was that the search continued at the root of the server
where PROPFIND triggers an error when using the Google server. Because
of a missing check for "have result", that error was treated as
something which needs to be reported to the user.
Fixed by unifying the various checks in a single class.
The randomness of readdir() can cause heisenbugs. It also makes
operations like --import of files in a directory unpredictable, unless
the caller remembers to sort himself. Better make sorting
mandatory. There shouldn't be any need for performance critical
directory reading.
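A minimal sketch of deterministic directory reading, assuming POSIX
readdir():
  #include <dirent.h>
  #include <algorithm>
  #include <string>
  #include <vector>
  std::vector<std::string> listSorted(const std::string &path)
  {
      std::vector<std::string> entries;
      if (DIR *dir = opendir(path.c_str())) {
          while (struct dirent *entry = readdir(dir)) {
              std::string name = entry->d_name;
              if (name != "." && name != "..") {
                  entries.push_back(name);
              }
          }
          closedir(dir);
      }
      // mandatory sorting removes the readdir() randomness
      std::sort(entries.begin(), entries.end());
      return entries;
  }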
Fallback to old obexd API was broken because creating the
DBusRemoteObject does not verify whether the service really exists and
thus always succeeds. Fixed by checking for existence as part of the
actual CreateSession method call.
The new code needs glib. Include header file and declare dependency in
configure.
The backend must throw errors when something fatal happens. Logging
the error is not enough, because that can't be checked by the
caller. Throwing errors is best done via the utility methods in
SyncSource, because those include the source name in the exception.
Memory handling was broken: nothing owned the memory in the
StringPiece instances, munmap() was missing. Fixed by making
PbapSyncSource the owner of both.
Unified the parsing of the result. The new code based on pcrecpp is
used for both old and new obexd API.
File name and GError allocated by g_file_open_tmp() were leaked. The
file descriptor and the file were leaked in case of aborting via an
exception. Now these resources are owned by a class which will always
clean up when getting destructed.
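A sketch of such an owner class (simplified; the name template is
illustrative):
  #include <glib.h>
  #include <unistd.h>
  class TmpFile {
      gchar *m_name;
      gint m_fd;
  public:
      explicit TmpFile(GError **error) : m_name(NULL), m_fd(-1) {
          m_fd = g_file_open_tmp("syncevo-XXXXXX", &m_name, error);
      }
      ~TmpFile() { // runs also when an exception aborts the transfer
          if (m_fd >= 0) close(m_fd);
          if (m_name) {
              unlink(m_name);
              g_free(m_name); // g_file_open_tmp() allocated the name
          }
      }
      gint fd() const { return m_fd; }
      const gchar *name() const { return m_name; }
  private:
      TmpFile(const TmpFile &);
      TmpFile &operator=(const TmpFile &);
  };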
A failed transfer was not checked for when using the new API. Probably
failed when trying to use the file (because obexd deletes it), but
better show the error message that we got for the failed transfer.
Remove the obsolete vcardParse().
The backend is not useful for most users, therefore it has to be enabled
during compilation with --enable-pbap. The code for "PBAP disabled"
had to be adapted to a backend API change.
Instead of using explicit g_variant_unref() calls, let a smart pointer
class own the instance. This allows removing code and (more
importantly) fixes leaks when something throws an exception or the
D-Bus traits for boost::variant return prematurely.
The signal name was not stored, making the actual filtering more
relaxed than it should have been. Worse, the path was redundantly
checked for a complete match even when SIGNAL_FILTER_PATH_PREFIX was
set, thus preventing signal delivery in that case.
Working with obexd 0.47's Transfer object depends on path prefix
filtering of signals to be efficient. This patch adds that support in
the GDBus C++ bindings, together with some documentation on how to
use it.
It also allows relaxing the signal matching by interpreting empty
strings as "match any".
Finally it adds support for passing of sender, path, interface and
member (= signal name) information to the user of the D-Bus
binding. This is necessary because a handler with wildcards for
matching cannot know the actual values unless they are passed to the
callback explicitly. This was not possible at all before for signals
(because extracting that information relied on a message pointer,
which wasn't set while decoding signal messages) and limited to just
the sender in method implementations.
Besides functionality, GDBus for GIO and for libdbus now also
differ slightly in their API. In order to pass meta data about a
signal into the demarshalling get() methods, they have to be stored in
ExtractArgs first and passed to get() via that struct, which is an
interface change.
Derived code which extends marshalling/demarshalling needs to be
adapted accordingly. Allowing some ifdefs elsewhere seems simpler than
adapting GDBus for libdbus. Those ifdefs can be removed once libdbus
support is no longer needed.
The patch relaxes access control to DBusObject and DBusRemoteObject
because that simplifies the code which works better with std::string.
Traditionally the sender of a signal was not checked. This is probably
related to unsolved questions like "should watching org.foo.bar follow
owner changes", but not doing any checking is just not right
either. Not fixed in this patch.
and allow this UID to be used for selecting a particular calendar.
Since all listed calendars are in the default storage anyway,
the UID is far more useful to have. (On the N9, knowing the
physical storage does not help the user at all, as access
to it is restricted and needs to go through the API anyway.)
(cherry picked from commit a5c2939c1d)
The PBAP backend must link against pcrecpp explicitly. The header
files are part of the global search path, but the library is not.
Having to specify the library explicitly is right (avoids
adding it to modules which don't need it); perhaps the header
file flags also should have to be added explicitly.
It worked when linked into an executable which also uses pcrecpp, but
can fail to build depending on how strict the toolchain is. Found
by dist checks in the nightly testing.
When using libdbus instead of GIO D-Bus (as done by syncevolution.org
binaries and SyncEvolution on Maemo), local sync may have aborted
after 25 seconds when syncing many items with a D-Bus timeout error:
[ERROR] sending message to child failed: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
Reported by Toke Høiland-Jørgensen for Harmattan. Somehow not encountered
elsewhere.
Root cause is the use of the default D-Bus timeout of 25 seconds
instead of disabling the timeout altogether. Fixed by using
DBUS_TIMEOUT_INFINITE and INT_MAX as fallback.
(cherry picked from commit a15e8be1c2)
First, clarified that a source has to set all revision strings or
none. This was implied, but not spelled out explicitly.
Second, use that to detect when a sync must be done in slow mode.
Third, avoid calculating changes when that isn't needed. This last
point was left as TODO while in release preparations and can
be committed now.
This change is needed for the PBAP backend which does not currently
set revision strings and thus cannot be used in incremental syncing.
This patch introduces support for true one-way syncing ("caching"):
the local datastore is meant to be an exact copy of the data on the
remote side. The assumption is that no modifications are ever made
locally outside of syncing. This is different from one-way sync modes,
which allow local changes and only temporarily disable sending them
to the remote side.
Another goal of the new mode is to avoid data writes as much as
possible.
This new mode only works on the server side of a sync, where the
engine has enough control over the data flow.
Most of the changes are in libsynthesis. SyncEvolution only needs to
enable the new mode, which is done via an extension of the "sync"
property:
- "local-cache-incremental" will do an incremental sync (if possible)
or a slow sync (otherwise). This is usually the right mode to use,
and thus has "local-cache" as alias.
- "local-cache-slow" will always do a slow sync. Useful for
debugging or after (accidentally) making changes on the server side.
An incremental sync will ignore such changes because they are not
meant to happen and thus leave client and server out-of-sync!
Both modes are recorded in the sync report of the local side. The
target side is the client and records the normal "two-way" or "slow"
sync modes.
With the current SyncEvolution contact field list, first, middle and
last name are used to find matches during any kind of slow sync. The
organization field is ignored for matching during the initial slow
sync and used in all following ones. That's okay, the difference won't
matter in practice because the initial slow sync in PBAP caching will
be done with no local data. The tests achieve the same result in both
cases by keeping the organization set in the reduced data set.
It's also okay to include the property in the comparison, because it
might help to distinguish between "John Doe" in different companies.
It might be worthwhile to add more fields as match criteria, for
example the birthday. Currently they are excluded, probably because
they are not trusted to be supported by SyncML peers. In caching mode
the situation is different, because all our data came from the peer.
The downside is that in cases where matching has to be done all the
time because change detection is not supported (PBAP), including the
birthday as criteria will cause unnecessary contact removed/added
events (and thus disk IO) when a contact was originally created
without birthday locally and then a birthday gets added on the phone.
Testing is done as part of the D-Bus testing framework, because usually
this functionality will be used as part of the D-Bus server and writing
tests in Python is easier.
A new test class "TestLocalCache" contains the new tests. They include
tests for removing extra items during a slow sync (testItemRemoval),
adding new client items under various conditions (testItemAdd*) and
updating/removing an item during incremental syncing
(testItemUpdate/Delete*). Doing these changes during a slow sync could
also be tested (not currently covered).
The tests for removing properties (testPropertyRemoval*) cover
removing almost all contact properties during an initial slow sync, a
second slow sync (which is treated differently in libsynthesis, see
merge=always and merge=slowsync), and an incremental sync.
When delaying processes as part of the sync, disable the normal
test timeouts. Useful for attaching to the delayed process
with gdb and then debugging without running into the test
timeouts.
For example in the new TestLocalCache.testItemDelete100, the
percentage value in the ProgressChanged signal becomes larger
than 100 and then reverts to 100 at the end of the sync.
Seems the underlying calculation is faulty or simply inaccurate.
This patch does not fix that. Instead it just clips the final
result to the valid range. It also cleans up ownership of the
actual int32_t progress value.
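The clipping itself is a one-liner (sketch):
  #include <algorithm>
  #include <stdint.h>
  int32_t clipProgress(int32_t raw)
  {
      // clamp faulty values into the valid 0..100 range
      return std::min<int32_t>(std::max<int32_t>(raw, 0), 100);
  }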
Instead of having three of the helper classes as members,
only store pointers to them in Server. The advantage is
that including Server.h becomes simpler for .cpp files
which do not need to use these classes and that they
can be constructed/destructed more explicitly. This is
particularly important because they get access to the
Server instance which is still in the process of
initializing itself.
Change tracking in the file backend used to be based on the
modification time in seconds. When running many syncs quickly (as in
testing), that can lead to changes not being detected when they happen
within a second.
Now the file backend also includes the sub-second part of the
modification time stamp, if available. The revision string
intentionally uses no leading zeros, because that would make it
unnecessarily larger.
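A sketch of the revision calculation, assuming POSIX stat() with
st_mtim (the separator is illustrative):
  #include <sys/stat.h>
  #include <sstream>
  #include <string>
  std::string revision(const struct stat &st)
  {
      std::ostringstream out;
      out << st.st_mtime;
      if (st.st_mtim.tv_nsec) {
          // sub-second part as-is, intentionally without leading zeros
          out << "-" << st.st_mtim.tv_nsec;
      }
      return out.str();
  }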
This change is relevant when upgrading SyncEvolution: most of the
items will be considered "updated" once during the first sync after
the upgrade (or a downgrade) because the revision strings get
calculated differently.
With this patch, the databaseFormat can be used as follows:
[(2.1|3.0)][:][^]propname,propname,...
3.0 = download in vCard 3.0 instead of the default 2.1
3.0:^PHOTO = download in vCard 3.0 format, excluding PHOTO
PHOTO = download in vCard 2.1 format, only the PHOTO
Valid property names are pulled from obexd with ListFilterFields().
Binding a temporary std::string (the result of getDatabase() in this
case) to a const reference is broken, because the temporary instance
will get deleted before the reference.
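The pitfall, illustrated with hypothetical code (not the original):
  #include <string>
  std::string getDatabase(); // returns by value
  class Session {
      const std::string &m_db; // stores a reference
  public:
      Session(const std::string &db) : m_db(db) {}
  };
  // BROKEN: Session s(getDatabase());
  //   the temporary dies at the end of the statement, m_db dangles
  // FIXED:  std::string db = getDatabase(); Session s(db);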
Local IDs across sessions are only useful when we also have useful
revision strings. For debugging it is easier to just enumerate the
contacts. Would be nice to use the same number as in the PBAP session,
but that information is not currently available via obexd (see "PBAP +
two-step download" on Bluez mailing list).
As it stands now, the PBAP backend can only be used in slow syncs
where the engine does the matching. Perhaps that's the right way to do
it, instead of adding redundant code to the backends.
First, FN is not mandatory in PBAP for vCard 2.1. Better use N as key,
which is mandatory. Second, finding the property name must stop at
colon or semicolon, whatever comes first. The previous code included
the colon and the first name component because it stopped at the first
semicolon.
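A minimal sketch of the corrected scan:
  #include <string>
  std::string propertyName(const std::string &line)
  {
      // stop at ':' or ';', whichever comes first:
      // "N;CHARSET=UTF-8:Doe;John" => "N"
      std::string::size_type end = line.find_first_of(":;");
      return line.substr(0, end); // npos: no separator, take whole line
  }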
The mechanism for making D-Bus method calls changed on the master
branch. The normal call operator now does a blocking method call,
which breaks code that depends on the former non-blocking semantics.
All D-Bus method calls in the PBAP backend are now blocking. This
allows removing all callback methods and the start/stop main loop
tricks. The code is a lot easier to read, and problems while executing
the method calls are now properly reported back to the user of the
backend via exceptions.
Populates the local cache with the addressbook given by the remote
device as result of PullAll().
The LUID is built from the full name of the vCard entry, combined with
a suffix that avoids collisions in case of multiple entries with the
same full name, as sketched below.
This instantiates the D-Bus wrapper and thus requests an obex-client
session.
A local cache of the addressbook has been added which will be loaded
when the database is opened.
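A minimal sketch of such collision-free LUID generation (suffix scheme
invented; the backend's actual one may differ):

    #include <set>
    #include <string>

    // Append a numeric suffix until the LUID is unique in this download.
    std::string makeLuid(const std::string &fullName,
                         std::set<std::string> &seen)
    {
        std::string luid = fullName;
        for (int counter = 1; !seen.insert(luid).second; counter++) {
            luid = fullName + "-" + std::to_string(counter);
        }
        return luid;
    }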
Fixed the autotools rules and include statement so that the top-level
configure+Makefile can choose between the two different C++
bindings. The library must be linked explicitly to avoid problems when
linking statically.
In libdbus, the variant must be opened as a sub-iterator with the type
of the actual value as parameter. Do that in the boost::variant
visitor, using a template call operator so that we don't need multiple
versions of it for each type in the variant.
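A sketch of that visitor (the dbus_signature<>() and dbus_append()
helpers, which map C++ types to D-Bus signatures and append the value,
are hypothetical):

    #include <dbus/dbus.h>
    #include <boost/variant.hpp>

    template <class V> const char *dbus_signature();          // e.g. "s"
    template <class V> void dbus_append(DBusMessageIter *, const V &);

    struct AppendVariant : public boost::static_visitor<>
    {
        DBusMessageIter *m_iter;
        AppendVariant(DBusMessageIter *iter) : m_iter(iter) {}

        // One templated call operator covers all types in the variant.
        template <class V> void operator()(const V &value) const
        {
            DBusMessageIter sub;
            // libdbus requires opening the variant container with the
            // signature of the actual value.
            dbus_message_iter_open_container(m_iter, DBUS_TYPE_VARIANT,
                                             dbus_signature<V>(), &sub);
            dbus_append(&sub, value);
            dbus_message_iter_close_container(m_iter, &sub);
        }
    };

    // usage: boost::apply_visitor(AppendVariant(&iter), value);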
If the engine got a parent event with X-SYNCEVOLUTION-EXDATE-DETACHED,
merged it internally and then wrote it back, the
X-SYNCEVOLUTION-EXDATE-DETACHED would have been stored in the CalDAV
server. Now this is avoided by removing all such properties before
storing the new or updated event.
This was previously done (and still is) as an extra precaution in the
code which adds the properties.
(cherry picked from commit ede6e65ccb)
The previous approach (updating the internal cache) had the drawback
that X-SYNCEVOLUTION-EXDATE-DETACHED was also sent to the CalDAV
server. The work of generating it was done in all cases, even if not
needed. Found when running the full test suite.
Now the X-SYNCEVOLUTION-EXDATE-DETACHED properties are only added to
the icalcomponent that is generated for the engine in
readSubItem(). There's still the risk that such an extended VEVENT
will be stored again (read/modify/write conflict resolution), so
further changes will be needed to filter them out.
To ensure that this change doesn't break the intended semantic of
X-SYNCEVOLUTION-EXDATE-DETACHED, the presence of these properties is
now checked in the LinkedItems::testLinkedItemsParentChild test.
(cherry picked from commit 1cd49e9ecd)
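A minimal sketch of such filtering with the libical API (property name
as described above):

    #include <libical/ical.h>
    #include <string.h>

    // Strip all X-SYNCEVOLUTION-EXDATE-DETACHED properties from an
    // event before it is sent to the CalDAV server.
    void removeExdateDetached(icalcomponent *event)
    {
        icalproperty *prop =
            icalcomponent_get_first_property(event, ICAL_X_PROPERTY);
        while (prop) {
            // advance first, removal invalidates the current entry
            icalproperty *next =
                icalcomponent_get_next_property(event, ICAL_X_PROPERTY);
            if (!strcmp(icalproperty_get_x_name(prop),
                        "X-SYNCEVOLUTION-EXDATE-DETACHED")) {
                icalcomponent_remove_property(event, prop);
                icalproperty_free(prop);
            }
            prop = next;
        }
    }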
The check for the _r variants in libical still used an older maximum
version. This might have prevented using them (if not found) or could
have led to a mixture of old and new libecal in the same process
(which probably crashed).
(cherry picked from commit f93b675d77)
Required for Maemo 5 recurrences workaround.
icalproperty_get_value_as_string() is one of those
functions for which a _r variant exists; use that
if possible.
(cherry picked from commit 88b0cc2b62)
When storing an updated detached recurrence, the VEVENT was expected
to contain a RECURRENCE-ID. This might not be the case when the peer
in a local sync (typically the local storage) was unable to store that
property.
Support such a local storage by re-adding the RECURRENCE-ID based on
the available information:
- RECURRENCE-ID value from sub ID
- TZID from parent event's DTSTART (if parent exists) or
current event's DTSTART (otherwise)
Tests for different scenarios (all-day event with date-only RECURRENCE-ID,
with TZID, without TZID) will be committed separately.
(cherry picked from commit 03d3c720ba)
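A sketch of re-creating the property with libical (helper signature
and subID format simplified and invented for illustration):

    #include <libical/ical.h>

    // Re-add RECURRENCE-ID from the sub ID, taking the TZID from the
    // parent's DTSTART if there is a parent, else from the event's own.
    void addRecurrenceId(icalcomponent *event, icalcomponent *parent,
                         const char *subID /* e.g. "20110101T120000" */)
    {
        struct icaltimetype rid = icaltime_from_string(subID);
        icalproperty *prop = icalproperty_new_recurrenceid(rid);
        icalcomponent *src = parent ? parent : event;
        icalproperty *dtstart =
            icalcomponent_get_first_property(src, ICAL_DTSTART_PROPERTY);
        if (dtstart) {
            icalparameter *tzid =
                icalproperty_get_first_parameter(dtstart,
                                                 ICAL_TZID_PARAMETER);
            if (tzid) {
                icalproperty_add_parameter(prop,
                                           icalparameter_new_clone(tzid));
            }
        }
        icalcomponent_add_property(event, prop);
    }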
If the event has a DTSTART with TZID, then the EXDATE also should
have that same TZID. It is uncertain whether the backend provides
the TZID, but even if it does, because of the SIMPLE-EXDATE rule
the value wouldn't be parsed.
(cherry picked from commit 6d80112dc4959e8c4f940b026e0447fcf7256142)
This must be done for regular EXDATE values in the EXDATE array field
(new SIMPLE-EXDATE rule) and for the additional EXDATE values created
for RECURRENCE-IDs in the EXDATES_DETACHED array field (new
HAVE-EXDATE-DETACHED-NO-TZID rule).
Both these rules are activated as subrules by the new MAEMO-CALENDAR
rule, which is set by the Maemo Calendar backend now.
There is one caveat: the SIMPLE-EXDATE rule is also active when
parsing an EXDATE created by the storage, and therefore a TZID will be
ignored, if any is set at all (uncertain).
A vCalendar outgoing script could fix this by adding the DTSTART time
zone to the floating time value in the parsed EXDATEs.
(cherry picked from commit 755638e3c570b531c9bba81f99a8ac710cb25564)
Tell the engine to pass us EXDATEs created for each RECURRENCE-ID in a
detached recurrence. Necessary because the storage and app do not
support UID/RECURRENCE-ID and thus show duplicates without this
workaround.
(cherry picked from commit 165ea81fca9493d0dce55b82d127ad74cf7b56af)
Conflicts:
src/backends/maemo/MaemoCalendarSource.h
Add X-SYNCEVOLUTION-EXDATE-DETACHED properties to main event for each
detached recurrence. Needed by some other SyncEvolution
backends (for example, Maemo 5).
(cherry picked from commit 253adad7d77910b120b4f89a9922dd30516ed3bd)
The Maemo 5 calendar backend does not support the UID/RECURRENCE-ID
semantics. This is problematic in combination with peers which rely on
them to override the main recurring event on specific recurrences,
because the Maemo 5 calendar will show duplicates.
Fixing this inside the calendar backend is hard because it doesn't get
to see all related events at once. The engine has the same
problem. Therefore a workaround is used:
- backends which have that unified access to all related events
add special X-SYNCEVOLUTION-EXDATE-DETACHED properties to the
main event which correspond to the exceptions represented by
detached recurrences, in addition to the regular EXDATE; the
backend must set the HAVE-SYNCEVOLUTION-EXDATE-DETACHED rule when
using PARSETEXTWITHPROFILE()
- the engine parses the special property (only if the
HAVE-SYNCEVOLUTION-EXDATE-DETACHED rule is active!) and stores
the values in the EXDATES_DETACHED field
- this gets passed between SyncEvolution instances in a local
or SyncML sync
- another SyncEvolution backend can request to get either
X-SYNCEVOLUTION-EXDATE-DETACHED or regular EXDATE properties
by setting the HAVE-[SYNCEVOLUTION-]EXDATE-DETACHED rules
The Maemo 5 backend needs to set HAVE-EXDATE-DETACHED. This means that
importing events will add additional EXDATEs which will be sent back
when updating events inside the calendar app, but that is okay: they
are merely redundant. An example event is sketched below.
(cherry picked from commit a4d0909b28721ff3500306f5a29a367ba2f2bffb)
Conflicts:
src/syncevo/MapSyncSource.h
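A hypothetical main VEVENT illustrating the result (values invented):
the plain EXDATE marks a deleted recurrence, while the X- property
stands for a recurrence that is overridden by a detached recurrence.

    BEGIN:VEVENT
    UID:event-1
    DTSTART;TZID=Europe/Berlin:20110101T100000
    RRULE:FREQ=DAILY
    EXDATE;TZID=Europe/Berlin:20110102T100000
    X-SYNCEVOLUTION-EXDATE-DETACHED;TZID=Europe/Berlin:20110103T100000
    END:VEVENT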
Running "syncevolution" without parameters ran into a null
pointer access in a Boost shared pointer, leading to an abort
of the program with a Boost exception.
Fix this by checking whether the backend really has a context
node. Long term we need a solution which provides the necessary
information even when running without a sync config.
Conflicts are because upstream decided to delete their debian directory.
Conflicts:
debian/changelog
debian/compat
debian/control
debian/copyright
debian/docs
debian/rules
debian/syncevolution.install
This is slightly wasteful since these files are not needed by the
syncevolution command line client, but as a percentage of the size of
the binaries, the waste is small.
This resolves a problem reported by Matthijs Kooijman with
syncevolution-http. It is arguably overkill, but it does have the
benefit of minimizing archive size.
echo 'A git checkout is required to generate a ChangeLog.' >&2;\
fi
fi
if ENABLE_EVOLUTION_COMPATIBILITY
# check .so (relevant for modular builds) and main syncevolution binary
# (relevant in that case and for static builds) for dependencies on
# problematic libraries and symbols
#
# ical_strdup is an exception because it is in SyncEvolution.
all_local_installchecks+= toplevel_so_check
toplevel_so_check:
for i in `find $(DESTDIR)/$(libdir)/syncevolution $(DESTDIR)/$(libdir)/libsyncevo* $(DESTDIR)/$(libdir)/libsynthesis* -name *.so` $(DESTDIR)/$(bindir)/syncevolution;\
[Identifies which source revision to use from --with-synthesis-src repository, empty string stands for latest. Default for default --synthesis-src: SYNTHESISSRC_REVISION]),
[build executables which only call Evolution via dlopen/dlsym: this avoids all hard dependencies on EDS shared objects, but might lead to crashes when their ABI changes; use --enable-evolution-compatibility=ical to enable a weaker mode where linking is done normally and only libical.so.0/1 enum differences are worked around (allows patching resulting executables to use either of these two)]),
[libical 1.0 updated its system zone data parsing code so that it produces VTIMEZONEs which are unsuitable for syncing. SyncEvolution ships with a copy of the older code and uses it by default in combination with libical 1.0. Starting with libical v2, icaltzutil_set_exact_vtimezones_support and the code in libical is used again.]),
[enable_icaltz_util="$enableval"
test "$enable_icaltz_util" = "yes" || test "$enable_icaltz_util" = "no" || AC_ERROR([invalid value of --disable-internal-icaltz: $enableval])],
[AC_MSG_ERROR([need at least libsynthesis >= SYNTHESIS_MIN_VERSION; the latest libsynthesis for SyncEvolution is the one from http://cgit.freedesktop.org/SyncEvolution/])])
fi
fi
if test "$need_glib" = "yes"; then
AC_DEFINE(HAVE_BOOST_LOCALE,,[define if the Boost::Locale library is available])
BOOSTLIBDIR=`echo $BOOST_LDFLAGS | sed -e 's/@<:@^\/@:>@*//'`
LDFLAGS_SAVE=$LDFLAGS
if test "x$ax_boost_user_locale_lib" = "x"; then
for libextension in `ls $BOOSTLIBDIR/libboost_locale*.so* $BOOSTLIBDIR/libboost_locale*.dylib* $BOOSTLIBDIR/libboost_locale*.a* 2>/dev/null | sed 's,.*/,,' | sed -e 's;^lib\(boost_locale.*\)\.so.*$;\1;' -e 's;^lib\(boost_locale.*\)\.dylib.*$;\1;' -e 's;^lib\(boost_locale.*\)\.a.*$;\1;'` ; do
for libextension in `ls $BOOSTLIBDIR/boost_locale*.dll* $BOOSTLIBDIR/boost_locale*.a* 2>/dev/null | sed 's,.*/,,' | sed -e 's;^\(boost_locale.*\)\.dll.*$;\1;' -e 's;^\(boost_locale.*\)\.a.*$;\1;'` boost_locale; do