Changed the default sync format of the Evolution Data Server contact
source from vCard 2.1 to vCard 3.0. This is better for SyncEvolution<->SyncEvolution
sync, because it avoids some vCard 2.1 encoding issues in the Synthesis engine (found
during "testConversion").
Also more properties are officially part of the standard.
This uses the new libsynthesis support for adding and checking entries
in the SyncCap to detect per datastore whether UID/RECURRENCE-ID are
truly globally unique and thus can be used to find pairs. The
presence of the property alone is no guarantee of that.
Previously this kind of pairing was enabled only for local sync, a
hack that failed for local backends without UID support (for example,
the Maemo 5 calendar) and for mixtures of datastores with and without
that kind of support.
"1122583000" was randomly chosen as pseudo sync mode. It is a number
because strings confuse Funambol. Note that SYNCMODESUPPORTED() only
works inside the compare script.
Both begin() methods grab the item node and use it for tracking.
That must only be done once, otherwise the second cycle uses an empty
Boost pointer (=> exception).
The main goal is to test CalDAV/CardDAV sources as part
of a SyncML client and/or server. A test involving syncevo-http-server
is now named "<client storage><server storage>":
- edsfile = EDS in client, file in server (used to be syncevohttp)
- davfile = CalDAV/CardDAV in client, file in server (new)
- edsdav = EDS in client, CalDAV/CardDAV in server (new)
For this, WebDAVSourceRegister.cpp must be able to create test sources
which match the client 1/2 sync configs. The client "1" or "2" strings
are passed through the abstract ClientTest into the source A/B create
callbacks. WebDAVSourceRegister.cpp cannot get this directly from
ClientTest because it lives in a plugin which is not necessarily
linked against ClientTest.
A conceptual change is that CLIENT_TEST_EVOLUTION_PREFIX/USER/PASSWORD
no longer override existing config properties. That is necessary
because the shared prefix is too simplistic for WebDAV (needs full URL
in "database"); also helps KDE (needs resource URI). The env variables
and the default "SyncEvolution_Test_" value for the database prefix are
still used if the config does not exist. That is useful to prevent
accidentally running client-test against the default databases.
The nightly setup script might (should!?) be made public to simplify
configuring a server.
Another change is that the user-configurable part of client-test now
lives entirely in the _1/_2 client sync configs and contexts. From
there the
source properties are copied into the Client::Source context each time
client-test runs.
A CalDAV/CardDAV source could not be used as data storage in a SyncML
server or for direct syncing with a SyncML peer (phone, server)
because the source depended on additional information that was taken
from the "target-config" (syncURL, credentials, log level).
Now the source settings are checked first, with the "target-config" as
fallback:
- "database" - must be set to the full URL of the resource,
use --print-databases or the server's documentation to find it
- "databaseUser" - username on server
- "databasePassword" - corresponding password
The log level for WebDAV operations is taken from the global logging
instance as fallback and thus from the log settings of the running
sync.
The downside of this approach is further duplication of the
credentials. This will be fixed eventually by introducing the concept
of "credential references", but that is harder to implement and will
depend on the "databaseUser/Password" change anyway.
XMLParser::initReportParser() allows an empty callback,
but doResponseEnd() didn't check for that. Caused exceptions
and thus failures in the Google 404 workaround (now also fixed
differently, by providing a callback).
The workaround for a 404 from Google Calendar for a GET (sending a
REPORT request matching the item's UID) was broken: first, processing
the result ended up calling the unset responseEnd boost function
pointer, which caused the request to fail. Second, getting multiple
items wasn't handled (data from all items concatenated together was
used).
That can happen in the somewhat unlikely case that some items have a UID
which is a complete superset of the requested UID - not realistic in
real life, but happens during testing.
Fixed by using a responseEnd callback which only stores the right
item data and resets the buffer in all cases.
All user-facing functions (password handling, reading from
stdin) are now in a dedicated "UserInterface" class, instead of
spreading that between ConfigUserInterface and SyncContext.
SyncContext no longer has the misleading "is a" [Config]UserInterface
relationship. Instead it "has a" UserInterface instance, which is set
at runtime. Long term the plan is to remove the need to subclass
SyncContext. In the local sync that was already possible
(LocalTransportContext->LocalTransportUI).
The guarantee that there is always a usable instance, without the
need to check for NULL, is provided by the getUserInterfaceNonNull()
method, which falls back to a dummy instance if necessary.
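The null-object fallback described above can be sketched roughly like
this (class and method names besides getUserInterfaceNonNull() are
invented for illustration and simplified from the real code):

```cpp
#include <memory>
#include <string>

// Abstract interface for all user-facing operations.
class UserInterface {
public:
    virtual ~UserInterface() {}
    virtual std::string askPassword(const std::string &descr) = 0;
};

// Dummy fallback: never blocks on stdin, returns nothing.
class DummyUserInterface : public UserInterface {
public:
    std::string askPassword(const std::string &) override { return ""; }
};

// "Has a" relationship: the UI instance is set at runtime and may be
// missing; getUserInterfaceNonNull() guarantees a usable instance.
class SyncContextSketch {
    std::shared_ptr<UserInterface> m_ui;
public:
    void setUserInterface(const std::shared_ptr<UserInterface> &ui) { m_ui = ui; }
    UserInterface &getUserInterfaceNonNull() {
        static DummyUserInterface dummy;
        return m_ui ? *m_ui : dummy;
    }
};
```

Callers can then use the returned reference directly without NULL
checks.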
Reading data and password from stdin is moved out of the core
libsyncevolution into the syncevolution binary. That way the D-Bus
server and client-test do not accidentally attempt to read from stdin
(has happened when setting up testing incorrectly).
The special semantic of the former RegisterSyncSource::InactiveSource
(invalid pointer of value 1) caused bugs, like using it in
--print-databases (=> segfault) or not being able to store the result
of a createSource() directly in a smart pointer (=> potential leak in
SyncSource::createSource()).
Obviously a bad idea to start with. Replaced with a
RegisterSyncSource::InactiveSource() method which creates a real,
inactive SyncSource instance which can and must be deleted by the
caller.
This is a SyncSource API change for backend developers.
Instead of RegisterSyncSource::InactiveSource, return
RegisterSyncSource::InactiveSource().
Comparisons against RegisterSyncSource::InactiveSource need
to be replaced with a call to the new SyncSource::isInactive().
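The shape of the change can be sketched as follows (simplified class
and a hypothetical factory name standing in for the real
RegisterSyncSource::InactiveSource() method):

```cpp
#include <memory>

// Instead of a magic invalid pointer of value 1, an inactive source is
// now a real object that can be queried and deleted safely.
class SyncSourceSketch {
    bool m_inactive;
public:
    explicit SyncSourceSketch(bool inactive) : m_inactive(inactive) {}
    virtual ~SyncSourceSketch() {}
    bool isInactive() const { return m_inactive; }
};

// Factory replacing the former constant: the caller owns the result,
// so it can be stored directly in a smart pointer without leaking.
std::unique_ptr<SyncSourceSketch> makeInactiveSource() {
    return std::unique_ptr<SyncSourceSketch>(new SyncSourceSketch(true));
}
```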
User visible fixes:
* --print-databases: no longer crashes when EDS or KDE backends
are not usable. Instead it prints "not enabled during compilation or
not usable in the current environment".
* --print-databases: continues with other backends even if
one backend throws an exception, as the KDE backend does
when it cannot find Akonadi. Error messages are printed.
The platform specific code which is of no value unless you run a
specific desktop now gets compiled as part of shared libraries, just
like the storage backends. The advantage is that the rest of
SyncEvolution keeps running even if one of these shared libraries
cannot be loaded due to missing dependencies. syncevolution.org
packages will not declare these dependencies, to allow installing
each package without forcing the installation of unwanted libraries.
Distros can package the platform code separately.
Another advantage is reduced code duplication (password load/store
was duplicated in command line and D-Bus server).
Technically this uses almost the same mechanism as loadable sync
sources. The code resides in src/backends/[kde|gnome], where the
autotool magic finds the *Register.cpp files automatically and
includes them into executables. These files contain global singletons
which, when initialized, connect platform specific code to new signals
in the core (init, password load/save).
The actual code is in the backend libraries. Because
SE_ARG_ENABLE_BACKEND() is not used (in favor of the traditional
enable macros), linking against these libs must be set up by adding
them to the (now slightly misnamed) SYNCSOURCES variable in the
configure fragments.
EDS 3.3 has the "store PHOTO data in local files" patch, i.e. it
automatically copies the data from the base64 encoded property value
into a file and then stores only a URI of that file in the vCard.
SyncEvolution already correctly handles such URIs when syncing.
In that case it is desirable to work with the URI as long as
possible and only inline once the data actually gets sent to a
peer (see d6d6e8ca, "vcard: inline local photo data (BMC #19661)").
But the backend should also inline the PHOTO file data when
exporting as vCard, because that vCard is meant to be a complete
representation of the entire contact, including the PHOTO (it might
be sent to another peer via email, used for a later restore, etc.).
This patch adds that inlining for "raw reads" (aka "give me
your data in your native format") with the help of the EDS 3.3
e_contact_inline_local_photos() utility method. Compile or
runtime checks are necessary to determine whether that method
is available, depending on whether compatibility mode is active.
If EDS doesn't have the function, data will be exported with the URI,
as before. That should be fairly unlikely before EDS 3.3.
The camel backend failed to compile with latest EDS from master
(a 3.3.99 pre-release). Compilation of the camel backend and eplugin
(which might still compile, but wasn't tested anyway in the nightly
builds) is disabled to keep testing of activesyncd for contacts and
calendar working.
ClientSourceRevisions needs to be reset() before adding new items
while doing change detection. CalDAVSource also didn't properly
populate its m_cache in following cycles because the cache was assumed
to be set already.
When a server responds to a PROPFIND for a path with results for some
other path, then SyncEvolution crashed during the search for the
default calendar or address book because of a bug in the code which
was meant to handle that kind of response. Apparently Yahoo Calendar
did that. Now seen again in combination with Radicale 0.6.4.
The debug request failed with Radicale (details below). Now the result
of executing the request is ignored instead of failing the entire
operation.
PROPFIND /public_user/calendar/ HTTP/1.1
[<?xml version="1.0" encoding="utf-8"?>
<propfind xmlns="DAV:"><allprop/></propfind>
]
HTTP/1.0 500 Internal Server Error
Traceback (most recent call last):
File "/usr/lib/python2.7/wsgiref/handlers.py", line 85, in run
self.result = application(self.environ, self.start_response)
File "/usr/lib/python2.7/dist-packages/radicale/__init__.py", line 183, in __call__
status, headers, answer = function(environ, items, content, None)
File "/usr/lib/python2.7/dist-packages/radicale/__init__.py", line 365, in propfind
environ["PATH_INFO"], content, calendars, user)
File "/usr/lib/python2.7/dist-packages/radicale/xmlutils.py", line 184, in propfind
props = [prop.tag for prop in prop_element]
TypeError: 'NoneType' object is not iterable
Radicale 0.6.4 returns no results (neither error nor data) when asked
to deliver specific items with a multiget report. This broke certain
change tracking cases in SyncEvolution (one sync succeeded, items
added or updated later on).
Now SyncEvolution detects the missing responses and falls back to
individual GET requests (slower).
SyncEvolution strips the collection path from hrefs to produce
relative IDs. This assumed that the path ends in a slash, which was
not always the case when users manually set the URL (syncURL,
database). As a result, IDs had a leading slash, which later were
treated as absolute file paths on the server, leading either to
authentication or 404 errors.
Now the path is normalized as a collection when parsing the URL, which
adds the expected slash.
Radicale sends <href> values with more than one slash as separator:
<href>/public_user/calendar/calendar_1//20060406T211449Z-4562-727-1-63@gollum.ics</href>
SyncEvolution then only stripped the part up to and including
".../calendar_1/" and used the rest as ID, which later failed because
the leading slash in "/20060406T211449Z-4562-727-1-63@gollum.ics" led
to requests for an item in the root of the server.
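The two href fixes above (ensuring a trailing slash on the collection
path, and tolerating duplicate slash separators) can be illustrated
with a string-level sketch; the helper names are invented and the real
code is more involved:

```cpp
#include <string>

// Collapse runs of '/' into a single separator.
std::string collapseSlashes(const std::string &path) {
    std::string res;
    for (char c : path) {
        if (c == '/' && !res.empty() && res.back() == '/') continue;
        res += c;
    }
    return res;
}

// Normalize a collection path so that it ends in exactly one slash,
// even if the user entered the URL without one.
std::string normalizeCollection(const std::string &path) {
    std::string res = collapseSlashes(path);
    if (res.empty() || res.back() != '/') res += '/';
    return res;
}

// Strip the collection prefix from an href to get a relative ID
// without a leading slash.
std::string relativeID(const std::string &href, const std::string &collection) {
    std::string h = collapseSlashes(href);
    std::string c = normalizeCollection(collection);
    if (h.compare(0, c.size(), c) == 0) return h.substr(c.size());
    return h; // href not inside the collection; return unchanged
}
```

With this, "/calendar_1//foo.ics" relative to "/calendar_1" yields
"foo.ics" rather than "/foo.ics", avoiding requests against the server
root.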
The previous patch was incomplete, or something changed again on the
Google side. We also need to deal with Google Calendar reporting items
in the etag query and then not being able to deliver their data in
a multiget (404 error). Affects change tracking.
A valid reason for the 404 would be a concurrent DELETE. So let's
do the right thing and remove all information that we still had about
the item, i.e., report it as deleted.
Recently Google Calendar started to report empty VCALENDAR items
with no VEVENTs inside. Sending a DELETE for them didn't seem to
have an effect, or rather, might have created them in the first place.
This has all kinds of weird effects during syncing.
Now such empty items are silently ignored.
Always try to create the address book or calendar database, because even
if there is a source there's no guarantee that the actual database was
created already; the original logic for only setting this when
explicitly requesting a new database therefore failed in some cases.
When using the "backend = addressbook/calendar/todo/memo" aliases, then
the EDS backend must be the only backend that accepts that, to avoid
ambiguities and regressions for traditional usage of SyncEvolution.
These aliases are used when following the current setup instructions
or the GTK sync UI.
Even if SyncEvolution somehow figured out that "addressbook" is meant to
be the KDE addressbook (for example by looking at KDE_FULL_SESSION once
more), fully automatic configuration for KDE still wouldn't work because
the "database" property has to be set explicitly to a URI of an Akonadi
resource. There is no code in the backend or Akonadi to identify the
default resource.
When building modules, the registration code must be part of
the module. Integration testing should always be offered by
the backend. The ifdef is only for the test driver.
readItem() might be asked to retrieve a non-existent item.
The fetch job succeeds in that case, but without returning
any item. Throw the 404 status error in this case.
Due to bitrot the Akonadi backend and KWallet support code no longer
worked. Moved the common code for KApplication initialization into
libsyncevolution's SyncContext::initMain() and fixed autotools rules.
The old code always tried to contact an X server (default constructor
of KApplication). That doesn't seem to be necessary and is avoided now.
Even better might be to skip KApplication entirely and instead use
QCoreApplication and KComponentData, as suggested by
http://api.kde.org/4.x-api/kdelibs-apidocs/kdeui/html/classKApplication.html
KAboutData was incorrectly passed the address of a string pointer, not
the pointer itself.
Testing the Akonadi backend in client-test failed because client-test
always overwrites the "backend" value with
"Test_kde_[contact/event/..]._[1/2]". Now this special case is
detected. The backend then uses the first or second resource,
respectively, that it finds.
When NEON_LIBS=-lneon-gnutls, the sed invocation didn't properly turn
that into -ldl. Instead it used -ldl-gnutls, which caused a link error.
Fixed with an extended regex.
Partial cherry-pick from 856576df99 (without the install check).
Re-insert the recurrence ID if not provided as part of the
data when updating an item. Necessary to support dumb local
storage which doesn't support UID/RECURRENCE-ID.
SyncEvolution now requires that "item not found" errors have
a 404 status code; the Synthesis engine depends on recognizing
such errors in some cases.
For calendars, the backend does all the checking itself and
doesn't even talk to the server.
For contacts, the same is done when deleting (because the
server accepts a delete request for a non-existent item
without complaining); the "not found" error is also detected
when reading (doing that without string comparisons would be
nicer, but doesn't seem to work yet).
The plain "char *" sync key used to point to a static buffer; now it
is allocated dynamically. Transfer ownership of the buffer
to our eptr smart pointer, which will also throw an error if the
pointer is NULL.
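The ownership transfer can be sketched with a standard smart pointer
standing in for SyncEvolution's eptr; getSyncKey() is a hypothetical
stand-in for the libeasclient call that returns the malloc'ed buffer:

```cpp
#include <cstdlib>
#include <cstring>
#include <memory>
#include <stdexcept>
#include <string>

// Stand-in for the library call: returns a heap-allocated buffer
// which the caller must free, or NULL on failure.
char *getSyncKey() {
    const char fixed[] = "sync-key-123";
    char *buf = static_cast<char *>(std::malloc(sizeof(fixed)));
    if (buf) std::memcpy(buf, fixed, sizeof(fixed));
    return buf;
}

// Take ownership immediately so the buffer cannot leak, and turn a
// NULL result into an exception instead of a crash.
std::string takeSyncKey() {
    std::unique_ptr<char, decltype(&std::free)> key(getSyncKey(), &std::free);
    if (!key) throw std::runtime_error("no sync key");
    return std::string(key.get());
}
```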
activesyncd gets compiled completely (which should always succeed) and
then only the required pieces are installed (without the things which
are hard-coded for /usr, because installing those will fail).
Testing is done by running a similar set of client-test tests as for
remote CalDAV/CardDAV servers.
activesyncd is started anew each time client-test is run, done in
the new wrappercheck.sh script. Can be combined with valgrindcheck.sh:
wrappercheck.sh valgrindcheck.sh activesyncd -- valgrindcheck.sh client-test ...
The return code of wrappercheck.sh is the return code of the real command
or the daemon, if the real command succeeded. This way the special 100 return
code from valgrindcheck.sh is returned if the only problems were memory
issues.
When NEON_LIBS=-lneon-gnutls, the sed invocation didn't properly turn
that into -ldl. Instead it used -ldl-gnutls, which caused a link error.
Fixed with an extended regex. Also added an installcheck for this
particular aspect.
This is primarily for ActiveSync where the test failed until support
for removing properties was added to activesyncd. It is also applied
to all other sources, just in case.
The EDS contact backend needs to keep the X-EVOLUTION-FILE-AS property
because EDS keeps adding it, which makes it impossible to test its removal.
At the property level, the isDefault retval exposed whether the
property value was set explicitly in the config or taken from the
property default. That information got lost at the
SyncConfig/SyncSourceConfig level although there are cases where that
is relevant (like providing better error messages, BMC #23783).
Now that level uses the new InitState classes instead of plain
int/bool/std::string return values. Code which assigns these return
values to local variables doesn't need to be adapted. Directly using
the return value in an expression might need some work (typically
adding a get() if the compiler cannot infer the desired
type). Overriding the virtual methods always needs to be adapted.
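A minimal sketch of the InitState idea, based only on the description
above (the real classes have more features):

```cpp
#include <string>

// Wraps a config value together with a flag recording whether it was
// set explicitly or merely comes from the property default.
template <class T> class InitState {
    T m_value;
    bool m_wasSet;
public:
    InitState() : m_value(), m_wasSet(false) {}
    InitState(const T &value, bool wasSet) : m_value(value), m_wasSet(wasSet) {}

    // Implicit conversion: assigning to a plain T keeps working.
    operator const T &() const { return m_value; }

    // Explicit access for expressions where type inference fails.
    const T &get() const { return m_value; }

    bool wasSet() const { return m_wasSet; }
};
```

Code like `std::string db = config.getDatabaseID();` stays unchanged,
while callers that care can now ask `wasSet()` to produce better error
messages.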
If the engine got a parent event with X-SYNCEVOLUTION-EXDATE-DETACHED,
merged it internally and then wrote it back, the
X-SYNCEVOLUTION-EXDATE-DETACHED would have been stored in the CalDAV
server. Now this is avoided by removing all such properties before
storing the new or updated event.
This was previously done (and still is) as an extra precaution in the
code which adds the properties.
(cherry picked from commit ede6e65ccb)
The previous approach (updating the internal cache) had the drawback
that X-SYNCEVOLUTION-EXDATE-DETACHED was also sent to the CalDAV
server. The work of generating it was done in all cases, even if not
needed. Found when running the full test suite.
Now the X-SYNCEVOLUTION-EXDATE-DETACHED properties are only added to
the icalcomponent that is generated for the engine in
readSubItem(). There's still the risk that such an extended VEVENT
will be stored again (read/modify/write conflict resolution), so
further changes will be needed to filter them out.
To ensure that this change doesn't break the intended semantic of
X-SYNCEVOLUTION-EXDATE-DETACHED, the presence of these properties is
now checked in the LinkedItems::testLinkedItemsParentChild test.
(cherry picked from commit 1cd49e9ecd)
Required for Maemo 5 recurrences workaround.
icalproperty_get_value_as_string() is one of those
functions for which a _r variant exists; use that
if possible.
(cherry picked from commit 88b0cc2b62)
When deleting an item on phone and locally, the next sync fails with
ERROR messages about "object not found". This has several reasons:
- libsynthesis super data store attempts to read items
which may or may not exist (triggers ERROR message)
- it checks for 404 but Evolution backends only return a generic
database error (causes sync to fail)
It turned out that ReadItem and DeleteItem are expected to return a
404 status when the requested item does not exist. This patch documents
that (only in the TrackingSyncSource, though), adds tests and fixes
EDS, WebDAV, file and sqlite backends accordingly.
This patch also suppresses the 404 error logging inside DeleteItem(),
while still returning that error code to the Synthesis engine. Not
logging that particular situation is consistent with the previous
SyncEvolution behavior of silently returning successfully when there
wasn't anything to delete.
In addition, more recent libsynthesis versions also no longer do
a ReadItem() call to test for existence. That would still trigger
a spurious (albeit now harmless) ERROR message.
(cherry picked from commit ba289c899f)
Conflicts:
src/backends/webdav/CalDAVSource.cpp
test/ClientTest.cpp
test/ClientTest.h
The default implementation removes one VEVENT after the other, which
is slow for large merged events. Directly removing the entire event
series is faster.
When deleting an item on phone and locally, the next sync fails with
ERROR messages about "object not found". This has several reasons:
- libsynthesis super data store attempts to read items
which may or may not exist (triggers ERROR message)
- it checks for 404 but Evolution backends only return a generic
database error (causes sync to fail)
It turned out that ReadItem and DeleteItem are expected to return a
404 status when the requested item does not exist. This patch documents
that (only in the TrackingSyncSource, though), adds tests and fixes
EDS, WebDAV, file and sqlite backends accordingly.
This patch also suppresses the 404 error logging inside DeleteItem(),
while still returning that error code to the Synthesis engine. Not
logging that particular situation is consistent with the previous
SyncEvolution behavior of silently returning successfully when there
wasn't anything to delete.
In addition, more recent libsynthesis versions also no longer do
a ReadItem() call to test for existence. That would still trigger
a spurious (albeit now harmless) ERROR message.
If the engine got a parent event with X-SYNCEVOLUTION-EXDATE-DETACHED,
merged it internally and then wrote it back, the
X-SYNCEVOLUTION-EXDATE-DETACHED would have been stored in the CalDAV
server. Now this is avoided by removing all such properties before
storing the new or updated event.
This was previously done (and still is) as an extra precaution in the
code which adds the properties.
The previous approach (updating the internal cache) had the drawback
that X-SYNCEVOLUTION-EXDATE-DETACHED was also sent to the CalDAV
server. The work of generating it was done in all cases, even if not
needed. Found when running the full test suite.
Now the X-SYNCEVOLUTION-EXDATE-DETACHED properties are only added to
the icalcomponent that is generated for the engine in
readSubItem(). There's still the risk that such an extended VEVENT
will be stored again (read/modify/write conflict resolution), so
further changes will be needed to filter them out.
To ensure that this change doesn't break the intended semantic of
X-SYNCEVOLUTION-EXDATE-DETACHED, the presence of these properties is
now checked in the LinkedItems::testLinkedItemsParentChild test.
Required for Maemo 5 recurrences workaround.
icalproperty_get_value_as_string() is one of those
functions for which a _r variant exists; use that
if possible.
If the event has a DTSTART with TZID, then the EXDATE also should
have that same TZID. It is uncertain whether the backend provides
the TZID, but even if it does, because of the SIMPLE-EXDATE rule
the value wouldn't be parsed.
This must be done for regular EXDATE values in the EXDATE array field
(new SIMPLE-EXDATE rule) and for the additional EXDATE values created
for RECURRENCE-IDs in the EXDATES_DETACHED array field (new
HAVE-EXDATE-DETACHED-NO-TZID rule).
Both these rules are activated as subrules by the new MAEMO-CALENDAR
rule, which is set by the Maemo Calendar backend now.
There is one caveat: the SIMPLE-EXDATE rule is also active when parsing
an EXDATE created by the storage and therefore TZID will be ignored,
if any is set at all (uncertain).
A vCalendar outgoing script could fix this by adding the DTSTART time
zone to the floating time value in the parsed EXDATEs.
Tell the engine to pass us EXDATEs created for each RECURRENCE-ID in a
detached recurrence. Necessary because the storage and app do not
support UID/RECURRENCE-ID and thus show duplicates without this
workaround.
Add X-SYNCEVOLUTION-EXDATE-DETACHED properties to main event for each
detached recurrence. Needed by some other SyncEvolution
backends (for example, Maemo 5).
?SyncEvolution=NoCTag as part of the syncURL disables change tracking
based on the CTag. Useful for simplifying the test logs because each
source instantiation starts with a full dump of the items.
Exceptions reported by CppUnit only contained one file+line pair. If
that location was called multiple times inside a larger test, then it
was impossible to tell where it was called. The new assertion macros
and in particular CT_ASSERT_NO_THROW() solve this problem by catching
exceptions, adding the current file+line information and then
rethrowing an extended exception. When CppUnit finally logs the
problem, it will contain a complete call stack.
For this to work, every single line which might throw an exception
must be wrapped in a macro. Entering and leaving the line is logged
together with the wrapped expression as part of the test .log file.
doSync() is handled as a special case and gets the file+line info of
its caller via parameters.
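The catch-and-rethrow idea can be sketched with a simplified macro
(the real CT_ASSERT_NO_THROW integrates with CppUnit and its assertion
types; this stand-alone version only demonstrates how file+line pairs
accumulate into a call stack):

```cpp
#include <sstream>
#include <stdexcept>
#include <string>

// Wrap a statement: if it throws, prepend the current file+line to the
// message and rethrow, so the final exception carries every location
// it passed through.
#define CT_ASSERT_NO_THROW_SKETCH(_x) \
    do { \
        try { _x; } \
        catch (const std::exception &ex) { \
            std::ostringstream os; \
            os << __FILE__ << ":" << __LINE__ << " <- " << ex.what(); \
            throw std::runtime_error(os.str()); \
        } \
    } while (false)
```

Nesting the macro at each level of a test produces a chain of
"file:line <- file:line <- original message" entries when the failure
is finally logged.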
New logging macros are introduced and used in LocalTests::testChanges:
instead of writing comments, call the logging macros and the string
will appear also in the .log file of the test.
Further areas for improvement:
- use CLIENT_TEST_LOG() everywhere
- reduce file names so that just the base name is logged
- convert .log file into HTML with links into session logs and
ClientTest.cpp source file
When storing an updated detached recurrence, the VEVENT was expected
to contain a RECURRENCE-ID. This might not be the case when the peer
in a local sync (typically the local storage) was unable to store that
property.
Support such a local storage by re-adding the RECURRENCE-ID based on
the available information:
- RECURRENCE-ID value from sub ID
- TZID from parent event's DTSTART (if parent exists) or
current event's DTSTART (otherwise)
Tests for different scenarios (all-day event with date-only RECURRENCE-ID,
with TZID, without TZID) will be committed separately.
(cherry picked from commit 03d3c720ba)
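The reconstruction rule above can be illustrated at the string level
(the real code operates on libical components; the helper name and the
property formatting here are simplified assumptions):

```cpp
#include <string>

// Rebuild a RECURRENCE-ID property line from the sub ID and, if known,
// the TZID taken from the parent's (or current event's) DTSTART.
std::string makeRecurrenceID(const std::string &subID, const std::string &tzid) {
    // A date-only value like "20110101" gets VALUE=DATE and no TZID.
    if (subID.size() == 8) {
        return "RECURRENCE-ID;VALUE=DATE:" + subID;
    }
    if (tzid.empty()) {
        return "RECURRENCE-ID:" + subID; // floating or UTC date-time
    }
    return "RECURRENCE-ID;TZID=" + tzid + ":" + subID;
}
```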
If the event has a DTSTART with TZID, then the EXDATE also should
have that same TZID. It is uncertain whether the backend provides
the TZID, but even if it does, because of the SIMPLE-EXDATE rule
the value wouldn't be parsed.
(cherry picked from commit 6d80112dc4959e8c4f940b026e0447fcf7256142)
This must be done for regular EXDATE values in the EXDATE array field
(new SIMPLE-EXDATE rule) and for the additional EXDATE values created
for RECURRENCE-IDs in the EXDATES_DETACHED array field (new
HAVE-EXDATE-DETACHED-NO-TZID rule).
Both these rules are activated as subrules by the new MAEMO-CALENDAR
rule, which is set by the Maemo Calendar backend now.
There is one caveat: the SIMPLE-EXDATE rule is also active when parsing
an EXDATE created by the storage and therefore TZID will be ignored,
if any is set at all (uncertain).
A vCalendar outgoing script could fix this by adding the DTSTART time
zone to the floating time value in the parsed EXDATEs.
(cherry picked from commit 755638e3c570b531c9bba81f99a8ac710cb25564)
Calendar and generic ActiveSync source now use the same logic in
beginSync(). New is a workaround for Google, which seems to require
that eas_sync_handler_get_items() gets called twice at the start of a
slow sync.
When storing an updated detached recurrence, the VEVENT was expected
to contain a RECURRENCE-ID. This might not be the case when the peer
in a local sync (typically the local storage) was unable to store that
property.
Support such a local storage by re-adding the RECURRENCE-ID based on
the available information:
- RECURRENCE-ID value from sub ID
- TZID from parent event's DTSTART (if parent exists) or
current event's DTSTART (otherwise)
Tests for different scenarios (all-day event with date-only RECURRENCE-ID,
with TZID, without TZID) will be committed separately.
Tell the engine to pass us EXDATEs created for each RECURRENCE-ID in a
detached recurrence. Necessary because the storage and app do not
support UID/RECURRENCE-ID and thus show duplicates without this
workaround.
(cherry picked from commit 165ea81fca9493d0dce55b82d127ad74cf7b56af)
Conflicts:
src/backends/maemo/MaemoCalendarSource.h
Add X-SYNCEVOLUTION-EXDATE-DETACHED properties to main event for each
detached recurrence. Needed by some other SyncEvolution
backends (for example, Maemo 5).
(cherry picked from commit 253adad7d77910b120b4f89a9922dd30516ed3bd)
The unconditional change of calling StartDataRead (aka beginSync)
early enough so that ActiveSync can force a slow sync had negative
consequences, because now it was called before the peer was contacted
and credentials were accepted:
- broke the "sync started successfully" logic, resulting in
notifications for syncs which were supposed to be retried silently
(showed up in TestSessionAPIsDummy.testAutoSyncNetworkFailure)
- database dumps were done even if not needed because sync never
starts
Now all backends are called as before unless they explicitly ask for
the early call. The ActiveSync backend does that. The downsides of
that approach do not matter much because syncing will start okay and
dumping of data is typically disabled on that side of a local sync.
The ActiveSync backend now detects the daemon's "Sync error: Invalid
synchronization key" and falls back to a slow sync. This is only done
if the sync key was already invalid when beginSync()
started. Otherwise something fishy must be going on and it seems
prudent to rather abort the sync with an error.
It would be nice if this special error could be detected without
having to resort to a string comparison, but this is not currently
supported by libeasclient because error codes are not yet part of the
API (BMC #23618).
Various backends (Evolution, ActiveSync, WebDAV) depend on
libical. This wasn't done correctly, with the result that
--enable-activesync without --enable-evolution and --enable-webdav
failed to compile because ENABLE_ICAL was unset by the WebDAV
configure.
Now backends can request libical support by setting need_ical="yes",
then later LIBICAL_LIBS/CFLAGS, the ENABLE_ICAL define and the
automake conditional will be set accordingly. Similar to need_glib="yes".
When testing against a server which has an at sign in the principal,
the principal was checked over and over again, because the "already
seen" comparison failed for foo@bar != foo%40bar.
Fixed by pushing normalization into the "tried" instance and doing the
normalization in all cases.
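The comparison failure comes down to percent-encoding: "foo@bar" and
"foo%40bar" name the same principal. A minimal sketch of the
normalization (the helper is invented; the real code normalizes inside
the "tried" instance):

```cpp
#include <cstdlib>
#include <string>

// Decode %XX escapes so that differently-encoded forms of the same
// path compare equal in the "already seen" check.
std::string percentDecode(const std::string &in) {
    std::string out;
    for (size_t i = 0; i < in.size(); ++i) {
        if (in[i] == '%' && i + 2 < in.size()) {
            out += static_cast<char>(
                std::strtol(in.substr(i + 1, 2).c_str(), nullptr, 16));
            i += 2;
        } else {
            out += in[i];
        }
    }
    return out;
}
```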
Added additional logging of collection search. The fallback if the
display name is not found is changed to an empty string, instead of
using a non-translated string.
The initial limit of checking 10 candidates was too restrictive. Now
increased to 1000. It is unclear whether that is small enough to be
useful, but at least it shouldn't trigger unexpectedly anymore.
When a collection is identified as one we searched for, its
properties are no longer examined. If the search started with the
principal, then we are not going to learn more from the collection's
properties. If it started with a specific collection, then the search
will stop after reporting that collection.
Not listing further collections when searching for all databases is
probably what the user wants. Otherwise they should have used the
principal or simply the host name.
These properties may contain more than one href. eGroupware does this
for address books when the user configures it to expose more than one
address book in the web interface for GroupDAV.
Now SyncEvolution adds all of these URLs as candidates. Ordering is
preserved, so the first URL in the property is also visited first (=
becomes the default).
eGroupware gives just a path as Location for a redirect. Accept that
by copying the missing pieces of information (scheme, host, port) from
the current session.
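The fix for path-only redirects can be sketched as follows (a minimal helper with an assumed name, not the actual SyncEvolution implementation): if the Location header contains only a path, scheme, host and port are copied from the current session's URL.

```cpp
#include <string>

// Sketch: resolve a Location header against the current session URL.
// Only the path-only case (as sent by eGroupware) is handled specially;
// absolute URLs are returned unchanged.
std::string resolveRedirect(const std::string &current,
                            const std::string &location)
{
    if (!location.empty() && location[0] == '/') {
        // find end of "scheme://host[:port]" in the current URL
        std::string::size_type hostStart = current.find("://");
        std::string::size_type pathStart =
            current.find('/', hostStart == std::string::npos ? 0 : hostStart + 3);
        // substr(0, npos) returns the whole string if there is no path
        return current.substr(0, pathStart) + location;
    }
    return location; // already absolute
}
```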
Typically all code paths which add to the candidates list first check
whether the candidate was already visited. This is useful
because it avoids growing that list unnecessarily and makes logging
slightly more informative (shows skipped candidates at the time when
they are found).
But if a candidate was accidentally added twice, it would be used
twice. So let's check again when picking a candidate from the list,
just to be sure.
When given just the host name and thus starting with the / path, also
look at .well-known URLs inside that host. Previously the code relied
on a redirect at that path, which happened to work for eGroupware but
isn't standardized.
Typically calendar/addressbook-home-set points to a collection which
contains calendar and addressbook collections. When scanning for one
kind of collection it is possible to ignore all collections of the
other kind, because those are guaranteed not to contain anything of
the kind being searched for.
This patch does that by accepting the right kind of collection and
those which cannot be ruled out for sure, which currently are CalDAV
and CardDAV collections.
The patch also scans sub-collections in alphabetical order, which
makes the result more deterministic.
Based on the previous code, which stopped when finding the first
collection. Now that code is a utility function which reports all
resources to a callback until told to stop by that callback.
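A rough sketch of that callback-driven listing (names, types and signatures are assumptions, not the actual SyncEvolution API): each resource is reported to the callback, which returns false to stop the scan.

```cpp
#include <functional>
#include <string>
#include <vector>

// Sketch: the callback decides when to stop. Returning false after the
// first match reproduces the old "stop at first collection" behavior;
// always returning true reports all resources.
typedef std::function<bool (const std::string &href)> ResourceVisitor;

void listResources(const std::vector<std::string> &hrefs,
                   const ResourceVisitor &visitor)
{
    for (std::vector<std::string>::const_iterator it = hrefs.begin();
         it != hrefs.end();
         ++it) {
        if (!visitor(*it)) {
            break; // callback asked us to stop
        }
    }
}
```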
During testing, removing all items is done via a special call in
TestingSyncSource. It used to be a utility method which fell back to
the normal SyncSource API.
This patch changes several things:
- Reversed order in which items are deleted in that utility method,
because removing children (= longer IDs) first tends to be supported
better by servers (bug in CalDAV server, but still...).
- Allow backends to implement their own removeAllItems().
- Implement that in CalDAV + MapSyncSource as removing the merged
items directly, instead of using a sequence of PUT+final DELETE.
Found while testing with Bedework CalDAV server. Makes testing more
robust and efficient.
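The reversed deletion order in the utility method can be sketched like this (function name and signature are hypothetical): iterating the luid list in reverse removes children (= longer IDs, listed after their parent) before the parents.

```cpp
#include <functional>
#include <string>
#include <vector>

// Sketch: luids are assumed to list parents before their children;
// deleting in reverse order therefore removes children first, which
// servers tend to handle better.
void removeAllItemsFallback(const std::vector<std::string> &luids,
                            const std::function<void (const std::string &)> &deleteItem)
{
    for (std::vector<std::string>::const_reverse_iterator it = luids.rbegin();
         it != luids.rend();
         ++it) {
        deleteItem(*it);
    }
}
```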
Conflicts:
configure.ac
test/ClientTest.cpp
test/testcases/eds_event.ics.funambol.tem.patch
The conflicts are caused by the version number and by the updated test
cases and local delete optimization, respectively.
ActiveSync backend had to be adapted to modified InsertItemResult: now
it requests a merge when it detects duplicates, like the CalDAV backend
already did on the 1.2 branch.
Affects SyncEvolution 1.2: when bailing out of
EvolutionCalendarSource::retrieveItem() when EDS returned the wrong
component, that component wasn't freed (recent change). Fixed by
making it owned by a smart pointer as soon as possible.
eGroupware does not include ETags in quotes. SyncEvolution
unconditionally stripped the first and last character, making ETags
shorter than they really were. Now it strips them only if both are
quotation marks.
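A minimal sketch of the corrected ETag handling (helper name assumed): the surrounding characters are stripped only when both are double quotes.

```cpp
#include <string>

// Sketch: strip surrounding quotes only if both are present, so that
// unquoted ETags (as sent by eGroupware) are kept intact.
std::string stripETagQuotes(const std::string &etag)
{
    if (etag.size() >= 2 &&
        etag[0] == '"' &&
        etag[etag.size() - 1] == '"') {
        return etag.substr(1, etag.size() - 2);
    }
    return etag;
}
```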
e_cal_create_object() of a detached recurrence fails with "UID already
exists" if there is any other event with that UID, regardless of whether
it is the parent or another detached recurrence.
When adding new items, SyncEvolution did not handle the case where
another detached recurrence, but not the parent, already existed. The
check for "UID used" must check for any item with that UID, not just
the parent.
When removing the parent, SyncEvolution temporarily removes detached
recurrences and recreates them later (can't be done differently with
older EDS). Recreating a second detached recurrence with
e_cal_create_object() then failed, must use e_cal_modify_object() for
it and all following recurrences.
Finally, EDS itself is confused when asked for a UID without
RECURRENCE-ID, as it happens during such a removal: instead of
returning the information that the parent doesn't exist (which
SyncEvolution handles), it returns the first child (which broke change
tracking by adding an entry for the non-existent parent). Worked
around by doing a sanity check on the returned data.
Because these additional changes would have been very slow with the
list of luid strings, m_allLUIDs is now a more complex map from UID to
set of RECURRENCE-IDs. This also makes some other code more
efficient (O(log n) instead of O(n), less parsing of luid strings).
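The new m_allLUIDs layout can be sketched as follows (the representation of the parent as an empty RECURRENCE-ID and the helper names are assumptions): a map from UID to the set of RECURRENCE-IDs makes both "any item with this UID?" and "does the parent exist?" O(log n) lookups.

```cpp
#include <map>
#include <set>
#include <string>

// Sketch: UID -> set of RECURRENCE-IDs, with the parent represented
// by an empty RECURRENCE-ID string.
typedef std::map<std::string, std::set<std::string> > AllLUIDs;

// "Is any item with this UID known?" - no scanning or parsing of
// luid strings needed.
bool uidInUse(const AllLUIDs &luids, const std::string &uid)
{
    return luids.find(uid) != luids.end();
}

// Check specifically for the parent (= no RECURRENCE-ID).
bool parentExists(const AllLUIDs &luids, const std::string &uid)
{
    AllLUIDs::const_iterator it = luids.find(uid);
    return it != luids.end() && it->second.count("") > 0;
}
```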
Returning 207 = DB_DataMerged when an item replaced an existing one
wasn't correct (the data wasn't really merged) and also wasn't handled
as intended by the Synthesis engine. Now a backend can properly report what it did:
- data fully replaced
- data was merged
- data needs to be merged by engine
The last option is used by EDS and CalDAV when an add<->add conflict
is detected by the backends. In that case the Synthesis engine will
read the local item, merge it with the incoming item and write back
the result.
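The extended result reporting might look roughly like this (the enum and its names are an illustration of the three outcomes listed above, not necessarily the actual API):

```cpp
// Sketch of the per-item result a backend can now report instead of the
// blanket 207 = DB_DataMerged status code.
enum InsertItemResultState {
    ITEM_REPLACED,    // data fully replaced an existing item
    ITEM_MERGED,      // backend merged incoming and existing data itself
    ITEM_NEEDS_MERGE  // engine must read the local item and merge
};
```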
At the moment the tests assume that the more recent item wins
completely. But our field list config specifies a merge=lines mode for
some fields, which results in concatenating these fields in the merged
item. Thus the tests currently fail. Need to decide which behavior is
desired.
The memset/memcpy of the embedded boost::function instances inside the
old ClientTestConfig was causing segfaults at the end of a client-test
run if compiled with optimization.
Therefore this commit turns ClientTestConfig into a proper class
containing members which initialize themselves (Bool wrapper class,
std::string), thus memset is no longer needed or used. Also added the
standard m_ prefix.
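A possible shape of such a self-initializing Bool wrapper (a sketch, not necessarily the exact class): it behaves like bool but always starts out as false, so zero-filling the config with memset becomes unnecessary.

```cpp
// Sketch: default-constructs to false, converts implicitly to bool,
// and can be assigned from bool like a plain member.
class Bool {
    bool m_value;

public:
    Bool(bool value = false) : m_value(value) {}
    operator bool () const { return m_value; }
    Bool &operator = (bool value) { m_value = value; return *this; }
};
```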
m_numItems is gone; it was never set by any backend anyway and was not
even guaranteed to be consistent within one test. Now
CLIENT_TEST_NUM_ITEMS is read by defNumItems() each time it is needed.
Removed "const char *" strings from method parameters. This revealed
that config.itemType (a const char *) was incorrectly passed to
insert() where the boolean "relax" parameter should have been given.
Replaced by "false" (= strict checking) even though the old code
must have run with an implicit "true" (= relaxed checking). Let's see
whether any tests fail now.
(cherry-picked from commit 6399bd8181)
Several required libraries were not linked to directly, which leads to
linker issues with binutils/ld on Debian Testing.
(cherry picked from commit 7d12eaf3a5)
The target config cannot be shared between different sync configs
unless those configs select a different gconf account config in their
"username" property. The reason is that change tracking is tied to
that gconf account, not the sync config itself.
Related to BMC #22881 "Invalid synchronization key". Shows that
endSync() isn't called when beginSync() already fails. Need
a different way of resetting the sync key.