The code which caught the 404 status had the unintended side effect of
also catching 401 errors and then not reporting them. Fixed by
handling the exception as in the default "Exception" case if it does
not fit the 404 special case.
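The pattern described above can be sketched as follows; the exception type and helper names are hypothetical stand-ins for the real transport exceptions, not the actual SyncEvolution code:

```cpp
#include <functional>
#include <stdexcept>
#include <string>

// Hypothetical status exception carrying the HTTP status code.
struct StatusException : std::runtime_error {
    int code;
    StatusException(int c, const std::string &msg) :
        std::runtime_error(msg), code(c) {}
};

// Simulates a fetch that fails with the given HTTP status (0 = success).
bool simulateFetch(int status)
{
    if (status) {
        throw StatusException(status, "HTTP error");
    }
    return true;
}

// Returns true if the item exists, false only for the 404 special case.
// Any other status (401, 403, ...) is rethrown so that it reaches the
// generic "Exception" handler instead of being silently swallowed.
bool checkItemExists(const std::function<bool()> &fetch)
{
    try {
        return fetch();
    } catch (const StatusException &ex) {
        if (ex.code == 404) {
            return false;   // expected: item simply does not exist
        }
        throw;              // 401 etc.: must be reported
    }
}
```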
Saw an unexpected 401 error in the middle of a sync. At that point
the credentials should have been recognized as valid, but somehow
weren't. Added debug output to track down the problem.
Updating an item must be done with the same UID that was originally
set by the server. The Maemo 5 backend replaces the UID received from
the server with its own sequential numbering of events in the SQLite
database.
This commit is an attempt to catch this situation and restore the
correct UID before sending the updated item content to the server.
The values in the key/value pairs now start with a slash. The intention
is that if the content ever has to be extended, it can be done by
adding a version number or something like that in front of the
slash. Right now, that version is implicitly empty. Without the slash
it wouldn't be possible to distinguish the future
version number from the revision.
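A minimal sketch of the versioning idea, assuming a hypothetical splitVersionedValue() helper (not the actual code): everything before the first slash is the version, which today is always empty.

```cpp
#include <string>
#include <utility>

// Hypothetical parser for the slash-prefixed node values described above.
// Today every value is "/payload" (implicit empty version); a future
// format could be "2/payload".
std::pair<std::string, std::string> splitVersionedValue(const std::string &value)
{
    std::string::size_type slash = value.find('/');
    if (slash == std::string::npos) {
        // No slash at all: legacy value without version information.
        return std::make_pair(std::string(), value);
    }
    return std::make_pair(value.substr(0, slash),
                          value.substr(slash + 1));
}
```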
All members except for the integer port were auto-initialized.
This commit fixes valgrind warnings in WebDAV in isEmpty()
by initializing the port in a new constructor.
Trying to reuse the TrackingSyncSource change tracking was a dead-end
that just led to horribly complex scaffolding classes (like the key/value
node which had to keep revisions synchronized and mix in UID).
This is a complete rewrite where change detection is done in
MapSyncSource, using a similar approach as in TrackingSyncSource. Some
of the session life cycle is now cut-and-pasted from
TrackingSyncSource (primarily checking the overall database
revision). Eventually this common logic might get refactored into a
SyncSource utility class, but for now let's keep it separate.
This solution is much cleaner and uses simpler key/value storage
with one item-<mainid> entry for each merged item, mapping to the
revision, UID, and list of subids.
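The per-item entry could be encoded roughly like this; the struct and the '/'-separated serialization are illustrative assumptions, not the actual MapSyncSource format:

```cpp
#include <set>
#include <sstream>
#include <string>

// One "item-<mainid>" entry: revision, UID and the set of subids.
struct SubRevisionEntry {
    std::string m_revision;
    std::string m_uid;
    std::set<std::string> m_subids;
};

// Hypothetical serialization into a single key/value store value.
std::string encodeEntry(const SubRevisionEntry &entry)
{
    std::ostringstream out;
    out << entry.m_revision << '/' << entry.m_uid;
    for (std::set<std::string>::const_iterator it = entry.m_subids.begin();
         it != entry.m_subids.end();
         ++it) {
        out << '/' << *it;
    }
    return out.str();
}
```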
The multiget result processing had two problems: the etag wasn't
requested and thus not stored, and existing subids were not removed
for already existing items.
The comparison between full path and local ID and between etag and
revision was completely mixed up, with the result that all items were
always loaded anew even if unchanged.
Fixed by storing the reduced luid+revision mapping and using it in all
further comparisons against the SubRevisionsMap_t.
The API was intentionally changed to notify backend developers of the
new possibilities. The SQLite backend hadn't been adapted and failed
to compile...
The recent commit for storing UID in the tracking source together with
revision information was incomplete.
First, the transformation from %d-%s to %d/%s/%s hadn't been finished
and wouldn't have worked because one return parameter of
splitMainIDValue() was not declared as reference (found via compiler
warnings in the nightly build by g++; I was using clang, which didn't
complain).
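This class of bug can be illustrated as follows; splitMainIDValue() is sketched here with two fields and an invented signature, since the real one isn't shown. An out-parameter accidentally passed by value compiles cleanly but never updates the caller's variable:

```cpp
#include <sstream>
#include <string>

// Bug: 'uid' is passed by value, so the caller's variable stays empty.
void splitBroken(const std::string &value,
                 std::string &revision,
                 std::string uid /* missing '&' */)
{
    std::istringstream in(value);
    std::getline(in, revision, '/');
    std::getline(in, uid, '/');
}

// Fix: both return parameters are declared as references.
void splitFixed(const std::string &value,
                std::string &revision,
                std::string &uid)
{
    std::istringstream in(value);
    std::getline(in, revision, '/');
    std::getline(in, uid, '/');
}
```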
Second, there were cases that led to MapConfigNode::setProperty()
being called without having m_uids set first. For example,
listAllItems() + detectChanges(). Now all methods which return luids also
set UID values in the MapConfigNode. In addition, setProperty() is
more careful about preserving existing UID information.
This incident shows that coding on a plane is no good, and that the
reuse of TrackingSyncSource as base for MapSyncSource really was a
mistake because of the complexity. It should really be rewritten.
The recent commit for the multiget REPORT used the temporary
string from stringstream after it was freed, because Neon::Request
doesn't copy the request body. Must make a copy in the caller.
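A self-contained sketch of the bug class, with a mock Request standing in for Neon::Request (which, like neon's ne_set_request_body_buffer(), remembers only the pointer): request.setBody(out.str().c_str()) would hand over a pointer into the temporary string returned by str(), destroyed at the end of the full expression. The fix is a named copy in the caller that outlives the request.

```cpp
#include <sstream>
#include <string>

// Mock stand-in for Neon::Request: it does not copy the body,
// it only remembers the pointer.
struct Request {
    const char *m_body;
    Request() : m_body(0) {}
    void setBody(const char *body) { m_body = body; }
};

// Safe pattern: 'body' is owned by the caller and stays alive for the
// whole duration of the request.
void sendMultiget(Request &request, const std::string &body)
{
    request.setBody(body.c_str());
}
```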
This commit implements updateAllSubItems(). A first query
retrieves the etags of all items. A comparison determines
removed items and those which are new or updated. Those items
are then fetched with a multiget REPORT and used to complete
the cache and item list.
404 errors, as they are possible when Google Calendar gets
confused, are intentionally not handled. The rationale is
that a slow sync has a suitable workaround (use data from the
query REPORT) and hopefully the problem will occur less often
for future calendar changes.
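The comparison step above can be sketched like this (names are illustrative, not the actual updateAllSubItems() code): the cached luid-to-etag mapping is compared against the freshly queried one to find removed items and the set that must be fetched via the multiget REPORT.

```cpp
#include <map>
#include <set>
#include <string>

// Given the cached luid -> etag mapping and the current one from the
// etag query, compute removed items and the items to fetch.
void compareEtags(const std::map<std::string, std::string> &cached,
                  const std::map<std::string, std::string> &current,
                  std::set<std::string> &removed,
                  std::set<std::string> &mustFetch)
{
    for (const auto &entry : cached) {
        if (!current.count(entry.first)) {
            removed.insert(entry.first);        // gone on the server
        }
    }
    for (const auto &entry : current) {
        auto it = cached.find(entry.first);
        if (it == cached.end() ||               // new item
            it->second != entry.second) {       // etag changed => updated
            mustFetch.insert(entry.first);
        }
    }
}
```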
The handling of href and etag data had to be done by all
classes using initResponseHandler(). Now it is in the XML
handler itself.
In addition, users of the class process the gathered results
at the end of the results XML element. At that point, all
the response parts are guaranteed to be processed.
There may be backends (like CalDAV) where updating existing
information about items is more efficient than reading them
from scratch. This commit adds updateAll[Sub]Items in SyncSourceRevisions/TrackingSyncSource resp. the SubSyncSource API for that purpose.
It is called when some information exists (without it, the attempt
would be useless) and when change tracking may reuse existing
information. So an explicit slow sync still reads all items and wipes
out all old information, just in case that it is somehow
broken. Should not matter in most cases, though, because wrong luids
and mismatching revisions will be detected and corrected. The only
situation where incorrect information would matter is correct
luid+revision with wrong meta information attached in MapSyncSource
(subids, UID, ...).
The testLinkedItem tests updated items without actually
modifying their content. That worked as long as LAST-MODIFIED
increased, but because of the recent speed-ups that was no
longer always the case. The test then failed because the ETag
of a CalDAV resource didn't change on the Apple Calendar Server.
This commit fixes that by modifying the item content when it is
to be used as an update.
One test didn't take that into account and continued using the
unmodified item content. Fixed.
The UID for each CalDAV resource is required to handle additions to
that meeting series correctly. The previous commits for using cached
results broke the linked items tests because the UID wasn't available
unless the calendar was read completely.
This commit fixes that by caching the UID in the rev- entries of the
tracking node. It also cleans up the naming to avoid confusion around
luid (resource path), uid (part of the item data), and subid (again
part of the item data, but needed to address individual VEVENTs).
Note that the code has become fairly complex due to reusing the
TrackingSyncSource as base class of MapSyncSource. In retrospect it
seems better to not reuse that particular logic and instead do all of
the change tracking directly in MapSyncSource. But that is a more
intrusive change and thus not done at this time.
This commit changes the trade-off between "duration of sync" and
"memory consumption" in favor of "faster sync": when all items need to
be listed (currently the case in an initial sync or if the server's
database has changed, according to the CTag), then the complete
calendar is downloaded in one go with REPORT and cached in memory.
It used to be downloaded already before without caching it. The reason
was that there was a (futile) attempt at reducing the download size by
requesting only the minimum set of properties. It was futile because
all servers ignored that hint and sent complete items. Therefore, and
because the case isn't occurring as often as it used to, it makes sense
to avoid the expensive (latency!) GET requests in favor of using more
memory.
Each sync involving WebDAV did a complete data dump (dumpData) and
showed differences (printChanges) for the WebDAV side of the sync.
This could be used to restore the WebDAV server after a sync, but it
seems a bit excessive and not useful for most users because the same
is also done on the local side of the sync.
Therefore this patch sets these two options to off in the
configuration templates.
There was a GET before a DELETE, for two reasons:
- If the item still has VEVENTs left, a PUT instead of
a DELETE is necessary. Now the code checks for this
special case and only GETs the item when a PUT follows.
- Providing a description for debug output required
access to the item content. Now the item content is
only used if already loaded.
The CTag mechanism makes it possible to quickly check whether data has
changed. With WebDAV and the recent infrastructure changes in
SyncSourceRevisions/TrackingSyncSource, that is straightforward.
With CalDAV it is a bit more complicated because the m_cache needs to
be populated for some of the operations to succeed. This is
accomplished via the setAll[Sub]Items() calls.
If a sync source can quickly determine that nothing has changed,
then SyncSourceRevisions::detectChanges() can use a shortcut and
simply copy the list of known uids => CHANGES_NONE mode.
Such a change detection will be possible in the WebDAV backends (using
the Calendar Collection Entity Tag (CTag) as "database revision"
string) and perhaps in the future also in Evolution Data Server (after
adding a new API).
This commit also adds a CHANGES_SLOW mode. This is meant as a hint that
detecting changes is not necessary. Right now, this mode is the same
as CHANGES_FULL because the code changes should be minimized at this
time (in preparation for 1.2). More work will probably be needed
to distinguish between unit testing and real slow syncs.
Finally, a way to pass information about the cached item list is added
with the SyncSourceRevision::setAllItems() call. setAllItems() and
listAllItems() are mutually exclusive: either the backend delivers
the information, or it receives it. The CalDAV backend depends on this
because it needs to maintain a cache with information about all items.
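The CHANGES_NONE shortcut can be sketched as follows; the enum names match those above, but the function body is a hedged simplification of SyncSourceRevisions::detectChanges(), not the real implementation:

```cpp
#include <map>
#include <set>
#include <string>

enum ChangeMode {
    CHANGES_NONE,   // database revision unchanged: copy known luids as-is
    CHANGES_SLOW,   // slow sync: change detection not necessary
    CHANGES_FULL    // compare per-item revisions one by one
};

// When the backend reports CHANGES_NONE (for example because the CalDAV
// CTag is unchanged), every known luid is carried over unmodified and no
// item is flagged as new, updated or deleted.
std::set<std::string> unchangedLuids(ChangeMode mode,
                                     const std::map<std::string, std::string> &knownRevisions)
{
    std::set<std::string> unchanged;
    if (mode == CHANGES_NONE) {
        for (const auto &entry : knownRevisions) {
            unchanged.insert(entry.first);
        }
        return unchanged;
    }
    // CHANGES_SLOW/CHANGES_FULL fall through to a full comparison,
    // which is not sketched here.
    return unchanged;
}
```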
The flag should only have been added to old configurations. Adding it
to templates (which happen to match the version check) is wrong and
must be avoided. Because of the wrong flag, the "SyncEvolution"
template was shown in the MeeGo UX UI although it shouldn't have been.
If we get a 404 error while contacting the server, it might mean
that the username was wrong, so the server gave us a not found
error. It's better to let the user know that, because we don't
have a clear heuristic to determine whether this might have been
a true 404 error.
The conversion of 404 errors to 401 should happen only if the URL
we're trying to open is one into which we ourselves injected the
username. This was achieved by removing the username
injection from the context creation code, and moving it into the
loop that does the autodiscovery, adding it path by path as it
was necessary.
Notice: this required NeonCXX to be aware of the "%u" semantic,
something I'm not completely comfortable with.
See also: https://bugs.meego.com/show_bug.cgi?id=17862
The TYPE=HOME seems to be redundant. The Evolution UI doesn't distinguish
types, and therefore our internal field list also doesn't.
The motivation for removing the TYPE is that it breaks the
testExtentions test with Google, because the TYPE gets lost in our
sent data.
Local data not supported by a peer (for example, X- extensions and
vCard 3.0 properties) was lost when importing updates from such a peer.
The Synthesis engine can preserve such extensions, but doesn't
apply that more expensive merging unless a backend is configured
with <updateallfields>.
This patch adds it for the Evolution contact and calendar backends.
SyncSources are now allowed to insert arbitrary config properties
into the <datastore></datastore> config. The main motivation is that
some backends need to enable <updateallfields>.
EDS can store arbitrary vCard extensions. These used to get lost in
two-way syncing because the engine couldn't convert them into the
internal field list. This patch adds a catch-all field (XPROPS) and a
match for unknown X- extensions.
The preservation of local extensions still needs to be enabled
separately for each backend which wants to use the logic (<updateallfields>).
The CtCap information is necessary for preserving fields not supported
by the Google SyncML server. It depends on the "overridedevinf"
patches for libsynthesis, currently under review.
Backends don't have access to ClientTest::update. Must provide
a pointer to it in the test config => ClientTestConfig::genericUpdate.
Manipulating N/FN is problematic because some peers support FN, some
don't => better update or add a NOTE instead when dealing with vCards.
Must also work for the NOTE;CHARSET="UTF-8": test case, so the matching
against properties in the item is very imprecise.
The rule="KDE" properties are only used internally. They are never
sent to peers via SyncML. It is debatable whether they should be
listed in the CtCap sent to peers: on the one hand, receiving them
probably works (untested). On the other hand, the peer will never get
them back because the content will be encoded using the other
properties instead.
Without further hints, the Synthesis engine includes all properties in
the outgoing CtCap. This patch changes that by explicitly setting the
show="no" parameter to the internal properties.
This patch replaces the ugly configuration translation with a runtime
check in a comparescript.
This is a first step towards detecting properly at runtime whether a
peer supports UID/RECURRENCE-ID semantics in calendar data. Currently
this check is still based on the "local sync == use UID/RECURRENCE-ID"
shortcut.
This patch depends on a libsynthesis which supports the COMPAREMODE()
method.
Testing with Google Calendar showed another abort due to connection
problems:
"Could not read status line: SSL error: decryption failed or bad record mac"
Retry in this case, as in the "Secure connection truncated" case before.
contactServer() wasn't called before SyncSourceRevisions asked
for all items, which then failed with a boost::shared_ptr exception
on m_session.
Fixed this by wrapping the operations in
WebDAVSource::backup/restoreData which call contactServer() before
invoking the original implementation.
Function composition with boost::bind() might be nicer, but didn't
work right away (compile errors due to invalid syntax?) and thus
isn't used.
GErrorCXX was originally added to KCal-EDS. Copying it back because
it is useful.
GListCXX wraps a GList or GSList in a STL compatible list with forward
iterators. Appending (= push_back) is supported with both, but will be
slow for GSList.
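The idea behind GListCXX can be sketched with a self-contained mock instead of the real glib types (the actual GListCXX API may differ). The O(n) push_back is exactly why appending to a GSList is slow: a singly linked list has to be walked to its end.

```cpp
#include <cstddef>

// Minimal mock of glib's GSList so the sketch is self-contained.
struct MockSList {
    void *data;
    MockSList *next;
};

// STL-style forward-iterator wrapper around the C linked list.
template<class T> class ListCXX {
    MockSList *m_head;
public:
    ListCXX() : m_head(NULL) {}
    ~ListCXX() {
        while (m_head) {
            MockSList *next = m_head->next;
            delete m_head;
            m_head = next;
        }
    }
    class iterator {
        MockSList *m_pos;
    public:
        explicit iterator(MockSList *pos) : m_pos(pos) {}
        T *operator*() const { return static_cast<T *>(m_pos->data); }
        iterator &operator++() { m_pos = m_pos->next; return *this; }
        bool operator!=(const iterator &other) const { return m_pos != other.m_pos; }
    };
    iterator begin() { return iterator(m_head); }
    iterator end() { return iterator(NULL); }
    // O(n): must walk to the tail of the singly linked list.
    void push_back(T *value) {
        MockSList *node = new MockSList;
        node->data = value;
        node->next = NULL;
        if (!m_head) {
            m_head = node;
            return;
        }
        MockSList *tail = m_head;
        while (tail->next) {
            tail = tail->next;
        }
        tail->next = node;
    }
};
```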
The "database" property now can hold the final URL of the collection
(aka database) on the WebDAV server which is used for the
source. Setting it skips the entire auto-discovery process, which
makes access quite a bit faster and more reliable.
Because most UIs won't know initially how to find these URLs (listing
them not supported by the core SyncEvolution) and/or won't have the
necessary user dialogs, the property is set automatically after a
successful sync. This avoids the accidental switching between databases
when the user adds or removes databases on the server (which can lead
to different results of the auto discovery).
There's still no guarantee that the database picked by default is the
"right" one. That can only be solved in the UI because servers
typically don't have a "default" or "personal" flag for their
collections.
The conversion from a parsed URI to a string URL dropped the query
part and introduced redundant characters when port, userinfo and
fragment were empty.
It also introduced an extra slash before the path, which broke Google
Calendar with cached URLs (400 "bad request" error when using a path
with double slashes at the beginning).
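The corrected conversion can be sketched like this; the URI struct is a hypothetical stand-in for the parsed representation (the real code wraps neon's types). Each delimiter is emitted only when its component is set, and no extra slash is prepended to the path.

```cpp
#include <sstream>
#include <string>

// Hypothetical parsed URI; 0 means "default port".
struct URI {
    std::string m_scheme, m_userinfo, m_host, m_path, m_query, m_fragment;
    int m_port;
    URI() : m_port(0) {}
};

// Emit userinfo/port/query/fragment and their delimiters only when the
// component is non-empty, and keep the path as-is (no leading slash is
// added, which would produce the "//" paths Google rejects).
std::string toURL(const URI &uri)
{
    std::ostringstream out;
    out << uri.m_scheme << "://";
    if (!uri.m_userinfo.empty()) out << uri.m_userinfo << '@';
    out << uri.m_host;
    if (uri.m_port) out << ':' << uri.m_port;
    out << uri.m_path;
    if (!uri.m_query.empty()) out << '?' << uri.m_query;
    if (!uri.m_fragment.empty()) out << '#' << uri.m_fragment;
    return out.str();
}
```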
NetworkManager 0.9 changes the values of
org.freedesktop.NetworkManager.Status property. Fortunately the new
and old values are not in conflict.
Commit also starts setting presence to false only when we know this
should happen (and not the other way around): it's better to fail
this way than to prevent the user from syncing if things like this happen.
NetworkManager 0.9 is more strict about the call arguments: it seems
newer dbus-glib requires that the interface name is specified, otherwise
there is an AccessDenied error.
Google CalDAV does not deal well with a detached recurrence that has
no parent. The "Google child hack" avoids *adding* such a recurrence,
but it missed the case where the same problem occurs when removing the
parent before the children. Added similar code to the partial removal
code path.
There were still several cases where the "Google delete hack" (= catch
409 "Can't delete a recurring event except on its organizer's
calendar", update item, delete again) did not work.
Sometimes it failed when only an EXDATE was set => also remove
EXDATE, not just RRULE, to convince the server that the event is not
recurring.
Sometimes it failed because the DELETE came too soon after the PUT
(?!). Added a retry loop.
When updating a meeting series by adding a child, the series' SEQ
value was not increased if the new child already had a higher SEQ
value. Then the updating failed because the other events in the series
didn't have a higher SEQ than on the server.
Fix this by always bumping first, then comparing against the new
event.
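The corrected order of operations can be expressed in a one-line helper (an illustration of the logic described above, with an invented name; whether the real code uses max(old+1, childSeq) exactly is an assumption):

```cpp
#include <algorithm>

// Always bump the series' SEQUENCE first, then make sure it is not
// below the incoming child's SEQUENCE. Comparing before bumping (the
// old behavior) left SEQ unchanged whenever the new child already
// carried a higher value, so the server rejected the update.
int nextSequence(int currentSeq, int incomingChildSeq)
{
    int bumped = currentSeq + 1;                 // bump first
    return std::max(bumped, incomingChildSeq);   // then compare
}
```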
Timeouts for the SyncML messages between master and child did not
make sense. In HTTP they are necessary to detect dead peers, but with
permanent pipes between both sides that isn't an issue.
Because the timeouts triggered incorrectly, this patch removes them by
never setting the m_timeoutSeconds member variable. For sending the
child's status report a fixed timeout of 5 minutes (the previous
default) continues to be used, just in case that something goes
horribly wrong (software bug) and sending the report somehow hangs
(which it shouldn't).
Header files must be listed explicitly in autotools. Otherwise
they are not included in source tarballs. "make distcheck"
exposes that problem.
Google temporarily redirects to a special URL when the calendar is
down. The check() function already recognized this and just told
the caller to try again. But Session::propfind*() methods threw
a redirect exception themselves before giving check() a chance to
catch the special case.
Solved by keeping ne_propfind_handler instance valid during the check()
call (necessary because deleting it would also delete the ne_status
needed by check()) and calling check() directly with all the needed
information.
The handler will be deleted by a destructor class in combination with
boost::shared_ptr when leaving propfindURI() or when trying again
with a fresh handler in the retry loop.
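The lifetime management can be sketched with a shared pointer plus custom deleter; the handler here is a dummy type standing in for neon's ne_propfind_handler, and std::shared_ptr is used instead of the boost::shared_ptr mentioned above so the sketch is self-contained:

```cpp
#include <memory>

// Dummy stand-in for ne_propfind_handler.
struct DummyHandler {
    int m_status;
};

static int destroyed = 0;

// Stands in for ne_propfind_destroy(): counts destructions for the sketch.
void destroyHandler(DummyHandler *handler)
{
    ++destroyed;
    delete handler;
}

// The shared_ptr keeps the handler (and with it the status needed by
// check()) alive until the owning scope ends or the retry loop replaces
// it with a fresh handler; the deleter then runs automatically.
std::shared_ptr<DummyHandler> makeHandler()
{
    return std::shared_ptr<DummyHandler>(new DummyHandler(), destroyHandler);
}
```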
The description of the "password" property was "SyncML server",
which was used by the command line password prompt:
"Enter password for SyncML server: ..."
Given that the property now also is used for CalDAV credentials and
the prompt did not tell the user for which SyncML server they were
asked, it is better to use the config name as description. Now the
password prompt is, for example:
"Enter password for @google-calendar: ..."
Keyring access is not affected by this change because the password
description was not used to find or describe the stored password.
There still was a TODO in the code for handling "-" as password value.
No surprise, not having that implemented broke CalDAV sync in
syncevo-dbus-server because it would try to read the password from
stdin (the default in SyncContext).
Probably SyncContext shouldn't provide such an unsafe fallback, but
that's something for another patch.
This patch addresses the immediate problem by moving the
initialization of the SyncContext used by the child process into the
master process and adding the password checking directly afterwards
(LocalTransportAgent::start()). It runs in the main process
(syncevolution or syncevo-dbus-server) and uses the "request password"
method of the main sync context. Passwords are then stored
temporarily, so the same check doesn't have to ask for passwords again
in the child process.
Long term we'll need to rewrite the complete password handling...
This was replaced at some point with iterating over registered properties
(see SyncContext::sync()) without removing the obsolete methods. Removing
them now to avoid further confusion.
Explicit --enable-mlite didn't enable mlite (enable_mlite not set).
The default is now to not enable it in any case (even if available),
so traditional users won't have to add --disable-mlite to suppress
the error about it being not installed.
The code which depends on mlite must not be compiled if the feature
is disabled, because if mlite is not installed, it would cause compile
errors.
The notifications system has been made template based. There is
a Factory object that creates NotificationManagers with the correct
template (mlite, libnotify, or a dummy no-op) according to the
platform.
If (for whatever reason) the patch file is empty, we
shouldn't invoke the "patch" command. Some implementations
of it then complain instead of copying the input file.
This commit adds a step that copies the original test case file
if the corresponding patch file exists and is empty.
This reverts commit c1aaf7128e.
This patch doesn't work because the composed patch for valid
patches is no longer piped into the "patch" command due to
the additional test command in the middle. Will commit a different
solution.
Ignore "peerType = WebDAV" configurations as well as the
configurations with syncURL starting with "local://@" temporarily
(before we actually support WebDAV in the UI).
This patch adds .xml configs which replace the google-contacts from
the patched Buteo plugins. The glue code configures it as the normal
"google" config.
This patch teaches the command line how to infer the right template
for a source-config@<something> config creation:
syncevolution --configure source-config@google-calendar
For this particular example, "google-calendar" must match the "Google
Calendar" fingerprint in the template. Spaces, hyphen and underscores
are now all considered equal in TemplateConfig::fingerprintMatch().
Also added CmdlineTest::testWebDAV unit test. The test can only run if
WebDAV support is enabled, because otherwise the
backend=CardDAV/CalDAV would be rejected.
This patch introduces the new "peerType" property which marks
templates and configs as something that can be used for the
'source-config@<target>' configs necessary for local sync.
Only "WebDAV" is used. If peerType is not set, the template or config
is traditional SyncML.
This patch also adds two templates, one for Google Calendar and one
for Yahoo CardDAV and CalDAV. Because Yahoo CardDAV is unreliable,
it is not enabled.
The code for builtin templates had side effects, like always adding
all four standard sources to a template, even if the template itself
didn't have all of them defined. It also hid the problem that listing
templates didn't work for templates on disk.
Another benefit is that template files can be packaged separately. By
choosing the packages which are to be installed, a distributor of
SyncEvolution (like MeeGo) can choose which services to offer by
default.
Therefore this patch removes the "builtin templates" feature, which
was only useful in unusual use cases anyway (for example, single-binary
distribution).
Because there are no more default values for source properties, all
templates must specify the "backend" explicitly. syncevo-phone-config
was adapted accordingly, and also updated to use the current names of
the properties in the process.
As part of moving the templates into separate files, some of them
were cleaned up:
- Mobical: now points to Everdroid, its new name
- Google, Ovi: SSL verification is always enabled in the templates;
the workaround for old libsoup should no longer be
necessary for most users
- Google: renamed to "Google_Contacts", with "Google" as alias,
because there will be two Google templates soon
- Scheduleworld: use "server no longer in operation" instead of
an invalid URL
The finger print match had a special case for "default". The exact
intention of that is unknown. Perhaps it was meant to give that
template a boost when it wouldn't match the string that is getting
searched for at all.
But it had the effect that an exact match when searching for the
"default" template was not found and thus that template couldn't be
used in the command line after moving it from builtin to external.
Removed the complete check.
In TemplateConfig::fingerprintMatch the peerIsClient property only
matters when the mode is either "match only clients" or "match only
servers".
This patch checks for this first before reading the template.
Minor optimization, not performance critical.
The check with "starts with ." probably was meant to filter out hidden
files. Because it was applied to the full path, including the
directory names, it didn't have that effect. Instead it skipped all
entries whenever the template dir itself started with a dot, as in "./templates".
That was used in one of the Cmdline unit tests and only worked while
none of the templates there were needed for the test. It started to
fail after removing the builtin templates.
Better to check the error message before the return code. That way
a non-zero error is visible in the output instead of
only the failed run check.
Calling doit() would be shorter, but hide the actual location
of the failure.
Moved removal and creation of the test directory (= "CmdlineTest")
into the test setup method. That way all tests are guaranteed to
start in a clean state, without having to duplicate that all over
the place.
Motivated by the observation that at least one test didn't have the
necessary cleanup, which caused a failure when creating more
templates.
If compilation and testing runs outside of the original source,
the src/testcases must be made available to "client-test". This
used to be done with a full copy, but that meant changes made to the
originals later were not reflected in subsequent tests. Better to use a symlink.
Also remember to remove it as part of "make clean".
Same bug as in command line (previous commit). Scanning
for on-disk templates only considered Bluetooth devices.
For server templates, only the ones built into SyncEvolution
were returned.
For some reason the 'valgrind' target attempted to run valgrind on a
./test executable. Perhaps it's a leftover from the past, as it should
be ./client-test.
Nightly compilation was failing because empty template patch files
would result in an error like the following:
patch: **** Only garbage was found in the patch input.
Making sure that the patch file is not empty before trying to apply
the patch fixes the problem.
"syncevolution --templates ?" only showed builtin templates
because the scanning for template on disk was called without
any match definition => empty results, only builtin templates
used as fallback.
After moving the main initialization from open() into
beginSync()/contactServer() two other functions failed because they
assumed that open() had done the initialization:
- WebDAVSource::isEmpty()
- CalDAVSource::backupData()
These functions are (correctly) called before beginSync() because they
need to work before a sync session, or without one entirely.
Fixed this by tracking whether the server has been contacted and
calling contactServer() multiple times.
autotools were trying to package generated sources, which
fails when the Qt bindings are disabled and wouldn't be desired
anyway. Need to distinguish between dist_ and nodist_ components.
syncevo-dbus-server seems to be missing link info for dbus, without
this patch I get:
/usr/bin/ld: syncevo_dbus_server-syncevo-dbus-server.o:
undefined reference to symbol 'dbus_connection_ref'
Based on Gabriel's dbus-qt rules, adapted to avoid
duplicating the rules and to have the files regenerated
whenever input files change.
Also added a workaround for qdbusxml2cpp putting invalid
preprocessor symbols SYNCEVO-...-FULL_H... into the
header files.