The word "source" implies reading, while in fact access is read/write.
"datastore" avoids that misconception. Writing it in one word emphasizes
that it is a single entity.
While renaming, also remove references to explicit --*-property
parameters. The only necessary use today is "--sync-property ?"
and "--datastore-property ?".
--datastore-property was used instead of the short --store-property
because "store" might be mistaken for the verb. It doesn't matter
that it is longer because it doesn't get typed often.
--source-property must remain valid for backward compatibility.
As many user-visible instances of "source" as possible got replaced in
text strings by the newer term "datastore". Debug messages were left
unchanged unless some regex happened to match them.
The source code will continue to use the old variable and class names
based on "source".
Various documentation enhancements:
- Better explain what local sync is and how it involves two sync
configs. "originating config" gets introduced instead of just
"sync config".
- Better explain the relationship between contexts, sync configs,
and datastore configs ("a sync config can use the datastore configs in
the same context").
- An entire section on config properties in the terminology
section. "item" added (Todd Wilson correctly pointed out that it was
missing).
- Less focus on conflict resolution, as suggested by Graham Cobb.
- Fix examples that became invalid when fixing the password
storage/lookup mechanism for GNOME keyring in 1.4.
The "command line conventions", "Synchronization beyond SyncML" and
"CalDAV and CardDAV" sections were updated. It's possible that the
other sections also contain slightly incorrect usage of the
terminology or are simply outdated.
Google has turned off their SyncML server, so the corresponding
"Google Contacts" template became useless and gets removed. It
gets replaced by a "Google" template which combines the three
different URLs currently used by Google for CalDAV/CardDAV.
This new template can be used to configure a "target-config@google"
with default calendar and address book database already enabled. The
actual URL of these databases will be determined during the first
sync using them.
The template relies on the WebDAV backend's new capability to search
multiple different entries in the syncURL property for databases. To
avoid listing each calendar twice (once for the legacy URL, once with
the new one) when using basic username/password authentication, the
backend needs a special case for Google to detect that the legacy URL
does not need to be checked.
The syncURL property may contain multiple different space- or
tab-separated URLs. Previously, the WebDAV backend only used the first one
when scanning for databases. Now it tries all of them.
This will be useful for configuring all Google endpoints in one
template.
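As an illustration only (the exact URLs used by the real Google
template may differ), a config could now combine several URLs like
this:
    syncURL = https://www.googleapis.com/.well-known/carddav https://apidata.googleusercontent.com/caldav/v2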
Certain domains (like googleapis.com) are aliases. The lookup tools
do not fail with NXDOMAIN in this case, causing the
syncevo-webdav-lookup script to return only an unknown lookup error,
in which case the WebDAV backend will retry for a while.
We need to detect aliases or missing SRV information and return a
"no information" error instead.
The asyncError test relied on calling
folks_individual_aggregator_remove_individual() incorrectly
with a NULL pointer. folks in Debian Testing just crashes
now, so we can no longer do this.
Previously, only syncURL=local://@<context name> was allowed and used
the "target-config@context name" config as target side in the local
sync.
Now "local://config-name@context-name" or simply "local://config-name"
are also allowed. "target-config" is still the fallback if only a
context is given.
It also has one more special meaning: "--configure
target-config@google-calendar" will pick the "Google_Calendar"
template automatically because it knows that the intention is to
configure the target side of a local sync. With any other config name
the intention cannot be guessed, so the template (if needed) must be
specified explicitly.
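For example, roughly (a sketch only; credentials and other required
properties are omitted or shortened):
    syncevolution --configure username=example@gmail.com \
        target-config@google-calendar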
The process name in output from the target side now also includes the
configuration name if it is not the default "target-config".
The first progress signal gets emitted only after the 10 second sleep
at the start of the sync, so killing syncevo-dbus-server when that
signal arrives races with completing the sync. What we want is to kill
during the 10 second wait, so we better wait for the debug output
printed directly before it and then kill immediately.
When configuring a WebDAV server with username = email address and no
URL (which is possible if the server supports service discovery via
the domain in the email address), then storing the credentials in the
GNOME keyring used to fail with "cannot store password in GNOME
keyring, not enough attributes".
That is because GNOME keyring seemed to get confused when a network
login has no server name, so some extra safeguards were added to
SyncEvolution to avoid this.
To store the credentials in the case above, the email address now gets
split into user and domain parts, which together are used to look up
the password.
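A minimal sketch of that splitting, with made-up names and assuming a
plain '@' separator (not the actual SyncEvolution code):
    #include <string>

    // Split "user@example.com" into user and domain. The domain then
    // takes the role of the missing server name in the GNOME keyring
    // attributes used for storing and looking up the password.
    static void splitEmail(const std::string &username,
                           std::string &user, std::string &domain)
    {
        std::string::size_type at = username.find('@');
        if (at != std::string::npos) {
            user = username.substr(0, at);
            domain = username.substr(at + 1);
        } else {
            user = username;
            domain.clear();
        }
    }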
When doing PBAP caching, we don't want any meta data written because
the next sync would not use it anyway. With the latest libsynthesis
we can configure "/dev/null" as datadir for the client's binfiles and
libsynthesis will avoid writing them.
The PIM manager uses this for PBAP syncing automatically. For testing
it can be enabled by setting the SYNCEVOLUTION_EPHEMERAL env variable.
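For manual testing something along these lines should work, assuming
that setting the variable to any non-empty value is enough to enable
the mode:
    SYNCEVOLUTION_EPHEMERAL=1 syncevolution <config> <datastore>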
The recent change which introduced checking of the target session only
worked if the entries were listed in the "right" (= target-config
entry last) order. Don't rely on that; instead check for the one new entry
that we should find and use that.
As pointed out by compiler warnings on recent distros, the write()
result in the SuspendFlags signal handler was ignored. Not sure
whether it really can fail in practice. Handle some possible errors
by retrying.
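The pattern is roughly the following (a simplified sketch, not the
actual SuspendFlags code; the file descriptor name is made up):
    #include <unistd.h>
    #include <errno.h>

    extern int suspendPipeFD;   // write end of the signal pipe (assumed)

    static void handleSignal(int sig)
    {
        char msg = static_cast<char>(sig);
        // Retry a limited number of times instead of ignoring the result.
        for (int attempt = 0; attempt < 10; attempt++) {
            ssize_t written = write(suspendPipeFD, &msg, 1);
            if (written == 1) {
                break;          // success
            }
            if (written == -1 && errno != EINTR && errno != EAGAIN) {
                break;          // a real error, retrying will not help
            }
        }
    }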
The Cmdline.cpp unit tests did not check the symlink() result,
causing a warning or error, depending on the configure options.
Reported by Emiliano Heyns.
This reverts commit c435e937cd.
Commit 7b636720a in libsynthesis fixes an uninitialized memory read in
the asynchronous item update code path.
Testing confirms that we can now use batched writes reliably with EDS
(the only backend currently supporting asynchronous writes +
batching), so this change enables it again also for local and
SyncEvolution<->SyncEvolution sync (with asynchronous execution of
contact add/update overlapped with SyncML message exchanges) and other
SyncML syncs (with changes combined into batches and executed at the
end of each message).
Instead of downloading contacts one-by-one with GET, SyncEvolution now
looks at contacts that are most likely going to be needed soon and
gets all of them at once with addressbook-multiget REPORT.
The number of contacts per REPORT is 50 by default, configurable by
setting the SYNCEVOLUTION_CARDDAV_BATCH_SIZE env variable.
This has two advantages:
- It avoids round-trips to the server and thus speeds up a large
download (100 small contacts with individual GETs took 28s on
a fast connection, 3s with two REPORTs).
- It reduces the overall number of requests. Google CardDAV is known
to start issuing intermittent 401 authentication errors when the
number of contacts retrieved via GET is too large. Perhaps this
can be avoided with addressbook-multiget.
This is similar to read-ahead in EDS contacts. However, because we
cannot run Neon requests asynchronously (at least not easily), a
batched read must complete before any contact from it can be
returned to the caller.
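To experiment with the batch size mentioned above, the env variable
can be overridden for a single run, for example (illustrative
invocation, any positive number should work):
    SYNCEVOLUTION_CARDDAV_BATCH_SIZE=100 syncevolution <config> <datastore>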
Arbitrary sync flags can be passed to SyncPeerWithFlags() with the
new --sync-flags command line parameter. The value of that parameter
must be a JSON formatted hash.
The advantage of this approach over explicit command line parameters
(like --progress-frequency) is that we do not need to add further code
when adding new flags. We can also pass intentionally broken flags
to check the PIM Manager's error handling.
The downside is that it is a bit less user-friendly.
With this change, progress frequency and PBAP sync can be set in two
ways, either as part of --sync-flags or with the individual command
line parameters. The latter override --sync-flags.
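For illustration, an invocation might pass something like the
following (the key names and values here are assumptions; the
authoritative list of supported flags is defined by the PIM Manager's
SyncPeerWithFlags() API):
    --sync-flags '{"progress-frequency": 1.0, "pbap-sync": "incremental"}'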
When the value of a flag has the wrong type, dumping the value
may provide some hint about what is wrong. Note that values of types
not expected at all by the SyncFlags type get filtered out
by our D-Bus binding already, in which case the error message shows
no value.
syncevo-webdav-lookup is needed during testing when doing
WebDAV database scans, so build it in "src" like the rest
of the binaries and link to it in the installed test suite.
Google recently enhanced support for RECURRENCE-ID, so SyncEvolution
no longer needs to replace the property when uploading a single
detached event with RECURRENCE-ID. However, several things are still
broken in the server, with no workaround in SyncEvolution:
- Removing individual events gets ignored by the server;
a full "wipe out server data" might work (untested).
- When updating the parent event, all child events also get
updated even though they were included unchanged in the data
sent by SyncEvolution.
- The RECURRENCE-ID of a child event of an all-day recurring event
does not get stored properly.
- The update hack seems to fail for complex meetings: uploading them
once and then deleting them seems to make uploading them again
impossible.
All of these issues were reported to Google and are worked on there,
so perhaps the situation will improve. In the meantime, syncing with
Google CalDAV should better be limited to:
- Downloading a Google calendar in one-way mode.
- Two-way syncing of simple calendars without complex meeting
series.
While updating the Google workarounds, the alarm hack (sending a
new event without alarms twice to avoid the automatic server side
alarm) was simplified. Now the new event gets sent only once with a
pseudo-alarm.
Google seems to have changed its PHOTO rewriting. If that keeps
happening, we need to stop comparing the actual data.
However, comparing the actual data is useful to detect when we
do not properly handle it. For example, in testUpdateRemoteWins/local-synced
it was a bit surprising that only one contact had to be updated at first.
It turned out that libsynthesis did not compare the entire photo data,
only the part before embedded null bytes.
The bugs caused by introducing merging of arrays were not detected
by the automated testing. We need to repeat syncing after we have
data and check that nothing changes in the default, no-phone mode,
and we need to check that nothing got changed on the remote side.
In PBAP mode that would have triggered an error, but when using the
file backend we silently modified its data.
Removing the URL ensures that the failure case (writing incomplete
contacts) gets covered. Rearranging the properties
also may be relevant (but was handled already).
Adding grouping to the contact datatype broke PBAP caching: when
sending an empty URL, for example, during the sync, the parsed contact
had different field arrays than the locally stored contact, because the
latter was saved without the empty URL.
This caused the field-based comparison to detect a difference even when
the final, reencoded contact wasn't different at all.
To solve this, syncing now uses the same "don't send empty properties"
configuration as local storages. Testing shows that this resolves
the difference for EDS.
A more resilient solution would be to add a check based on the encoded
data, but that's more costly performance-wise.
The new "preserve repeating properties during conflict resolution"
feature was only active when using EDS as storage. The relevant
merge script must be applied to all datatypes, not just the EDS
flavor.
The feature was also unintentionally active when running in
caching mode. This caused two problems:
- The cached item was updated even though only the
ordering of repeating properties had been modified during
merging.
- The merged item was sent back to the client side, which
was undesirable (caching is supposed to be one-way) or even
impossible (PBAP is read-only, causing sync failures with error 20030).
We must check for caching mode and disable merging when it is active.
We also must not tell the engine that we updated the photo property
in the winning item, because then that item would get sent to the
read-only side of the sync.
Perhaps a better solution would be to actually tell the engine
that the remote side is read-only when we activate caching mode.
When handling an update/update conflict (both sides of the sync have an
updated contact) and photo data was moved into a local file by EDS, the engine
merged the file path and the photo data together and thus corrupted the photo.
The engine does not know about the special role of the photo property.
This needs to be handled by the merge script, and that script did not
cover this particular situation. Now the losing side is cleared,
causing the engine to then copy the winning side over into the losing
one.
Found by Renato Filho/Canonical when testing SyncEvolution for Ubuntu 14.04.
If enabled via env variables, PullAll transfers will be limited to
a certain number of contacts at different offsets until all data has
been pulled. See README for details.
When transferring in chunks, the enumeration of contacts for the engine
no longer matches the PBAP enumeration. Debug output uses "offset #x"
for PBAP and "ID y" for the engine.
Using a pipe was never fully supported by obexd (blocks
obexd). Transferring in suitably sized chunks (FDO #77272) will be a
more obexd-friendly solution with a similar effect (not having to
buffer the entire address book in memory).
Passing 0.1 as delay did not work as intended because it was converted
to an integer value of 0 seconds. Found by gcc 4.9. Must use a one second
delay.
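The underlying C++ pitfall, shown in a minimal made-up example (not
the actual test code):
    #include <unistd.h>

    int main()
    {
        int delaySeconds = 0.1;   // implicit conversion truncates to 0
        sleep(delaySeconds);      // no delay at all, defeating the test
        delaySeconds = 1;         // must use a full second instead
        sleep(delaySeconds);
        return 0;
    }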
The "func" variable was correctly initialized to NULL if no comparsion
matches, but cppcheck 1.65 warns anyway. Use the more readable
intialization to NULL in the final else clause.
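The resulting pattern looks roughly like this (a sketch with made-up
names, not the actual code):
    #include <cstddef>

    typedef bool (*CompareFunc)(const char *a, const char *b);

    static bool compareExact(const char *, const char *) { return true; }
    static bool compareFuzzy(const char *, const char *) { return true; }

    static CompareFunc pickCompareFunc(int mode)
    {
        CompareFunc func;
        if (mode == 1) {
            func = compareExact;
        } else if (mode == 2) {
            func = compareFuzzy;
        } else {
            // Explicit fallback in the final else clause: easier to read
            // and avoids the cppcheck 1.65 false positive.
            func = NULL;
        }
        return func;
    }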