g++ 4.6 and ld 2.21.52.20110707 (Debian Unstable) led to a different
order of global instance construction:
1. WebDAV constructor calls SyncConfig::getRegistry()
2. getRegistry() adds the (uninitialized!) property instances
and modifies them
3. SyncConfig.cpp instances are initialized, which resets
some of the values modified by getRegistry()
The result was that, for example, the "defaultPeer" property was
treated like an unshared property and written into the wrong config
file.
The assumption that variables in a compilation unit are initialized
before methods in that unit can be called is not based on anything in
the C++ standard. Therefore this commit rewrites the code so that
properties are not added/updated inside the getRegistry()
methods. Instead this is done in separate classes which (and that is
guaranteed by the C++ standard) are constructed after the properties
defined earlier in the compilation unit.
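The pattern can be sketched like this (hypothetical names, not the actual SyncConfig.cpp code; the C++ standard guarantees that namespace-scope objects within one translation unit are constructed top to bottom):

```cpp
#include <string>

// Hypothetical stand-ins for the property instances in SyncConfig.cpp.
struct Property {
    std::string m_name;
    bool m_shared;
    Property(const std::string &name) : m_name(name), m_shared(false) {}
};

static Property defaultPeer("defaultPeer");  // constructed first
static Property logLevel("logLevel");        // constructed second

// The registrar is defined after the properties, so the C++ standard
// guarantees that it is constructed after them: it modifies fully
// constructed objects, which getRegistry() could not guarantee.
struct PropertyRegistrar {
    PropertyRegistrar() {
        defaultPeer.m_shared = true;  // safe here, unsafe in getRegistry()
    }
};
static PropertyRegistrar registrar;
```

In contrast, calling a method like getRegistry() from another compilation unit's constructor may run before these objects exist, which is exactly the bug described above.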
g++ 4.6 complains about the unused assignment. Probably this boolean
result needs to be checked. But as GDBus will be replaced soon anyway,
don't bother now.
As per RFC 2445, CATEGORIES:1,2 and CATEGORIES:1\nCATEGORIES:2 are the
same. We need to pick one normal form. This commit ensures that all
categories are listed in a single CATEGORIES property. This was both
easier to implement (splitting at a comma while not splitting at a \,
is tricky) and leads to a shorter normal form (fewer lines).
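The chosen normal form can be illustrated with a hypothetical helper (synccompare itself is a Perl script; this is only a sketch of the merge step):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hypothetical helper: all category values end up in one
// comma-separated CATEGORIES property, the normal form chosen here.
std::string mergeCategories(const std::vector<std::string> &values)
{
    std::ostringstream out;
    out << "CATEGORIES:";
    for (size_t i = 0; i < values.size(); i++) {
        if (i) {
            out << ",";
        }
        out << values[i];
    }
    return out.str();
}
```

Going the other way, from one comma-separated property to many single-value ones, would require parsing that does not split at an escaped \, which is the tricky part mentioned above.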
The reason for making this change now is that with Evolution 2.32.2
and libical 0.44-3, categories end up being stored with one entry per
CATEGORIES property. That by itself is okay and thus our tests should
pass, which they don't without this change to synccompare.
The only problem is that Evolution doesn't seem to handle it (breaks
setting categories in the UI even without syncing). That's something which
needs to be fixed in Evolution.
In contrast to the Yahoo template, this one doesn't mention
a specific service and enables both contact and calendar sync.
To be used with a service that supports auto-discovery.
A detached HEAD has a hash in .git/HEAD. This confused
gen-autotools.sh which tried to resolve it via "git show-ref", which
in turn only works for refs.
Now gen-autotools.sh checks for real hashes in .git/HEAD and truncates
them directly, without involving show-ref in this case.
The previous commit added a check for qmake, but then used QMAKE
without checking whether qmake was found at all. This caused configure
problems on systems where qmake wasn't available.
The error handling also wasn't correct. A "test" was missing in front
of the comparison.
The test was meant to check the error triggered by setting an
invalid backend value. Instead it checked the usability of such
a source and thus duplicated the (badly named) testCheckSourceNoType.
testCheckSourceInvalidType itself failed to pass when SyncEvolution
was compiled with modules, because then the "apple-contacts" backend
wasn't installed and SetConfig() failed with an unexpected error.
Now the test triggers that error in all cases with "backend = no-such-backend"
and checks that the right error is reported.
TestDBusSession.testSecondSession failed when TestDBusPresence tests
had been run before. The reason is apparently the mock Connman object
and its calls to loop.quit(): that causes the testSecondSession test
to stop before it has seen all the expected reasons for quitting the
main loop.
This commit changes TestDBusServerPresence so that the Connman object
is added and removed as part of setUp() and tearDown(). This seems to fix
the problem.
Connman.GetProperties() now also returns something valid in the final
else clause. Previously Python recorded a lot of "'None' not iterable"
errors when Connman.GetProperties() was called more often than
expected and returned None.
Commit 1a40a29 (added after 1.1.99.4) removed timeouts in the local
transport, reasoning that such timeouts only make sense in unreliable
transports and only cause problems (like premature aborts).
Therefore the TestLocalSync.testTimeout which tested the old behavior
became invalid. Removed completely.
Commit 3f1185, contained in 1.1.99.3, changed
SyncContext::throwError() so that it throws a StatusException with
STATUS_FATAL. Previously a runtime exception was thrown, which
Exception::handle() recorded as a local error.
This commit fixes that regression by throwing a STATUS_FATAL +
LOCAL_STATUS_CODE, which restores the traditional result of
throwError().
Found by test-dbus.py TestDBusSyncError.testSyncNoConfig.
The test checked for zero status for inactive sources, whereas the
current implementation doesn't report anything for these sources at
all. Both are acceptable, but let's keep the test strict and check for
the current behavior.
TestConnection and TestSessionAPIsDummy used configs with
backend=addressbook/calendar/todo/memo which had to have databases
with a name derived from CLIENT_TEST_EVOLUTION_SOURCE and the source
name. This was neither documented nor did the required databases match
the ones used by the client-test programs anymore.
For the sake of making the test setup easier, this commit changes
these tests so that they use the file backend (always available) and
file://temp-test-dbus/<source name> databases (created if needed by
the backend). In other words, the tests now run without manual setup
of the host.
The downside is that D-Bus testing no longer covers the real
sources. That's okay, client-test covers that, whereas test-dbus.py
should focus on the D-Bus API itself.
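For illustration, a source config along these lines might look as follows (backend and database values are taken from the description above; the databaseFormat line is an assumption about what the file backend needs):

```
# sketch of a test-dbus.py source config, not a verbatim copy
backend = file
database = file://temp-test-dbus/addressbook
# assumption: the file backend needs an explicit data format
databaseFormat = text/vcard
```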
'temp-test-dbus' is a bit more suitable than 'xdg-root' because
* it ties the directory to the script which creates it
* 'temp' implies that it holds no important data
* it is used for various files ('xdg_root' in the Python source
is a bit misleading)
The previous default, 'test-dbus', was the same name as the directory
holding the source files used for testing. In DBusUtil.runTest the
xdg_root directory is removed. So, when running test-dbus.py from the
test directory it deletes the very directory in which the source files
reside.
runtests.py - made the SyncEvolutionTest class configurable so that it
can run test-dbus.py, added "--enable=dbus" with it.
resultchecker.py - parse the output of test-dbus.py and split out the
failure reports for linking.
The purpose is two-fold:
- tell a user of test-dbus.py what he has to put on the command
line to run a failing test
- produce output that can be parsed more easily by resultchecker.py
Avoid a Python warning when destruction of the file object runs into
the already closed fd, by closing the file instead of the low-level fd.
Print some information about the current action.
Parsing the revision map extracted the wrong subset of the string. As
a result, the revision comparison was broken and reported more changes
than really existed. Showed up as a failure in
Client::Sync::eds_event::testOneWayFromClient and as requests for items
in a multiget when none were needed.
The script used to override the "database" property of
all configured sources in the "dbus_unittest" config.
That is confusing and wasn't documented.
Now the comment for TestSessionAPIsReal describes how to
set up a working config and then doesn't touch it.
The "retry on 401" code wasn't active during the initial sync
because the fact that the credentials had been accepted before
was only recorded on disk, but not in memory.
Moving the response handling from the data element to the response
element caused problems with Google, because it sends a 404 status for
the collection with no data. Apple Calendar Server didn't do that when
testing the change manually, so the problem only showed up in the
nightly testing.
This patch restores the previous behavior of simply ignoring responses
with no data. Some better error handling might be useful.
As discussed on the mailing list, "source-config" is ambiguous because
the "addressbook/calendar/..." configs are also called "source
configs".
Now the naming is "sync" config (for the config with syncURL=local://,
because it is used for syncing) and "target" config (because it is
used as target in a sync config's syncURL).
Rejected:
"local" config - because the databases are not necessarily local
"source" config - see above
"client" or "server" config - because both sides might use local data
and/or client/server could refer to the role
of the peer or the SyncML client/server model
used internally
The Funambol template hadn't been updated and the command line
tests failed because they didn't expect the PeerName to be set.
The normalization of "= F" to "= 0" broke the "= Funambol" peer name.
It doesn't seem to serve any useful purpose anymore, so it was removed.
The code which caught the 404 status had the unintended side effect of
also catching 401 errors and then not reporting them. Fixed by
handling the exception as in the default "Exception" case if it does
not fit the 404 special case.
Saw an unexpected 401 error in the middle of a sync. At that point
the credentials should have been recognized as valid, but somehow
weren't. Added debug output to track down the problem.
Updating an item must be done with the same UID that was originally
set by the server. The Maemo 5 backend replaces the UID received from
the server with its own sequential numbering of events in the SQLite
database.
This commit is an attempt to catch this situation and restore the
correct UID before sending the updated item content to the server.
The values in the key/value pairs now start with a slash. The intention
is that if the content ever has to be extended, it can be done by
adding a version number or something like that in front of the
slash. Right now, that version is implicitly empty. Without the slash
it wouldn't be possible to distinguish the future version number from
the revision.
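A possible decoder for that layout (hypothetical helper name, not the actual code):

```cpp
#include <string>
#include <utility>

// Hypothetical decoder: everything before the first slash is the
// format version (currently always empty), the rest is the payload.
std::pair<std::string, std::string> splitVersionedValue(const std::string &value)
{
    std::string::size_type slash = value.find('/');
    if (slash == std::string::npos) {
        // No slash: treat the whole string as the payload.
        return std::make_pair(std::string(), value);
    }
    return std::make_pair(value.substr(0, slash), value.substr(slash + 1));
}
```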
All members except for the integer port were auto-initialized.
This commit fixes valgrind warnings in WebDAV in isEmpty()
by initializing the port in a new constructor.
Trying to reuse the TrackingSyncSource change tracking was a dead-end
that just led to horribly complex scaffolding classes (like the key/value
node which had to keep revisions synchronized and mix in UID).
This is a complete rewrite where change detection is done in
MapSyncSource, using a similar approach as in TrackingSyncSource. Some
of the session life cycle is now cut-and-pasted from
TrackingSyncSource (primarily checking the overall database
revision). Eventually this common logic might get refactored into a
SyncSource utility class, but for now let's keep it separate.
This solution is much cleaner and uses simpler key/value storage
with one item-<mainid> entry for each merged item, mapping to the
revision, UID, and list of subids.
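A sketch of what one such entry could look like (hypothetical encoding; the actual format used by MapSyncSource may differ):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hypothetical layout of one item-<mainid> entry: revision, UID and
// the list of subids, joined into a single value string.
struct MergedItemEntry {
    std::string m_revision;
    std::string m_uid;
    std::vector<std::string> m_subids;

    std::string encode() const {
        std::ostringstream out;
        out << m_revision << '/' << m_uid;
        for (const std::string &sub : m_subids) {
            out << '/' << sub;
        }
        return out.str();
    }
};
```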
The multiget result processing had two problems: the etag wasn't
requested and thus not stored, and existing subids were not removed
for already existing items.
The comparison between full path and local ID and between etag and
revision was completely mixed up, with the result that all items were
always loaded anew even if unchanged.
Fixed by storing the reduced luid+revision mapping and using it in all
further comparisons against the SubRevisionsMap_t.
The API was intentionally changed to notify backend developers of the
new possibilities. The SQLite backend hadn't been adapted and failed
to compile...
The recent commit for storing UID in the tracking source together with
revision information was incomplete.
First, the transformation from %d-%s to %d/%s/%s hadn't been finished
and wouldn't have worked because one return parameter of
splitMainIDValue() was not declared as reference (found via compiler
warnings in the nightly build by g++; I was using clang, which didn't
complain).
Second, there were cases that led to MapConfigNode::setProperty()
being called without having m_uids set first. For example,
listAllItems() + detectChanges(). Now all methods which return luids also
set UID values in the MapConfigNode. In addition, setProperty() is
more careful about preserving existing UID information.
This incident shows that coding on a plane is no good, and that
reusing TrackingSyncSource as the base for MapSyncSource really was a
mistake because of the complexity. Should really be rewritten.
The recent commit for the multiget REPORT used the temporary
string from stringstream after it was freed, because Neon::Request
doesn't copy the request body. Must make a copy in the caller.
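The lifetime issue can be illustrated like this (Request is a hypothetical stand-in for Neon::Request):

```cpp
#include <sstream>
#include <string>

// Hypothetical stand-in for Neon::Request: it stores only a pointer
// to the body, it does not copy the data.
struct Request {
    const char *m_body = nullptr;
    void setBody(const char *data) { m_body = data; }
};

// Broken: req.setBody(report.str().c_str()) passes a pointer into a
// temporary string that is destroyed at the end of the statement.
// Fixed: the caller builds the body once and keeps it alive for as
// long as the request may read it.
std::string buildReportBody()
{
    std::ostringstream report;
    report << "<calendar-multiget/>";
    return report.str();
}
```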
This commit implements updateAllSubItems(). A first query
retrieves the etags of all items. A comparison determines
removed items and those which are new or updated. Those items
are then fetched with a multiget REPORT and used to complete
the cache and item list.
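The comparison step can be sketched as follows (hypothetical helper, not the actual WebDAV source code):

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of the comparison: old cache (luid -> etag)
// versus the etags just retrieved from the server. Items missing on
// the server were removed; items with a new or changed etag must be
// fetched via the multiget REPORT.
void compareEtags(const std::map<std::string, std::string> &cached,
                  const std::map<std::string, std::string> &onServer,
                  std::vector<std::string> &removed,
                  std::vector<std::string> &mustFetch)
{
    for (const auto &entry : cached) {
        if (!onServer.count(entry.first)) {
            removed.push_back(entry.first);
        }
    }
    for (const auto &entry : onServer) {
        auto it = cached.find(entry.first);
        if (it == cached.end() || it->second != entry.second) {
            mustFetch.push_back(entry.first);
        }
    }
}
```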
404 errors, as they are possible when Google Calendar gets
confused, are intentionally not handled. The rationale is
that a slow sync has a suitable workaround (use data from the
query REPORT) and hopefully the problem will occur less often
for future calendar changes.