2005-11-26 22:16:03 +01:00
/*
2009-03-25 15:21:04 +01:00
 * Copyright (C) 2005-2009 Patrick Ohly <patrick.ohly@gmx.de>
 * Copyright (C) 2009 Intel Corporation
2009-04-30 18:14:03 +02:00
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) version 3.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
 * 02110-1301 USA
2005-11-26 22:16:03 +01:00
 */
2009-07-21 19:18:53 +02:00
#ifndef _GNU_SOURCE
# define _GNU_SOURCE 1
#endif
#include <dlfcn.h>
2009-10-05 14:49:32 +02:00
#include <syncevo/SyncContext.h>
#include <syncevo/SyncSource.h>
#include <syncevo/util.h>
rewrote signal handling
Having the signal handling code in SyncContext created an unnecessary
dependency of some classes (in particular the transports) on
SyncContext.h. Now the code is in its own SuspendFlags.cpp/h files.
Cleaning up when the caller is done with signal handling is now part
of the utility class: the handlers are removed automatically when the
guard instance is freed.
The signal handlers now push one byte for each caught signal into a
pipe. That byte tells the rest of the code which message it needs to
print; the message cannot be printed in the signal handlers themselves,
because the logging code is not reentrant and thus not safe to call
from a signal handler.
Compared to the previous solution, this solves several problems:
- no more race condition between setting and printing the message
- the pipe can be watched in a glib event loop, thus removing
the need to poll at regular intervals; polling is still possible
(and necessary) in those transports which do not integrate with
the event loop (CurlTransport) while it can be removed from
others (SoupTransport, OBEXTransport)
A boost::signal is emitted when the global SuspendFlags change.
Automatic connection management is used to disconnect instances which
are managed by boost::shared_ptr. For example, the current transport's
cancel() method is called when the state changes to "aborted".
The early connection phase of the OBEX transport can now also be
aborted (which required cleaning up that transport!).
Currently watching for aborts via the event loop only works for real
Unix signals, but not for "abort" flags set in derived SyncContext
instances. The plan is to change that by allowing a "set abort" on
SuspendFlags and thus making
SyncContext::checkForSuspend/checkForAbort() redundant.
The new class is used as follows:
- syncevolution command line without daemon uses it to control
suspend/abort directly
- syncevolution command line as client of syncevo-dbus-server
connects to the state change signal and relays it to the
syncevo-dbus-server session via D-Bus; now all operations
are protected like that, not just syncing
- syncevo-dbus-server installs its own handlers for SIGINT
and SIGTERM and tries to shut down when either of them
is received. SuspendFlags then doesn't activate its own
handler. Instead that handler is invoked by the
syncevo-dbus-server niam() handler, to suspend or abort
a running sync. Once syncs run in a separate process, the
syncevo-dbus-server should request that these processes
suspend or abort before shutting down itself.
- The syncevo-local-sync helper ignores SIGINT after a sync
has started. It would otherwise receive that signal when forked by
syncevolution in non-daemon mode and the user presses
CTRL-C. Now the signal is only handled in the parent
process, which suspends as part of its own side of
the SyncML session and aborts by sending a SIGTERM+SIGINT
to syncevo-local-sync. SIGTERM in syncevo-local-sync is
handled by SuspendFlags and is meant to abort whatever
is going on there at the moment (see below).
Aborting long-running operations like import/export or communication
via CardDAV or ActiveSync still needs further work. The backends need
to check the abort state and return early instead of continuing.
2012-01-19 16:11:22 +01:00
#include <syncevo/SuspendFlags.h>
2013-04-24 12:00:45 +02:00
#include <syncevo/ThreadSupport.h>
2013-07-29 16:51:26 +02:00
#include <syncevo/IdentityProvider.h>
2005-11-26 22:16:03 +01:00
2009-10-05 14:49:32 +02:00
#include <syncevo/SafeConfigNode.h>
2012-06-05 10:27:29 +02:00
#include <syncevo/IniConfigNode.h>
2009-04-15 21:03:26 +02:00
2009-10-05 14:49:32 +02:00
#include <syncevo/LogStdout.h>
#include <syncevo/TransportAgent.h>
#include <syncevo/CurlTransportAgent.h>
#include <syncevo/SoupTransportAgent.h>
OBEX Client Transport: in-process OBEX client (binding over Bluetooth, #5188)
Outgoing OBEX connection implementation; currently it only binds over
Bluetooth. It integrates with the glib main loop so that operations in
the transport do not block the whole application.
It uses Bluetooth SDP to automatically discover the service channel
providing the SyncML service; the process is asynchronous. The
callbacks sdp_source_cb and sdp_callback are used for this purpose:
sdp_source_cb is a GIOChannel watch event callback which polls the
underlying SDP socket, while sdp_callback is invoked by BlueZ while
processing SDP packets.
The callbacks obex_fd_source and obex_callback handle the OBEX
processing (Connect, Put, Get, Disconnect): obex_fd_source is a
GIOChannel event source callback which polls the underlying OBEX
interface, and obex_callback is invoked by libopenobex when it needs
to deliver events to the application. Connect is split into several
steps; see CONNECT_STATUS for more detail.
Disconnect is invoked when shutDown is called, when processing in
obex_fd_source_cb fails, on timeout, or on user suspension. It first
tries to send a "Disconnect" command to the server and waits for the
response; if that operation fails, it disconnects anyway. It is
important to call wait after shutdown to ensure the transport is
properly cleaned up.
Each callback function is protected by a try/catch block to ensure
that no exception is thrown across the C stack. This is important
because otherwise the application would abort if an exception really
were thrown.
Several smart pointers are used to avoid potential resource leaks.
After initialization the resources are held by ObexTransportAgent. On
entering a function, the smart pointer is copied onto the local stack
and handed back to ObexTransportAgent only if the whole process
succeeded and we want to continue. First, this ensures that the
resources are released at the latest when ObexTransportAgent is
destructed; second, it releases resources as early as possible. For
example, cxxptr<ObexEvent> releases its resource during each wait() so
that the underlying poll is not processed when no transport activity
is expected by the application.
"SyncURL" is used consistently for the address of the remote peer to
contact.
2009-11-13 06:13:12 +01:00
#include <syncevo/ObexTransportAgent.h>
support local sync (BMC #712)
Local sync is configured with a new syncURL = local://<context> where
<context> identifies the set of databases to synchronize with. The
URI of each source in the config identifies the source in that context
to synchronize with.
The databases in that context run a SyncML session as client. The
config itself is for a server. Reversing these roles is possible by
putting the config into the other context.
A sync is started by the server side, via the new LocalTransportAgent.
That agent forks, sets up the client side, then passes messages
back and forth via stream sockets. Stream sockets are useful because
unexpected peer shutdown can be detected.
Running the server side requires a few changes:
- do not send a SAN message, the client will start the
message exchange based on the config
- wait for that message before doing anything
The client side is more difficult:
- Per-peer config nodes do not exist in the target context.
They are stored in a hidden .<context> directory inside
the server config tree. This depends on the new "registering nodes
in the tree" feature. All nodes are hidden, because users
are not meant to edit any of them. Their name is intentionally
chosen like traditional nodes so that removing the config
also removes the new files.
- All relevant per-peer properties must be copied from the server
config (log level, printing changes, ...); they cannot be set
differently.
Because two separate SyncML sessions are used, we end up with
two normal session directories and log files.
The implementation is not complete yet:
- no glib support, so cannot be used in syncevo-dbus-server
- no support for CTRL-C and abort
- no interactive password entry for target sources
- unexpected slow syncs are detected on the client side, but
not reported properly on the server side
2010-07-31 18:28:53 +02:00
#include <syncevo/LocalTransportAgent.h>
2005-11-26 22:16:03 +01:00
2018-01-16 17:17:34 +01:00
#include <functional>
2005-11-26 22:16:03 +01:00
#include <list>
#include <memory>
#include <vector>
2006-03-19 22:37:30 +01:00
#include <sstream>
#include <fstream>
#include <iomanip>
#include <iostream>
2006-05-26 14:49:19 +02:00
#include <stdexcept>
2009-06-10 13:32:49 +02:00
#include <algorithm>
2009-06-26 07:55:48 +02:00
#include <ctime>
2005-11-26 22:16:03 +01:00
2008-03-30 21:08:19 +02:00
#include <boost/algorithm/string/predicate.hpp>
2009-06-10 17:28:45 +02:00
#include <boost/algorithm/string/join.hpp>
2009-07-09 18:58:21 +02:00
#include <boost/algorithm/string/split.hpp>
2010-08-01 21:15:02 +02:00
#include <boost/utility.hpp>
2008-03-30 21:08:19 +02:00
2006-03-19 22:37:30 +01:00
#include <sys/stat.h>
2010-01-04 17:54:32 +01:00
#include <sys/wait.h>
2006-03-19 22:37:30 +01:00
#include <pwd.h>
#include <unistd.h>
2009-07-02 06:28:33 +02:00
#include <signal.h>
2006-03-19 22:37:30 +01:00
#include <dirent.h>
#include <errno.h>
2009-10-02 17:23:53 +02:00
#include <pthread.h>
#include <signal.h>
2006-03-19 22:37:30 +01:00
2009-10-05 14:49:32 +02:00
#include <synthesis/enginemodulebridge.h>
#include <synthesis/SDK_util.h>
2009-11-13 05:31:06 +01:00
#include <synthesis/san.h>
2009-01-18 22:14:24 +01:00
2013-10-01 09:26:41 +02:00
#ifdef USE_DLT
# include <dlt.h>
#endif
2010-02-18 10:24:05 +01:00
#include "test.h"
2009-10-05 14:49:32 +02:00
#include <syncevo/declarations.h>
2009-10-02 17:23:53 +02:00
SE_BEGIN_CXX
2009-10-25 22:46:09 +01:00
SyncContext *SyncContext::m_activeContext;
2009-06-26 07:55:48 +02:00
2009-11-30 11:23:06 +01:00
static const char *LogfileBasename = "syncevolution-log";
2014-03-19 14:39:42 +01:00
static std::string RealPath(const std::string &path)
{
    std::string buffer;
2018-01-30 17:00:24 +01:00
    char *newPath = realpath(path.c_str(), nullptr);
2014-03-19 14:39:42 +01:00
    if (newPath) {
        buffer = newPath;
        free(newPath);
        return buffer;
    } else {
        return path;
    }
}
config: share properties between peers, configuration view without peer
This patch makes the configuration layout with per-source and per-peer
properties the default for new configurations. Migrating old
configurations is not implemented. The command line has not
been updated at all (MB #8048). The D-Bus API is fairly complete,
only listing sessions independently of a peer is missing (MB #8049).
The key concept of this patch is that a pseudo-node implemented by
MultiplexConfigNode provides a view on all user-visible or hidden
properties. Based on the property name, it looks up the property
definition, picks one of the underlying nodes based on the property
visibility and sharing attributes, then reads and writes the property
via that node. Clearing properties is not needed and not implemented,
because of its uncertain semantics (really remove shared properties?!).
The "sync" property must be available both in the per-source config
(to pick a backend independently of a specific peer) and in the
per-peer configuration (to select a specific data format). This is
solved by making the property special (SHARED_AND_UNSHARED flag) and
then writing it into two nodes. Reading is done from the more specific
per-peer node, with the other node acting as fallback.
The MultiplexConfigNode has to implement the FilterConfigNode API
because it is used as one by the code which sets passwords in the
filter. For this to work, the base FilterConfigNode implementation must
use virtual method calls.
The TestDBusSessionConfig.testUpdateConfigError checks that the error
generated for an incorrect "sync" property contains the path of the
config.ini file. The meaning of the error message in this case is that
the wrong value is *for* that file, not that the property is already
wrong *in* the file, but that's okay.
The MultiplexConfigNode::getName() can only return a fixed name. To
satisfy the test and because it is the right choice at the moment for
all properties which might trigger such an error, it now is configured
so that it returns the most specific path of the non-shared
properties.
"syncevolution --print-config" shows errors that are in files. Wrong
command line parameters are rejected with a message that refers to the
command line parameter ("--source-property sync=foo").
A future enhancement would be to make the name depend on the
property (MB#8037).
Because an empty string is now a valid configuration name (referencing
the source properties without the per-peer properties) several checks
for such empty strings were removed. The corresponding tests were
updated or removed. Instead of talking about "server not found",
the more neutral name "configuration" is used. The new
TestMultipleConfigs.testSharing() covers the semantic of sharing
properties between multiple configs.
Access to non-existent nodes is routed into the new
DevNullConfigNode. It always returns an empty string when reading and
throws an error when trying to write into it. Unintentionally writing
into a config.ini file therefore became harder, compared with the
previous instantiation of SyncContext() with empty config name.
The parsing of incoming messages uses a SyncContext which is bound to
a VolatileConfigNode. This allows reading and writing of properties
without any risk of touching files on disk.
The patch which introduced the new config nodes was not complete yet
with regards to the new layout. Removing nodes and trees used the
wrong root path: getRootPath() refers to the most specific peer
config, m_root to the part without the peer path. SyncConfig must
distinguish between a view with peer-specific properties and one
without, which is done by setting the m_peerPath only if a peer was
selected. Copying properties must know whether writing peer-specific
properties ("unshared") is wanted, because trying to do it for a view
without those properties would trigger the DevNullConfigNode
exception.
SyncConfig::removeSyncSource() removes source properties both in the
shared part of the config and in *all* peers. This is used by
Session.SetConfig() for the case that the caller is a) setting instead
of updating the config and b) not providing any properties for the
source. This is clearly a risky operation which should not be done
when there are other peers which still use the source. We might have a
problem in our D-Bus API definition for "removing a peer
configuration" (MB #8059) because it always has an effect on other
peers.
The property registries were initialized implicitly before. With the
recent changes it happened that SyncContext was initialized to analyze
a SyncML message without initializing the registry, which caused
getRemoteDevID() to use a property where the hidden flag had not been
set yet.
Moving all of these additional flags into the property constructors is
awkward (which is why they are in the getRegistry() methods), so this
was fixed by initializing the properties in the SyncConfig
constructors by asking for the registries. Because there is no way to
access them except via the registry and SyncConfig instances (*), this
should ensure that the properties are valid when used.
(*) Exceptions are some properties which are declared publicly to give
access to their name. Nobody's perfect...
2009-11-13 20:02:44 +01:00
SyncContext::SyncContext()
{
    init();
}
2009-10-05 14:49:32 +02:00
SyncContext::SyncContext(const string &server,
                         bool doLogging) :
2009-10-06 17:22:47 +02:00
    SyncConfig(server),
    m_server(server)
{
    init();
    m_doLogging = doLogging;
}
local sync: avoid confusion about what data is changed
In local sync the terms "local" and "remote" (in SyncReport, "Data
modified locally") do not always apply and can be confusing. Replaced
with explicitly mentioning the context.
The source name is also no longer unique. It is extended in the local
sync case (and only in that case) by adding a <context>/ prefix to the
source name.
Here is an example of the modified output:
$ syncevolution google
[INFO] @default/itodo20: inactive
[INFO] @default/addressbook: inactive
[INFO] @default/calendar+todo: inactive
[INFO] @default/memo: inactive
[INFO] @default/ical20: inactive
[INFO] @default/todo: inactive
[INFO] @default/file_calendar+todo: inactive
[INFO] @default/file_vcard21: inactive
[INFO] @default/vcard30: inactive
[INFO] @default/text: inactive
[INFO] @default/file_itodo20: inactive
[INFO] @default/vcard21: inactive
[INFO] @default/file_ical20: inactive
[INFO] @default/file_vcard30: inactive
[INFO] @google/addressbook: inactive
[INFO] @google/memo: inactive
[INFO] @google/todo: inactive
[INFO] @google/calendar: starting normal sync, two-way
Local data changes to be applied remotely during synchronization:
*** @google/calendar ***
after last sync | current data
removed since last sync <
> added since last sync
-------------------------------------------------------------------------------
BEGIN:VCALENDAR BEGIN:VCALENDAR
...
END:VCALENDAR END:VCALENDAR
-------------------------------------------------------------------------------
[INFO] @google/calendar: sent 1/2
[INFO] @google/calendar: sent 2/2
Local data changes to be applied remotely during synchronization:
*** @default/calendar ***
no changes
[INFO] @default/calendar: started
[INFO] @default/calendar: updating "created in Google, online"
[INFO] @default/calendar: updating "created in Google - mod2, online"
[INFO] @google/calendar: started
[INFO] @default/calendar: inactive
[INFO] @google/calendar: normal sync done successfully
Synchronization successful.
Changes applied during synchronization:
+---------------|-----------------------|-----------------------|-CON-+
| | @default | @google | FLI |
| Source | NEW | MOD | DEL | ERR | NEW | MOD | DEL | ERR | CTS |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| calendar | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| disabled, 0 KB sent by client, 2 KB received |
| item(s) in database backup: 3 before sync, 3 after it |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| start Mon Oct 25 10:03:24 2010, duration 0:13min |
| synchronization completed successfully |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
Data modified @default during synchronization:
*** @default/calendar ***
before sync | after sync
removed during sync <
> added during sync
-------------------------------------------------------------------------------
BEGIN:VCALENDAR BEGIN:VCALENDAR
VERSION:2.0 VERSION:2.0
...
END:VCALENDAR END:VCALENDAR
-------------------------------------------------------------------------------
pohly@pohly-mobl1:/tmp/syncevolution/src$
Synchronization successful.
Changes applied during synchronization:
+---------------|-----------------------|-----------------------|-CON-+
| | @google | @default | FLI |
| Source | NEW | MOD | DEL | ERR | NEW | MOD | DEL | ERR | CTS |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| calendar | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 |
| two-way, 2 KB sent by client, 0 KB received |
| item(s) in database backup: 2 before sync, 2 after it |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| start Mon Oct 25 10:03:24 2010, duration 0:13min |
| synchronization completed successfully |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
Data modified @google during synchronization:
*** @google/calendar ***
no changes
2010-10-25 10:34:23 +02:00
SyncContext::SyncContext(const string &client,
                         const string &server,
                         const string &rootPath,
2018-01-16 17:17:34 +01:00
                         const std::shared_ptr<TransportAgent> &agent,
                         bool doLogging) :
    SyncConfig(client,
               std::shared_ptr<ConfigTree>(),
               rootPath),
    m_server(client),
    m_localClientRootPath(rootPath),
    m_agent(agent)
{
    init();
    initLocalSync(server);
    m_doLogging = doLogging;
}
void SyncContext::initLocalSync(const string &config)
{
    m_localSync = true;
    string tmp;
    splitConfigString(config, tmp, m_localPeerContext);
    m_localPeerContext.insert(0, "@");
}
config: share properties between peers, configuration view without peer
This patch makes the configuration layout with per-source and per-peer
properties the default for new configurations. Migrating old
configurations is not implemented. The command line has not
been updated at all (MB #8048). The D-Bus API is fairly complete,
only listing sessions independently of a peer is missing (MB #8049).
The key concept of this patch is that a pseudo-node implemented by
MultiplexConfigNode provides a view on all user-visible or hidden
properties. Based on the property name, it looks up the property
definition, picks one of the underlying nodes based on the property
visibility and sharing attributes, then reads and writes the property
via that node. Clearing properties is not needed and not implemented,
because of its uncertain semantics (really remove shared properties?!).
The "sync" property must be available both in the per-source config
(to pick a backend independently of a specific peer) and in the
per-peer configuration (to select a specific data format). This is
solved by making the property special (SHARED_AND_UNSHARED flag) and
then writing it into two nodes. Reading is done from the more specific
per-peer node, with the other node acting as fallback.
The MultiplexConfigNode has to implement the FilterConfigNode API
because it is used as one by the code which sets passwords in the
filter. For this to work, the base FilterConfigNode implementation must
use virtual method calls.
The TestDBusSessionConfig.testUpdateConfigError checks that the error
generated for an incorrect "sync" property contains the path of the
config.ini file. The meaning of the error message in this case is that
the wrong value is *for* that file, not that the property is already
wrong *in* the file, but that's okay.
The MultiplexConfigNode::getName() can only return a fixed name. To
satisfy the test and because it is the right choice at the moment for
all properties which might trigger such an error, it now is configured
so that it returns the most specific path of the non-shared
properties.
"syncevolution --print-config" shows errors that are in files. Wrong
command line parameters are rejected with a message that refers to the
command line parameter ("--source-property sync=foo").
A future enhancement would be to make the name depend on the
property (MB#8037).
Because an empty string is now a valid configuration name (referencing
the source properties without the per-peer properties) several checks
for such empty strings were removed. The corresponding tests were
updated or removed, respectively. Instead of talking about "server not found",
the more neutral name "configuration" is used. The new
TestMultipleConfigs.testSharing() covers the semantics of sharing
properties between multiple configs.
Access to non-existent nodes is routed into the new
DevNullConfigNode. It always returns an empty string when reading and
throws an error when trying to write into it. Unintentionally writing
into a config.ini file therefore became harder, compared with the
previous instantiation of SyncContext() with empty config name.
The parsing of incoming messages uses a SyncContext which is bound to
a VolatileConfigNode. This allows reading and writing of properties
without any risk of touching files on disk.
The patch which introduced the new config nodes was not complete yet
with regards to the new layout. Removing nodes and trees used the
wrong root path: getRootPath() refers to the most specific peer
config, m_root to the part without the peer path. SyncConfig must
distinguish between a view with peer-specific properties and one
without, which is done by setting the m_peerPath only if a peer was
selected. Copying properties must know whether writing peer-specific
properties ("unshared") is wanted, because trying to do it for a view
without those properties would trigger the DevNullConfigNode
exception.
SyncConfig::removeSyncSource() removes source properties both in the
shared part of the config and in *all* peers. This is used by
Session.SetConfig() for the case that the caller is a) setting instead
of updating the config and b) not providing any properties for the
source. This is clearly a risky operation which should not be done
when there are other peers which still use the source. We might have a
problem in our D-Bus API definition for "removing a peer
configuration" (MB #8059) because it always has an effect on other
peers.
The property registries were initialized implicitly before. With the
recent changes it happened that SyncContext was initialized to analyze
a SyncML message without initializing the registry, which caused
getRemoteDevID() to use a property where the hidden flag had not been
set yet.
Moving all of these additional flags into the property constructors is
awkward (which is why they are in the getRegistry() methods), so this
was fixed by initializing the properties in the SyncConfig
constructors by asking for the registries. Because there is no way to
access them except via the registry and SyncConfig instances (*), this
should ensure that the properties are valid when used.
(*) Exceptions are some properties which are declared publicly to have access
to their name. Nobody's perfect...
2009-11-13 20:02:44 +01:00
void SyncContext::init()
{
config: share properties between peers, configuration view without peer
This patch makes the configuration layout with per-source and per-peer
properties the default for new configurations. Migrating old
configurations is not implemented. The command line has not
been updated at all (MB #8048). The D-Bus API is fairly complete,
only listing sessions independently of a peer is missing (MB #8049).
The key concept of this patch is that a pseudo-node implemented by
MultiplexConfigNode provides a view on all user-visible or hidden
properties. Based on the property name, it looks up the property
definition, picks one of the underlying nodes based on the property
visibility and sharing attributes, then reads and writes the property
via that node. Clearing properties is not needed and not implemented,
because of the uncertain semantic (really remove shared properties?!).
The "sync" property must be available both in the per-source config
(to pick a backend independently of a specific peer) and in the
per-peer configuration (to select a specific data format). This is
solved by making the property special (SHARED_AND_UNSHARED flag) and
then writing it into two nodes. Reading is done from the more specific
per-peer node, with the other node acting as fallback.
The MultiplexConfigNode has to implement the FilterConfigNode API
because it is used as one by the code which sets passwords in the
filter. For this to work, the base FilterConfigNode implementation must
use virtual method calls.
The TestDBusSessionConfig.testUpdateConfigError checks that the error
generated for an incorrect "sync" property contains the path of the
config.ini file. The meaning of the error message in this case is that
the wrong value is *for* that file, not that the property is already
wrong *in* the file, but that's okay.
The MultiplexConfigNode::getName() can only return a fixed name. To
satisfy the test and because it is the right choice at the moment for
all properties which might trigger such an error, it now is configured
so that it returns the most specific path of the non-shared
properties.
"syncevolution --print-config" shows errors that are in files. Wrong
command line parameters are rejected with a message that refers to the
command line parameter ("--source-property sync=foo").
A future enhancement would be to make the name depend on the
property (MB#8037).
Because an empty string is now a valid configuration name (referencing
the source properties without the per-peer properties) several checks
for such empty strings were removed. The corresponding tests were
updated or removed accordingly. Instead of talking about "server not found",
the more neutral name "configuration" is used. The new
TestMultipleConfigs.testSharing() covers the semantics of sharing
properties between multiple configs.
Access to non-existent nodes is routed into the new
DevNullConfigNode. It always returns an empty string when reading and
throws an error when trying to write into it. Unintentionally writing
into a config.ini file therefore became harder, compared with the
previous instantiation of SyncContext() with empty config name.
The parsing of incoming messages uses a SyncContext which is bound to
a VolatileConfigNode. This allows reading and writing of properties
without any risk of touching files on disk.
The patch which introduced the new config nodes was not complete yet
with regards to the new layout. Removing nodes and trees used the
wrong root path: getRootPath() refers to the most specific peer
config, m_root to the part without the peer path. SyncConfig must
distinguish between a view with peer-specific properties and one
without, which is done by setting the m_peerPath only if a peer was
selected. Copying properties must know whether writing peer-specific
properties ("unshared") is wanted, because trying to do it for a view
without those properties would trigger the DevNullConfigNode
exception.
SyncConfig::removeSyncSource() removes source properties both in the
shared part of the config and in *all* peers. This is used by
Session.SetConfig() for the case that the caller is a) setting instead
of updating the config and b) not providing any properties for the
source. This is clearly a risky operation which should not be done
when there are other peers which still use the source. We might have a
problem in our D-Bus API definition for "removing a peer
configuration" (MB #8059) because it always has an effect on other
peers.
The property registries were initialized implicitly before. With the
recent changes it happened that SyncContext was initialized to analyze
a SyncML message without initializing the registry, which caused
getRemoteDevID() to use a property where the hidden flag had not been
set yet.
Moving all of these additional flags into the property constructors is
awkward (which is why they are in the getRegistry() methods), so this
was fixed by initializing the properties in the SyncConfig
constructors by asking for the registries. Because there is no way to
access them except via the registry and SyncConfig instances (*), this
should ensure that the properties are valid when used.
(*) Exceptions are some properties which are declared publicly to give access
to their name. Nobody's perfect...
2009-11-13 20:02:44 +01:00
    m_doLogging = false;
    m_quiet = false;
    m_dryrun = false;
2018-01-26 15:13:37 +01:00
    m_keepPhotoData = false;
support local sync (BMC #712)
Local sync is configured with a new syncURL = local://<context> where
<context> identifies the set of databases to synchronize with. The
URI of each source in the config identifies the source in that context
to synchronize with.
The databases in that context run a SyncML session as client. The
config itself is for a server. Reversing these roles is possible by
putting the config into the other context.
A sync is started by the server side, via the new LocalTransportAgent.
That agent forks, sets up the client side, then passes messages
back and forth via stream sockets. Stream sockets are useful because
unexpected peer shutdown can be detected.
Running the server side requires a few changes:
- do not send a SAN message, the client will start the
message exchange based on the config
- wait for that message before doing anything
The client side is more difficult:
- Per-peer config nodes do not exist in the target context.
They are stored in a hidden .<context> directory inside
the server config tree. This depends on the new "registering nodes
in the tree" feature. All nodes are hidden, because users
are not meant to edit any of them. Their name is intentionally
chosen like traditional nodes so that removing the config
also removes the new files.
- All relevant per-peer properties must be copied from the server
config (log level, printing changes, ...); they cannot be set
differently.
Because two separate SyncML sessions are used, we end up with
two normal session directories and log files.
The implementation is not complete yet:
- no glib support, so cannot be used in syncevo-dbus-server
- no support for CTRL-C and abort
- no interactive password entry for target sources
- unexpected slow syncs are detected on the client side, but
not reported properly on the server side
2010-07-31 18:28:53 +02:00
    m_localSync = false;
2009-11-13 20:02:44 +01:00
    m_serverMode = false;
2011-04-19 16:56:35 +02:00
    m_serverAlerted = false;
2011-04-21 12:36:12 +02:00
    m_configNeeded = true;
2010-03-04 02:53:04 +01:00
    m_firstSourceAccess = true;
2014-08-29 11:27:07 +02:00
    m_quitSync = false;
2010-03-12 09:34:28 +01:00
    m_remoteInitiated = false;
2018-01-30 17:00:24 +01:00
    m_sourceListPtr = nullptr;
2014-01-31 17:30:04 +01:00
    m_syncFreeze = SYNC_FREEZE_NONE;
2005-11-26 22:16:03 +01:00
}
2009-10-05 14:49:32 +02:00
SyncContext::~SyncContext()
2005-11-26 22:16:03 +01:00
{
}
2010-08-01 21:15:02 +02:00
/**
 * Utility code for parsing and comparing
 * log dir names. Also a binary predicate for
 * sorting them.
 */
class LogDirNames {
public:
    // internal prefix for backup directory name: "SyncEvolution-"
    static const char * const DIR_PREFIX;

    /**
     * Compare two directories by the creation time encoded
     * in the directory name, sorting them in ascending order.
     */
    bool operator()(const string &str1, const string &str2) {
        string iDirPath1, iStr1;
        string iDirPath2, iStr2;
        parseLogDir(str1, iDirPath1, iStr1);
        parseLogDir(str2, iDirPath2, iStr2);
        string dirPrefix1, peerName1, dateTime1;
        parseDirName(iStr1, dirPrefix1, peerName1, dateTime1);
        string dirPrefix2, peerName2, dateTime2;
        parseDirName(iStr2, dirPrefix2, peerName2, dateTime2);
        return dateTime1 < dateTime2;
    }

    /**
     * Extract the backup directory name from a full backup path.
     * For example, the full path "/home/xxx/.cache/syncevolution/default/funambol-2009-12-08-14-05"
     * is parsed as "/home/xxx/.cache/syncevolution/default" and "funambol-2009-12-08-14-05".
     */
    static void parseLogDir(const string &fullpath, string &dirPath, string &dirName) {
        string iFullpath = boost::trim_right_copy_if(fullpath, boost::is_any_of("/"));
        size_t off = iFullpath.find_last_of('/');
        if (off != iFullpath.npos) {
            dirPath = iFullpath.substr(0, off);
            dirName = iFullpath.substr(off + 1);
        } else {
            dirPath = "";
            dirName = iFullpath;
        }
    }

    // escape '-' and '_' in the peer name
    static string escapePeer(const string &prefix) {
        string escaped = prefix;
        boost::replace_all(escaped, "_", "__");
        boost::replace_all(escaped, "-", "_+");
        return escaped;
    }

    // un-escape '_+' and '__' in the peer name
    static string unescapePeer(const string &escaped) {
        string prefix = escaped;
        boost::replace_all(prefix, "_+", "-");
        boost::replace_all(prefix, "__", "_");
        return prefix;
    }

    /**
     * Parse a directory name into dirPrefix (empty or DIR_PREFIX), peerName, dateTime.
     * peerName must be unescaped by the caller to get the real string.
     * If the directory name is in the format '[DIR_PREFIX]-peer[@context]-year-month-day-hour-min',
     * then parsing is successful: the three strings are set and true is returned.
     * Otherwise false is returned.
     * Whether the directory name matches the peer name is not checked here.
     */
    static bool parseDirName(const string &dir, string &dirPrefix, string &config, string &dateTime) {
        string iDir = dir;
        if (!boost::starts_with(iDir, DIR_PREFIX)) {
            dirPrefix = "";
        } else {
            dirPrefix = DIR_PREFIX;
            boost::erase_first(iDir, DIR_PREFIX);
        }
        size_t off = iDir.find('-');
        if (off != iDir.npos) {
            config = iDir.substr(0, off);
            dateTime = iDir.substr(off);
            // m_prefix doesn't contain the peer name or it equals dirPrefix plus peerName
            return checkDirName(dateTime);
        }
        return false;
    }

    // check that the directory name conforms to the format we write
    static bool checkDirName(const string &value) {
        const char *str = value.c_str();
        /** check whether the string after the prefix is a valid date-time we wrote;
         * the format is -YYYY-MM-DD-HH-MM plus an optional sequence number */
        static char table[] = { '-', '9', '9', '9', '9', // year
                                '-', '1', '9',           // month
                                '-', '3', '9',           // day
                                '-', '2', '9',           // hour
                                '-', '5', '9'            // minute
        };
        for (size_t i = 0; i < sizeof(table) / sizeof(table[0]) && *str; i++, str++) {
            switch (table[i]) {
            case '-':
                if (*str != '-')
                    return false;
                break;
            case '1':
                if (*str < '0' || *str > '1')
                    return false;
                break;
            case '2':
                if (*str < '0' || *str > '2')
                    return false;
                break;
            case '3':
                if (*str < '0' || *str > '3')
                    return false;
                break;
            case '5':
                if (*str < '0' || *str > '5')
                    return false;
                break;
            case '9':
                if (*str < '0' || *str > '9')
                    return false;
                break;
            default:
                return false;
            };
        }
        return true;
    }
};
2006-03-19 22:37:30 +01:00
Logging: thread-safe
Logging must be thread-safe, because the glib log callback may be
called from arbitrary threads. This becomes more important with EDS
3.8, because it shifts the execution of synchronous calls into
threads.
Thread-safe logging will also be required for running the Synthesis
engine multithreaded, to overlap SyncML client communication with
preparing the sources.
To achieve this, the core Logging module protects its global data with
a recursive mutex. A recursive mutex is used because logging calls
themselves may be recursive, so ensuring single-lock semantics would be
hard.
Ref-counted boost pointers are used to track usage of Logger
instances. This allows removal of an instance from the logging stack
while it may still be in use. Destruction then will be delayed until
the last user of the instance drops it. The instance itself must be
prepared to handle this.
The Logging mutex is available to users of the Logging module. Code
which holds the logging mutex should not lock any other mutex, to
avoid deadlocks. The new code is a bit fuzzy on that, because it calls
other modules (glib, Synthesis engine) while holding the mutex. If
that becomes a problem, then the mutex can be unlocked, at the risk of
leading to reordered log messages in different channels (see
ServerLogger).
Making all loggers follow the new rules uses different
approaches.
Loggers like the one in the local transport child which use a parent
logger and an additional ref-counted class like the D-Bus helper keep
a weak reference to the helper and lock it before use. If it is gone
already, the second logging part is skipped. This is the recommended
approach.
In cases where introducing ref-counting for the second class would
have been too intrusive (Server and SessionHelper), a fake
boost::shared_ptr without a destructor is used as an intermediate step
towards the recommended approach. To avoid race conditions while the
instance these fake pointers refer to destructs, an explicit
"remove()" method is necessary which must hold the Logging
mutex. Using the potentially removed pointer must do the same. Such
fake ref-counted Loggers cannot be used as parent logger of other
loggers, because then remove() would not be able to drop the last
reference to the fake boost::shared_ptr.
Loggers with fake boost::shared_ptr must keep a strong reference,
because no-one else does. The goal is to turn this into weak
references eventually.
LogDir must protect concurrent access to m_report and the Synthesis
engine.
The LogRedirectLogger assumes that it is still the active logger while
disabling itself. The remove() callback method will always be invoked
before removing a logger from the stack.
2013-04-09 21:32:35 +02:00
class LogDir ;
/**
 * Helper class for LogDir: acts as proxy for logging into
 * the LogDir's reports and log file.
 */
class LogDirLogger : public Logger
{
    Logger::Handle m_parentLogger; /**< the logger which was active before we started to intercept messages */
2018-01-16 17:17:34 +01:00
    std::weak_ptr<LogDir> m_logdir; /**< grants access to report and Synthesis engine */
2013-10-01 09:26:41 +02:00
#ifdef USE_DLT
    bool m_useDLT; /**< SyncEvolution and libsynthesis are logging to DLT */
#endif
2013-04-09 21:32:35 +02:00
public:
2018-01-16 17:17:34 +01:00
    LogDirLogger(const std::weak_ptr<LogDir> &logdir);
2013-04-09 21:32:35 +02:00
    virtual void remove() throw();
    virtual void messagev(const MessageOptions &options,
                          const char *format,
                          va_list args);
};
// This class owns the logging directory. It is responsible
2006-03-19 22:37:30 +01:00
// for redirecting output at the start and end of sync (even
2013-04-09 21:32:35 +02:00
// in case of exceptions thrown!).
2018-01-16 17:17:34 +01:00
class LogDir : private boost::noncopyable, private LogDirNames, public enable_weak_from_this<LogDir> {
2009-10-05 14:49:32 +02:00
    SyncContext &m_client;
2009-04-16 17:22:31 +02:00
    string m_logdir; /**< configured backup root dir */
2006-03-19 22:37:30 +01:00
    int m_maxlogdirs; /**< number of backup dirs to preserve, 0 if unlimited */
    string m_prefix;  /**< common prefix of backup dirs */
    string m_path;    /**< path to current logging and backup dir */
2009-07-07 12:55:58 +02:00
    string m_logfile; /**< Path to the log file there, empty if not writing one.
                           The file is no longer written by this class, nor
                           does the class control its base name. Writing the
                           log file is enabled by the XML configuration that
                           we prepare for the Synthesis engine; the base name
                           of the file is hard-coded in the engine. Despite
                           that, this class is still the central point to ask
                           for the name of the log file. */
2012-05-22 11:18:49 +02:00
    boost::scoped_ptr<SafeConfigNode> m_info; /**< key/value representation of sync information */
2013-04-09 21:32:35 +02:00
// Access to m_report and m_client must be thread-safe as soon as
// LogDirLogger is active, because they are shared between main
// thread and any thread which might log errors.
    friend class LogDirLogger;
2009-04-15 21:03:26 +02:00
    bool m_readonly; /**< m_info is not to be written to */
2009-04-16 09:26:14 +02:00
    SyncReport *m_report; /**< record start/end times here */
2006-03-19 22:37:30 +01:00
2013-04-09 21:32:35 +02:00
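The recommended weak-reference pattern described above can be sketched as follows. This is an illustrative reduction, not the actual SyncEvolution classes: `Helper` stands in for a ref-counted class like the D-Bus helper, and `ChildLogger` for a logger that must tolerate the helper disappearing first.

```cpp
#include <cassert>
#include <memory>
#include <string>

// Stand-in for a ref-counted helper; counts how many messages reached it.
class Helper {
public:
    int logged = 0;
    void log(const std::string &) { ++logged; }
};

// Logger which forwards to the helper only while the helper is still alive.
class ChildLogger {
    std::weak_ptr<Helper> m_helper;     // weak: does not keep the helper alive
public:
    explicit ChildLogger(const std::shared_ptr<Helper> &helper) : m_helper(helper) {}
    void messagev(const std::string &msg) {
        // Lock the weak reference before use; if the helper is gone
        // already, this part of the logging is skipped.
        if (auto helper = m_helper.lock()) {
            helper->log(msg);
        }
    }
};
```

Because the logger never holds a strong reference, destroying the helper is race-free: a concurrent `messagev()` either gets a temporary strong reference or skips the call.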
PushLogger<LogDirLogger> m_logger;      /**< active logger */

LogDir(SyncContext &client) : m_client(client), m_info(nullptr), m_readonly(false), m_report(nullptr)
{
    // Set the default log directory. It is overwritten later on if the user
    // selected a different location. SyncEvolution >= 0.9 alpha and < 0.9
    // beta 2 used XDG_DATA_HOME because the logs and database dumps were
    // not considered "non-essential data files". Because XDG_DATA_HOME is
    // searched for .desktop files and creating large amounts of other files
    // there slows down that search, the default was changed to XDG_CACHE_HOME.
    //
    // To migrate old installations seamlessly, this code renames the old
    // default directory to the new one. Errors (like "not found") are
    // silently ignored.
    mkdir_p(SubstEnvironment("${XDG_CACHE_HOME}").c_str());
    rename(SubstEnvironment("${XDG_DATA_HOME}/applications/syncevolution").c_str(),
           SubstEnvironment("${XDG_CACHE_HOME}/syncevolution").c_str());

    string path = m_client.getLogDir();
    if (path.empty()) {
        path = "${XDG_CACHE_HOME}/syncevolution";
    }
    setLogdir(path);
}
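As a rough sketch of what the `${...}` expansion in `SubstEnvironment()` does: replace each `${NAME}` reference with the value of the corresponding environment variable. This is an assumption-laden simplification (the real helper may also supply XDG defaults for unset variables); `substEnv` is a hypothetical name.

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// Expand every "${NAME}" in path with getenv("NAME"); unset variables
// expand to the empty string. Unterminated references are left alone.
std::string substEnv(const std::string &path) {
    std::string result = path;
    size_t start;
    while ((start = result.find("${")) != std::string::npos) {
        size_t end = result.find('}', start);
        if (end == std::string::npos) {
            break;                               // unterminated reference
        }
        std::string name = result.substr(start + 2, end - start - 2);
        const char *value = getenv(name.c_str());
        result.replace(start, end - start + 1, value ? value : "");
    }
    return result;
}
```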
Logging: thread-safe
Logging must be thread-safe, because the glib log callback may be
called from arbitrary threads. This becomes more important with EDS
3.8, because it shifts the execution of synchronous calls into
threads.
Thread-safe logging will also be required for running the Synthesis
engine multithreaded, to overlap SyncML client communication with
preparing the sources.
To achieve this, the core Logging module protects its global data with
a recursive mutex. A recursive mutes is used because logging calls
themselves may be recursive, so ensuring single-lock semantic would be
hard.
Ref-counted boost pointers are used to track usage of Logger
instances. This allows removal of an instance from the logging stack
while it may still be in use. Destruction then will be delayed until
the last user of the instance drops it. The instance itself must be
prepared to handle this.
The Logging mutex is available to users of the Logging module. Code
which holds the logging mutex should not lock any other mutex, to
avoid deadlocks. The new code is a bit fuzzy on that, because it calls
other modules (glib, Synthesis engine) while holding the mutex. If
that becomes a problem, then the mutex can be unlocked, at the risk of
leading to reordered log messages in different channels (see
ServerLogger).
Making all loggers follow the new rules uses different
approaches.
Loggers like the one in the local transport child which use a parent
logger and an additional ref-counted class like the D-Bus helper keep
a weak reference to the helper and lock it before use. If it is gone
already, the second logging part is skipped. This is the recommended
approach.
In cases where introducing ref-counting for the second class would
have been too intrusive (Server and SessionHelper), a fake
boost::shared_ptr without a destructor is used as an intermediate step
towards the recommended approach. To avoid race conditions while the
instance these fake pointers refer to destructs, an explicit
"remove()" method is necessary which must hold the Logging
mutex. Using the potentially removed pointer must do the same. Such
fake ref-counted Loggers cannot be used as parent logger of other
loggers, because then remove() would not be able to drop the last
reference to the fake boost::shared_ptr.
Loggers with fake boost::shared_ptr must keep a strong reference,
because no-one else does. The goal is to turn this into weak
references eventually.
LogDir must protect concurrent access to m_report and the Synthesis
engine.
The LogRedirectLogger assumes that it is still the active logger while
disabling itself. The remove() callback method will always be invoked
before removing a logger from the stack.
2013-04-09 21:32:35 +02:00
public:

// Construct via make_weak_shared.
friend make_weak_shared;
/**
 * Finds previous log directories for the context. Reports errors via exceptions.
 *
 * @retval dirs     vector of full path names, oldest first
 */
void previousLogdirs(vector<string> &dirs) {
    dirs.clear();
    getLogdirs(dirs);
}

/**
 * Finds the previous log directory. Returns an empty string if anything went wrong.
 *
 * @return full path of the previous log directory, empty string if not found
 */
string previousLogdir() throw () {
    try {
        vector<string> dirs;
        previousLogdirs(dirs);
        return dirs.empty() ? "" : dirs.back();
    } catch (...) {
        Exception::handle();
        return "";
    }
}
/**
 * Sets the log dir and base name used for searching and creating sessions.
 * Default if not called is the getLogDir() value of the context.
 *
 * @param logdir    "none" to disable sessions, "" for the default; may contain
 *                  ${} references to environment variables
 */
void setLogdir(const string &logdir) {
    if (logdir.empty()) {
        return;
    }
    m_logdir = SubstEnvironment(logdir);
    m_logdir = boost::trim_right_copy_if(m_logdir, boost::is_any_of("/"));
    if (m_logdir == "none") {
        return;
    }

    // Resolve symbolic links in the path now, in case they change later
    // while the session runs. Relies on being allowed to pass nullptr. If
    // that's not allowed, we ignore the error and continue to use the
    // known path.
    errno = 0;
    m_logdir = RealPath(m_logdir);
    SE_LOG_DEBUG(NULL, "log path -> %s, %s",
                 m_logdir.c_str(),
                 errno ? strerror(errno) : "<okay>");

    // the config name has been normalized
    string peer = m_client.getConfigName();
    // escape "_" and "-" in the peer name
    peer = escapePeer(peer);
    if (boost::iends_with(m_logdir, "syncevolution")) {
        // use just the server name as prefix
        m_prefix = peer;
    } else {
        // SyncEvolution-<server>-<yyyy>-<mm>-<dd>-<hh>-<mm>
        m_prefix = DIR_PREFIX;
        m_prefix += peer;
    }
}
/**
 * access existing log directory to extract status information
 */
void openLogdir(const string &dir) {
    auto filenode = std::make_shared<IniFileConfigNode>(dir, "status.ini", true);
    m_info.reset(new SafeConfigNode(std::static_pointer_cast<ConfigNode>(filenode)));
    m_info->setMode(false);
    m_readonly = true;
}
/**
 * Gets the corresponding peer name encoded in the logging dir name.
 * The dir name must match the format (see startSession()). Otherwise
 * an empty string is returned.
 */
string getPeerNameFromLogdir(const string &dir) {
    // extract the dir name from the full path
    string iDirPath, iDirName;
    parseLogDir(dir, iDirPath, iDirName);
    // extract the peer name from the dir name
    string dirPrefix, peerName, dateTime;
    if (parseDirName(iDirName, dirPrefix, peerName, dateTime)) {
        return unescapePeer(peerName);
    }
    return "";
}
/**
 * read sync report for session selected with openLogdir()
 */
void readReport(SyncReport &report) {
    report.clear();
    if (!m_info) {
        return;
    }
    *m_info >> report;
}

/**
 * write sync report for current session
 */
void writeReport(SyncReport &report) {
    if (m_info) {
        *m_info << report;
        /* write in slightly different format and flush at the end */
        writeTimestamp("start", report.getStart(), false);
        writeTimestamp("end", report.getEnd(), true);
    }
}
enum SessionMode {
    SESSION_USE_PATH,   /**< write directly into path, don't create and manage subdirectories */
    SESSION_READ_ONLY,  /**< access an existing session directory identified with path */
    SESSION_CREATE      /**< create a new session directory inside the given path */
};
// setup log directory and redirect logging into it
// @param path       path to the configured backup directory, empty for using the default, "none" if not creating a log file
// @param mode       determines how path is interpreted and which session is accessed
// @param maxlogdirs number of backup dirs to preserve in path, 0 if unlimited
// @param logLevel   0 = default, 1 = ERROR, 2 = INFO, 3 = DEBUG
// @param report     record information about the session here (may be nullptr)
void startSession(const string &path, SessionMode mode, int maxlogdirs, int logLevel, SyncReport *report) {
    m_maxlogdirs = maxlogdirs;
    m_report = report;
    m_logfile = "";
    if (boost::iequals(path, "none")) {
        m_path = "";
    } else {
        setLogdir(path);
        SE_LOG_DEBUG(NULL, "checking log dir %s", m_logdir.c_str());
        if (mode == SESSION_CREATE) {
            // create a unique directory name in the given directory
            time_t ts = time(nullptr);
            struct tm tmbuffer;
            struct tm *tm = localtime_r(&ts, &tmbuffer);
            if (!tm) {
                SE_THROW("localtime_r() failed");
            }
            stringstream base;
            base << "-"
                 << setfill('0')
                 << setw(4) << tm->tm_year + 1900 << "-"
                 << setw(2) << tm->tm_mon + 1 << "-"
                 << setw(2) << tm->tm_mday << "-"
                 << setw(2) << tm->tm_hour << "-"
                 << setw(2) << tm->tm_min;
            // If other sessions, regardless of which peer, have
            // the same date and time, then append a sequence
            // number to ensure correct sorting. Solve this by
            // finding the maximum sequence number for any kind of
            // date and time. Backwards running clocks or changing the
            // local time will still screw up our ordering, though.
            typedef std::map<string, int> SeqMap_t;
            SeqMap_t dateTimes2Seq;
            if (isDir(m_logdir)) {
                ReadDir dir(m_logdir);
                for (const string &entry: dir) {
                    string dirPrefix, peerName, dateTime;
                    if (parseDirName(entry, dirPrefix, peerName, dateTime)) {
                        // dateTime = -2010-01-31-12-00[-rev]
                        size_t off = 0;
                        for (int i = 0; off != dateTime.npos && i < 5; i++) {
                            off = dateTime.find('-', off + 1);
                        }
                        int sequence;
                        if (off != dateTime.npos) {
                            sequence = dateTime[off + 1] - 'a';
                            dateTime.resize(off);
                        } else {
                            sequence = -1;
                        }
                        pair<SeqMap_t::iterator, bool> entry = dateTimes2Seq.insert(make_pair(dateTime, sequence));
                        if (sequence > entry.first->second) {
                            entry.first->second = sequence;
                        }
                    }
                }
            }
            stringstream path;
            path << base.str();
            auto it = dateTimes2Seq.find(path.str());
            if (it != dateTimes2Seq.end()) {
                path << "-" << (char)('a' + it->second + 1);
            }
            m_path = m_logdir + "/";
            m_path += m_prefix;
            m_path += path.str();
            mkdir_p(m_path);
        } else {
            m_path = m_logdir;
            if (mkdir(m_path.c_str(), S_IRWXU) &&
                errno != EEXIST) {
                SE_LOG_DEBUG(NULL, "%s: %s", m_path.c_str(), strerror(errno));
                Exception::throwError(SE_HERE, m_path, errno);
            }
        }
        m_logfile = m_path + "/" + LogfileBasename + ".html";
        SE_LOG_DEBUG(NULL, "logfile: %s", m_logfile.c_str());
    }
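The dateTime parsing step in startSession() above (skip five "-" separators to land on the optional revision suffix) can be isolated into a small standalone helper to make its behavior easier to check; `extractSequence` is an illustrative name, not part of the real code:

```cpp
#include <cassert>
#include <string>

// dateTime looks like "-2010-01-31-12-00" with an optional "-a".."-z"
// revision appended. Skipping five '-' separators lands on the revision,
// if there is one; the suffix is stripped and returned as 0-based index.
int extractSequence(std::string &dateTime) {
    size_t off = 0;
    for (int i = 0; off != std::string::npos && i < 5; i++) {
        off = dateTime.find('-', off + 1);
    }
    if (off != std::string::npos) {
        int sequence = dateTime[off + 1] - 'a';
        dateTime.resize(off);   // strip "-rev", keep plain date and time
        return sequence;
    }
    return -1;                  // no revision suffix
}
```

With this convention, the first session in a given minute gets no suffix (sequence -1), and later sessions continue with "-a", "-b", ..., which keeps lexicographic sorting equal to creation order.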
// update log level of default logger and our own replacement
    Logger::Level level;
    switch (logLevel) {
    case 0:
        // default for console output
        level = Logger::INFO;
        break;
    case 1:
        level = Logger::ERROR;
        break;
    case 2:
        level = Logger::INFO;
        break;
    default:
        if (m_logfile.empty() || getenv("SYNCEVOLUTION_DEBUG")) {
            // no log file or user wants to see everything:
            // print all information to the console
Logging: thread-safe
Logging must be thread-safe, because the glib log callback may be
called from arbitrary threads. This becomes more important with EDS
3.8, because it shifts the execution of synchronous calls into
threads.
Thread-safe logging will also be required for running the Synthesis
engine multithreaded, to overlap SyncML client communication with
preparing the sources.
To achieve this, the core Logging module protects its global data with
a recursive mutex. A recursive mutes is used because logging calls
themselves may be recursive, so ensuring single-lock semantic would be
hard.
Ref-counted boost pointers are used to track usage of Logger
instances. This allows removal of an instance from the logging stack
while it may still be in use. Destruction then will be delayed until
the last user of the instance drops it. The instance itself must be
prepared to handle this.
The Logging mutex is available to users of the Logging module. Code
which holds the logging mutex should not lock any other mutex, to
avoid deadlocks. The new code is a bit fuzzy on that, because it calls
other modules (glib, Synthesis engine) while holding the mutex. If
that becomes a problem, then the mutex can be unlocked, at the risk of
leading to reordered log messages in different channels (see
ServerLogger).
Making all loggers follow the new rules uses different
approaches.
Loggers like the one in the local transport child which use a parent
logger and an additional ref-counted class like the D-Bus helper keep
a weak reference to the helper and lock it before use. If it is gone
already, the second logging part is skipped. This is the recommended
approach.
In cases where introducing ref-counting for the second class would
have been too intrusive (Server and SessionHelper), a fake
boost::shared_ptr without a destructor is used as an intermediate step
towards the recommended approach. To avoid race conditions while the
instance these fake pointers refer to destructs, an explicit
"remove()" method is necessary which must hold the Logging
mutex. Using the potentially removed pointer must do the same. Such
fake ref-counted Loggers cannot be used as parent logger of other
loggers, because then remove() would not be able to drop the last
reference to the fake boost::shared_ptr.
Loggers with fake boost::shared_ptr must keep a strong reference,
because no-one else does. The goal is to turn this into weak
references eventually.
LogDir must protect concurrent access to m_report and the Synthesis
engine.
The LogRedirectLogger assumes that it is still the active logger while
disabling itself. The remove() callback method will always be invoked
before removing a logger from the stack.
                level = Logger::DEBUG;
} else {
// have log file: avoid excessive output to the console,
// full information is in the log file
                level = Logger::INFO;
            }
            break;
        }

        if (mode != SESSION_USE_PATH) {
            Logger::instance().setLevel(level);
        }
        auto logger = std::make_shared<LogDirLogger>(weak_from_this());
        logger->setLevel(level);
        m_logger.reset(logger);

        time_t start = time(nullptr);
        if (m_report) {
            m_report->setStart(start);
        }
        m_readonly = mode == SESSION_READ_ONLY;
        if (!m_path.empty()) {
            auto filenode = std::make_shared<IniFileConfigNode>(m_path, "status.ini", m_readonly);
            m_info.reset(new SafeConfigNode(std::static_pointer_cast<ConfigNode>(filenode)));
            m_info->setMode(false);
            if (mode != SESSION_READ_ONLY) {
                // Create a status.ini which contains an error.
                // Will be overwritten later on, unless we crash.
                m_info->setProperty("status", STATUS_DIED_PREMATURELY);
                m_info->setProperty("error", InitStateString("synchronization process died prematurely", true));
                writeTimestamp("start", start);
            }
        }
    }

    /** sets a fixed directory for database files without redirecting logging */
    void setPath(const string &path) { m_path = RealPath(path); SE_LOG_DEBUG(NULL, "setPath(%s) -> %s", path.c_str(), m_path.c_str()); }

    // return log directory, empty if not enabled
    const string &getLogdir() {
        return m_path;
    }

    // return log file, empty if not enabled
    const string &getLogfile() {
        return m_logfile;
    }
    /**
     * remove backup dir(s) if exceeding limit
     *
     * Assign a priority to each session dir, with lower
     * meaning "less important". Then sort by priority and (if
     * equal) creation time (aka index) in ascending
     * order. The sessions at the beginning of the sorted
     * vector are then removed first.
     *
     * DUMPS = any kind of database dump was made
     * ERROR = session failed
     * CHANGES = local data modified since previous dump (based on dumps
     *           of the current peer, for simplicity reasons),
     *           dump created for the first time,
     *           changes made during sync (detected with dumps and statistics)
     *
     * The goal is to preserve as many database dumps as possible
     * and ideally those where something happened.
     *
     * Some criteria veto the removal of a session:
     * - it is the only one holding a dump of a specific source
     * - it is the last session
     */
    void expire() {
        if (m_logdir.size() && m_maxlogdirs > 0) {
            vector<string> dirs;
            getLogdirs(dirs);

            /** stores priority and index in "dirs"; after sorting, delete from the start */
            vector<pair<Priority, size_t> > victims;
            /** maps from source name to list of information about dump, oldest first */
            typedef map<string, list<DumpInfo> > Dumps_t;
            Dumps_t dumps;
            for (size_t i = 0;
                 i < dirs.size();
                 i++) {
                bool changes = false;
                bool havedumps = false;
                bool errors = false;
                LogDir logdir(m_client);
                logdir.openLogdir(dirs[i]);
                SyncReport report;
                logdir.readReport(report);
                SyncMLStatus status = report.getStatus();
                if (status != STATUS_OK && status != STATUS_HTTP_OK) {
                    errors = true;
                }
                for (SyncReport::SourceReport_t source: report) {
                    string &sourcename = source.first;
                    SyncSourceReport &sourcereport = source.second;
                    list<DumpInfo> &dumplist = dumps[sourcename];
                    if (sourcereport.m_backupBefore.isAvailable() ||
                        sourcereport.m_backupAfter.isAvailable()) {
                        // yes, we have backup dumps
                        havedumps = true;
                        DumpInfo info(i,
                                      sourcereport.m_backupBefore.getNumItems(),
                                      sourcereport.m_backupAfter.getNumItems());
                        // now check for changes, if none found yet
                        if (!changes) {
                            if (dumplist.empty()) {
                                // new backup dump
                                changes = true;
                            } else {
                                DumpInfo &previous = dumplist.back();
                                changes =
                                    // item count changed -> items changed
                                    previous.m_itemsDumpedAfter != info.m_itemsDumpedBefore ||
                                    sourcereport.wasChanged(SyncSourceReport::ITEM_LOCAL) ||
                                    sourcereport.wasChanged(SyncSourceReport::ITEM_REMOTE) ||
                                    haveDifferentContent(sourcename,
                                                         dirs[previous.m_dirIndex], "after",
                                                         dirs[i], "before");
                            }
                        }
                        dumplist.push_back(info);
                    }
                }
                Priority pri =
                    havedumps ?
                    (changes ?
                     HAS_DUMPS_WITH_CHANGES :
                     errors ?
                     HAS_DUMPS_NO_CHANGES_WITH_ERRORS :
                     HAS_DUMPS_NO_CHANGES) :
                    (changes ?
                     NO_DUMPS_WITH_CHANGES :
                     errors ?
                     NO_DUMPS_WITH_ERRORS :
                     NO_DUMPS_NO_ERRORS);
                victims.push_back(make_pair(pri, i));
            }
            sort(victims.begin(), victims.end());
            int deleted = 0;
            for (size_t e = 0;
                 e < victims.size() && (int)dirs.size() - deleted > m_maxlogdirs;
                 ++e) {
                size_t index = victims[e].second;
                string &path = dirs[index];
                // preserve latest session
                if (index != dirs.size() - 1) {
                    bool mustkeep = false;
                    // also check whether it holds the only backup of a source
                    for (auto dump: dumps) {
                        if (dump.second.size() == 1 &&
                            dump.second.front().m_dirIndex == index) {
                            mustkeep = true;
                            break;
                        }
                    }
                    if (!mustkeep) {
                        SE_LOG_DEBUG(NULL, "removing %s", path.c_str());
                        rm_r(path);
                        ++deleted;
                    }
                }
            }
        }
    }

    // finalize session
    void endSession()
    {
        time_t end = time(nullptr);
        if (m_report) {
            m_report->setEnd(end);
        }
        if (m_info) {
            if (!m_readonly) {
                writeTimestamp("end", end);
                if (m_report) {
                    RecMutex::Guard guard = Logger::lock();
                    writeReport(*m_report);
                }
                m_info->flush();
            }
            m_info.reset();
        }
    }
// Remove redirection of logging.
    void restore() {
        m_logger.reset();
    }

    ~LogDir() {
        restore();
    }
#if 0
    /**
     * A quick check for level = SHOW text dumps whether the text dump
     * starts with the [ERROR] prefix; used to detect error messages
     * from forked process which go through this instance but are not
     * already tagged as error messages and thus would not show up as
     * "first error" in sync reports.
     *
     * Example for the problem:
     * [ERROR] onConnect not implemented [from child process]
     * [ERROR] child process quit with return code 1 [from parent]
     * ...
     * Changes applied during synchronization:
     * ...
     * First ERROR encountered: child process quit with return code 1
     */
    static bool isErrorString(const char *format,
                              va_list args)
    {
        const char *text;
        if (!strcmp(format, "%s")) {
            va_list argscopy;
            va_copy(argscopy, args);
            text = va_arg(argscopy, const char *);
            va_end(argscopy);
        } else {
            text = format;
        }
        return boost::starts_with(text, "[ERROR");
    }
#endif

    /**
     * Compare two database dumps just based on their inodes.
     * @return true if inodes differ
     */
    static bool haveDifferentContent(const string &sourceName,
                                     const string &firstDir,
                                     const string &firstSuffix,
                                     const string &secondDir,
                                     const string &secondSuffix)
    {
        string first = firstDir + "/" + sourceName + "." + firstSuffix;
        string second = secondDir + "/" + sourceName + "." + secondSuffix;
        ReadDir firstContent(first);
        ReadDir secondContent(second);
        set<ino_t> firstInodes;
        for (const string &name: firstContent) {
            struct stat buf;
            string fullpath = first + "/" + name;
            if (stat(fullpath.c_str(), &buf)) {
                Exception::throwError(SE_HERE, fullpath, errno);
            }
            firstInodes.insert(buf.st_ino);
        }
        for (const string &name: secondContent) {
            struct stat buf;
            string fullpath = second + "/" + name;
            if (stat(fullpath.c_str(), &buf)) {
                Exception::throwError(SE_HERE, fullpath, errno);
            }
            auto it = firstInodes.find(buf.st_ino);
            if (it == firstInodes.end()) {
                // second dir has different file
                return true;
            } else {
                firstInodes.erase(it);
            }
        }
        if (!firstInodes.empty()) {
            // first dir has different file
            return true;
        }
        // exact match of inodes
        return false;
    }

private:
    enum Priority {
        NO_DUMPS_NO_ERRORS,
        NO_DUMPS_WITH_ERRORS,
        NO_DUMPS_WITH_CHANGES,
        HAS_DUMPS_NO_CHANGES,
        HAS_DUMPS_NO_CHANGES_WITH_ERRORS,
        HAS_DUMPS_WITH_CHANGES
    };

    struct DumpInfo {
        size_t m_dirIndex;
        int m_itemsDumpedBefore;
        int m_itemsDumpedAfter;
        DumpInfo(size_t dirIndex,
                 int itemsDumpedBefore,
                 int itemsDumpedAfter) :
            m_dirIndex(dirIndex),
            m_itemsDumpedBefore(itemsDumpedBefore),
            m_itemsDumpedAfter(itemsDumpedAfter)
        {}
    };

    /**
     * Find all entries in a given directory, return as sorted array of full paths in ascending order.
     * If m_prefix doesn't contain peer name information, then all log dirs for different peers in the
     * logdir are returned.
     */
    void getLogdirs(vector<string> &dirs) {
        if (m_logdir != "none" && !isDir(m_logdir)) {
            return;
        }
        string peer = m_client.getConfigName();
        string peerName, context;
        SyncConfig::splitConfigString(peer, peerName, context);

        ReadDir dir(m_logdir);
        for (const string &entry: dir) {
            string tmpDirPrefix, tmpPeer, tmpDateTime;
            // if directory name could not be parsed, ignore it
            if (parseDirName(entry, tmpDirPrefix, tmpPeer, tmpDateTime)) {
                if (!peerName.empty() && (m_prefix == (tmpDirPrefix + tmpPeer))) {
                    // if peer name exists, match the logs for the given peer
                    dirs.push_back(m_logdir + "/" + entry);
                } else if (peerName.empty()) {
                    // if no peer name and only context, match for all logs under the given context
                    string tmpName, tmpContext;
                    SyncConfig::splitConfigString(unescapePeer(tmpPeer), tmpName, tmpContext);
                    if (context == tmpContext && boost::starts_with(m_prefix, tmpDirPrefix)) {
                        dirs.push_back(m_logdir + "/" + entry);
                    }
                }
            }
        }

        // sort vector in ascending order; without a peer name the dirs
        // span several peers, so use the custom LogDirNames comparison
        if (peerName.empty()) {
            sort(dirs.begin(), dirs.end(), LogDirNames());
        } else {
            sort(dirs.begin(), dirs.end());
        }
    }

    // store time stamp in session info
    void writeTimestamp(const string &key, time_t val, bool flush = true) {
        if (m_info) {
            char buffer[160];
            struct tm tmbuffer, *tm;
            // be nice and store a human-readable date in addition to the seconds since the epoch
            tm = localtime_r(&val, &tmbuffer);
            if (tm) {
                strftime(buffer, sizeof(buffer), "%s, %Y-%m-%d %H:%M:%S %z", tm);
            } else {
                // Less suitable fallback. Won't work correctly for 32
                // bit long beyond 2038.
                sprintf(buffer, "%lu", (long unsigned)val);
            }
            m_info->setProperty(key, buffer);
            if (flush) {
                m_info->flush();
            }
        }
    }
};

LogDirLogger::LogDirLogger(const std::weak_ptr<LogDir> &logdir) :
    m_parentLogger(Logger::instance()),
    m_logdir(logdir)
#ifdef USE_DLT
    , m_useDLT(getenv("SYNCEVOLUTION_USE_DLT") != nullptr)
#endif
{
}
void LogDirLogger::remove() throw ()
{
    // Forget reference to LogDir. This prevents accessing it in
    // future messagev() calls.
    RecMutex::Guard guard = Logger::lock();
    m_logdir.reset();
}
void LogDirLogger::messagev(const MessageOptions &options,
                            const char *format,
                            va_list args)
{
    // Protects ordering of log messages and access to shared
    // variables like m_report and m_engine.
    RecMutex::Guard guard = Logger::lock();

    // always to parent first (usually stdout):
    // if the parent is a LogRedirect instance, then
    // it'll flush its own output first, which ensures
    // that the new output comes later (as desired)
    va_list argscopy;
    va_copy(argscopy, args);
    m_parentLogger.messagev(options, format, argscopy);
    va_end(argscopy);
2013-10-01 09:26:41 +02:00
    // Special handling of our own messages: include in sync report
    // (always, because that is how we did it traditionally) and write
    // to our own syncevolution-log.html (if not already logged).
    //
    // The TestLocalSync.testServerFailure and some others check that
    // we record the child's error message in our sync report. If we
    // don't then it shows up later marked as an "error on the target
    // side", which is probably not what we want.
2018-01-16 17:17:34 +01:00
    std::shared_ptr<LogDir> logdir;
2013-10-01 09:26:41 +02:00
    if ((bool)(logdir = m_logdir.lock())) {
        if (logdir->m_report &&
            options.m_level <= ERROR &&
            logdir->m_report->getError().empty()) {
            va_list argscopy;
            va_copy(argscopy, args);
            string error = StringPrintfV(format, argscopy);
            va_end(argscopy);
            logdir->m_report->setError(error);
        }
2013-10-01 09:26:41 +02:00
        if (!(options.m_flags & MessageOptions::ALREADY_LOGGED) &&
#ifdef USE_DLT
            // Don't send to libsynthesis if using DLT,
            // because then it would end up getting logged
            // in DLT twice.
            !m_useDLT &&
#endif
            logdir->m_client.getEngine().get()) {
            va_list argscopy;
            va_copy(argscopy, args);
2013-06-10 22:23:23 +02:00
            // Once to Synthesis log, with full debugging.
            // The API does not support a process name, so
            // put it into the prefix as "<procname> <prefix>".
            std::string prefix;
            if (options.m_processName) {
                prefix += *options.m_processName;
            }
            if (options.m_prefix) {
                if (!prefix.empty()) {
                    prefix += " ";
                }
                prefix += *options.m_prefix;
            }
            logdir->m_client.getEngine().doDebug(options.m_level,
2018-01-30 17:00:24 +01:00
                                                 prefix.empty() ? nullptr : prefix.c_str(),
                                                 options.m_file,
                                                 options.m_line,
                                                 options.m_function,
                                                 format,
                                                 argscopy);
            va_end(argscopy);
        }
    }
}
2010-08-01 21:15:02 +02:00
const char * const LogDirNames::DIR_PREFIX = "SyncEvolution-";
2009-12-03 10:37:00 +01:00
SyncML server: delayed checking of sources (MB #7710)
With this patch, SyncML server sources are only opened() and their
data dumped when a client really uses them. As before, sources are
only enabled in the server if their sync mode is not "disabled". This
tolerates sources which cannot be instantiated because their "type" is
not supported.
The patch changes the SourceList and its methods so that they can do
the database dumps and comparisons for a single source at a
time. SourceList tracks which of its sources were dumped before the
sync and uses that information at the end to produce the "after sync"
comparison.
That "after sync" comparison was a reduced copy of the
dumpLocalChanges() source code. The copy was replaced with a suitably
parameterized call to dumpLocalChanges(), which became easy after
adding the "oldSession" parameter in a recent patch. That output now
is as follows:
-------------------------> snip <-----------------------------------
Changes applied during synchronization:
+---------------|-----------------------|-----------------------|-CON-+
| | LOCAL | REMOTE | FLI |
| Source | NEW | MOD | DEL | ERR | NEW | MOD | DEL | ERR | CTS |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| addressbook | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| two-way, 0 KB sent by client, 0 KB received |
| item(s) in database backup: 20 before sync, 20 after it |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| calendar | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| two-way, 0 KB sent by client, 0 KB received |
| item(s) in database backup: 20 before sync, 20 after it |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| start Wed Feb 10 16:38:15 2010, duration 0:02min |
| synchronization completed successfully |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
Data modified locally during sync:
*** addressbook ***
no changes
*** calendar ***
no changes
-------------------------> snip <-----------------------------------
Previously the last heading was "Changes applied to client during
synchronization", which is wrong for the server (it is not a
client) and did not properly distinguish between item and data
changes (items may be changed without affecting the set of data,
as in removing one item and adding it with the same content).
In a server, the "*** <source> ***" part is only printed for active
sources, whereas the table always contains all sources with sync mode
!= "disabled". If we had progress events for the server, it should be
more obvious that some sources were not really used during the
sync. Alternatively we could also remove them from the report.
Also fixed several other such "to server/client" messages. They were
written from the perspective of a client and were wrong when running
as server. Using "remotely" and "locally" instead works on both client
and server.
2010-02-10 17:47:24 +01:00
/**
2010-02-15 16:50:06 +01:00
 * This class owns the sync sources. For historic reasons (required
 * by Funambol) SyncSource instances are stored as plain pointers
 * deleted by this class. Virtual sync sources were added later
 * and are stored as shared pointers which are freed automatically.
 * It is possible to iterate over the two classes of sources
 * separately.
 *
 * The SourceList ensures that all sources (normal and virtual) have
 * a valid and unique integer ID as needed for Synthesis. Traditionally
 * this used to be a simple hash of the source name (which is unique
 * by design), without checking for hash collisions. Now the ID is assigned
 * the first time a source is added here and doesn't have one yet.
 * For backward compatibility (the ID is stored in the .synthesis dir),
 * the same Hash() value is tested first. Assuming that there were no
 * hash conflicts, the same IDs will be generated as before.
 *
 * Together with a logdir, the SourceList
 * handles writing of per-sync files as well as the final report.
 * It is not stateless. The expectation is that it is instantiated
 * together with a SyncContext for one particular operation (sync
 * session, status check, restore). In contrast to a SyncContext,
 * this class has to be recreated for another operation.
 *
 * When running as client, only the active sources get added. They can
 * be dumped one after the other before running a sync.
 *
 * As a server, all sources get added, regardless whether they are
 * active. This implies that at least their "type" must be valid. Then
 * later when a client really starts using them, they are opened() and
 * database dumps are made.
 *
2010-02-15 16:50:06 +01:00
 * Virtual datastores are stored here when they get initialized
 * together with the normal sources by the user of SourceList.
*
*
*/
2010-02-15 16:50:06 +01:00
class SourceList : private vector<SyncSource *> {
    typedef vector<SyncSource *> inherited;
2009-04-21 11:22:32 +02:00
 public:
    enum LogLevel {
        LOGGING_QUIET,   /**< avoid all extra output */
        LOGGING_SUMMARY, /**< sync report, but no database comparison */
        LOGGING_FULL     /**< everything */
    };
2018-01-16 17:17:34 +01:00
    typedef std::vector<std::shared_ptr<VirtualSyncSource>> VirtualSyncSources_t;
2010-02-15 16:50:06 +01:00
/** reading our set of virtual sources is okay, modifying it is not */
    const VirtualSyncSources_t &getVirtualSources() { return m_virtualSources; }
2018-01-16 17:17:34 +01:00
    void addSource(const std::shared_ptr<VirtualSyncSource> &source) { checkSource(source.get()); m_virtualSources.push_back(source); }
2010-02-15 16:50:06 +01:00
    using inherited::iterator;
    using inherited::const_iterator;
    using inherited::empty;
    using inherited::begin;
    using inherited::end;
    using inherited::rbegin;
    using inherited::rend;
SyncSource: optional support for asynchronous insert/update/delete
The wrapper around the actual operation checks if the operation
returned an error or result code (traditional behavior). If not, it
expects a ContinueOperation instance, remembers it and calls it when
the same operation gets called again for the same item.
For add/insert, "same item" is detected based on the KeyH address,
which must not change. For delete, the item local ID is used.
Pre- and post-signals are called exactly once, before the first call
and after the last call of the item.
ContinueOperation is a simple boost::function pointer for now. The
Synthesis engine itself is not able to force completion of the
operation, it just polls. This can lead to many empty messages with
just an Alert inside, thus triggering the "endless loop" protection,
which aborts the sync.
We overcome this limitation in the SyncEvolution layer above the
Synthesis engine: first, we flush pending operations before starting
network IO. This is a good place to batch together all pending
operations. Second, to overcome the "endless loop" problem, we force
a waiting for completion if the last message already was empty. If
that happened, we are done with items and should start sending our
responses.
Binding a function which returns the traditional TSyError still works
because it gets copied transparently into the boost::variant that the
wrapper expects, so no other code in SyncSource or backends needs to
be adapted. Enabling the use of LOCERR_AGAIN in the utility classes
and backends will follow in the next patches.
2013-06-05 17:22:00 +02:00
    using inherited::size;
2010-02-15 16:50:06 +01:00
2018-01-29 16:45:25 +01:00
/** transfers ownership */
    void addSource(std::unique_ptr<SyncSource> source) { checkSource(source.get()); push_back(source.release()); }
2010-02-15 16:50:06 +01:00
2009-04-21 11:22:32 +02:00
 private:
2010-02-15 16:50:06 +01:00
    VirtualSyncSources_t m_virtualSources; /**< all configured virtual data sources (aka Synthesis <superdatastore>) */
2018-01-16 17:17:34 +01:00
    std::shared_ptr<LogDir> m_logdir; /**< our logging directory */
2010-01-21 11:58:57 +01:00
    SyncContext &m_client; /**< the context in which we were instantiated */
    set<string> m_prepared; /**< remember for which sources we dumped databases successfully */
    string m_intro;         /**< remembers the dumpLocalChanges() intro and only prints it again
                                 when different from the last dumpLocalChanges() call */
2009-02-19 10:52:35 +01:00
    bool m_doLogging;       /**< true iff the normal logdir handling is enabled
                                 (creating and expiring directories, before/after comparison) */
2006-03-19 22:37:30 +01:00
    bool m_reportTodo;      /**< true if syncDone() shall print a final report */
2009-04-21 11:22:32 +02:00
    LogLevel m_logLevel;    /**< chooses how much information is printed */
2007-11-08 22:22:52 +01:00
    string m_previousLogdir; /**< remember previous log dir before creating the new one */
2006-03-19 22:37:30 +01:00
2007-11-08 22:22:52 +01:00
/** create name in current (if set) or previous logdir */
2014-01-07 10:16:05 +01:00
string databaseName(SyncSource &source, const string &suffix, string logdir = "") {
2007-11-08 22:22:52 +01:00
    if (!logdir.size()) {
Logging: thread-safe
Logging must be thread-safe, because the glib log callback may be
called from arbitrary threads. This becomes more important with EDS
3.8, because it shifts the execution of synchronous calls into
threads.
Thread-safe logging will also be required for running the Synthesis
engine multithreaded, to overlap SyncML client communication with
preparing the sources.
To achieve this, the core Logging module protects its global data with
a recursive mutex. A recursive mutex is used because logging calls
themselves may be recursive, so ensuring single-lock semantics would be
hard.
Ref-counted boost pointers are used to track usage of Logger
instances. This allows removal of an instance from the logging stack
while it may still be in use. Destruction then will be delayed until
the last user of the instance drops it. The instance itself must be
prepared to handle this.
The Logging mutex is available to users of the Logging module. Code
which holds the logging mutex should not lock any other mutex, to
avoid deadlocks. The new code is a bit fuzzy on that, because it calls
other modules (glib, Synthesis engine) while holding the mutex. If
that becomes a problem, then the mutex can be unlocked, at the risk of
leading to reordered log messages in different channels (see
ServerLogger).
Making all loggers follow the new rules uses different
approaches.
Loggers like the one in the local transport child which use a parent
logger and an additional ref-counted class like the D-Bus helper keep
a weak reference to the helper and lock it before use. If it is gone
already, the second logging part is skipped. This is the recommended
approach.
In cases where introducing ref-counting for the second class would
have been too intrusive (Server and SessionHelper), a fake
boost::shared_ptr without a destructor is used as an intermediate step
towards the recommended approach. To avoid race conditions while the
instance these fake pointers refer to destructs, an explicit
"remove()" method is necessary which must hold the Logging
mutex. Using the potentially removed pointer must do the same. Such
fake ref-counted Loggers cannot be used as parent logger of other
loggers, because then remove() would not be able to drop the last
reference to the fake boost::shared_ptr.
Loggers with fake boost::shared_ptr must keep a strong reference,
because no-one else does. The goal is to turn this into weak
references eventually.
LogDir must protect concurrent access to m_report and the Synthesis
engine.
The LogRedirectLogger assumes that it is still the active logger while
disabling itself. The remove() callback method will always be invoked
before removing a logger from the stack.
2013-04-09 21:32:35 +02:00
        logdir = m_logdir->getLogdir();
2007-11-08 22:22:52 +01:00
}
    return logdir + "/" +
2009-04-22 17:53:04 +02:00
        source.getName() + "." + suffix;
2006-03-19 22:37:30 +01:00
}
2010-02-15 16:50:06 +01:00
/** ensure that Synthesis ID is set and unique */
void checkSource(SyncSource *source) {
    if (source->getSynthesisID()) {
        return;
    }
    int id = Hash(source->getName()) % INT_MAX;
    while (true) {
        // avoid negative values
        if (id < 0) {
            id = -id;
        }
        // avoid zero, it means unset
        if (!id) {
            id = 1;
        }
        // check for collisions
        bool collision = false;
2018-01-16 10:58:04 +01:00
        for (const string &other: m_client.getSyncSources()) {
2018-01-16 17:17:34 +01:00
            std::shared_ptr<PersistentSyncSourceConfig> sc(m_client.getSyncSourceConfig(other));
2010-02-15 16:50:06 +01:00
            int other_id = sc->getSynthesisID();
            if (other_id == id) {
                ++id;
                collision = true;
                break;
            }
        }
        if (!collision) {
            source->setSynthesisID(id);
            return;
        }
    }
}
2007-11-08 22:22:52 +01:00
public:
2012-02-13 10:37:22 +01:00
/** allow iterating over sources */
const inherited *getSourceSet() const { return this; }
2009-04-21 11:22:32 +02:00
LogLevel getLogLevel() const { return m_logLevel; }
void setLogLevel(LogLevel logLevel) { m_logLevel = logLevel; }
2007-03-23 22:00:32 +01:00
/**
* Dump into files with a certain suffix, optionally store report
* in member of SyncSourceReport. Remembers which sources were
* dumped before a sync and only dumps those again afterward.
*
* @param suffix         "before/after/current" - before sync, after sync, during status check
* @param excludeSource  when not empty, only dump that source
2007-03-23 22:00:32 +01:00
*/
2009-04-22 17:53:04 +02:00
void dumpDatabases(const string &suffix,
                   BackupReport SyncSourceReport::*report,
                   const string &excludeSource = "") {
2010-02-05 19:17:44 +01:00
// Identify all logdirs of current context, of any peer. Used
// to search for previous backups of each source, if
// necessary.
2010-10-25 10:42:02 +02:00
    SyncContext context(m_client.getContextName());
2018-01-16 17:17:34 +01:00
    auto logdir = make_weak_shared::make<LogDir>(context);
2010-02-05 19:17:44 +01:00
    vector<string> dirs;
    logdir->previousLogdirs(dirs);
2010-02-05 19:17:44 +01:00
2018-01-16 10:58:04 +01:00
    for (SyncSource *source: *this) {
        if ((!excludeSource.empty() && excludeSource != source->getName()) ||
            (suffix == "after" && m_prepared.find(source->getName()) == m_prepared.end())) {
            continue;
        }
2009-04-22 17:53:04 +02:00
        string dir = databaseName(*source, suffix);
2018-01-16 17:17:34 +01:00
        std::shared_ptr<ConfigNode> node = ConfigNode::createFileNode(dir + ".ini");
2013-04-08 19:17:36 +02:00
        SE_LOG_DEBUG(NULL, "creating %s", dir.c_str());
2009-04-23 16:47:07 +02:00
        rm_r(dir);
2009-04-22 17:53:04 +02:00
        BackupReport dummy;
redesigned SyncSource base class + API
The main motivation for this change is that it allows the implementor
of a backend to choose the implementations for the different aspects
of a datasource (change tracking, item import/export, logging, ...)
independently of each other. For example, change tracking via revision
strings can now be combined with exchanging data with the Synthesis
engine via a single string (the traditional method in SyncEvolution)
and with direct access to the Synthesis field list (now possible for
the first time).
The new backend API is based on the concept of providing
implementations for certain functionality via function objects instead
of implementing certain virtual methods. The advantage is that
implementors can define their own, custom interfaces and mix and match
implementations of the different groups of functionality.
Logging (see SyncSourceLogging in a later commit) can be done by
wrapping some arbitrary other item import/export function objects
(decorator design pattern).
The class hierarchy is now this:
- SyncSourceBase: interface for common utility code, all other
classes are derived from it and thus can use that code
- SyncSource: base class which implements SyncSourceBase and
hooks a datasource into the SyncEvolution core;
its "struct Operations" holds the function objects which
can be implemented in different ways
- TestingSyncSource: combines some of the following classes
into an interface that is expected by the client-test
program; backends only have to derive from (and implement this)
if they want to use the automated testing
- TrackingSyncSource: provides the same functionality as
before (change tracking via revision strings, item import/export
as string) in a single interface; the description of the pure
virtual methods are duplicated so that developers can go through
this class and find everything they need to know to implement
it
The following classes contain the code that was previously
found in the EvolutionSyncSource base class. Implementors
can derive from them and call the init() methods to inherit
and activate the functionality:
- SyncSourceSession: binds Synthesis session callbacks to
virtual methods beginSync(), endSync()
- SyncSourceChanges: implements Synthesis item tracking callbacks
with set of LUIDs that the user of the class has to fill
- SyncSourceDelete: binds Synthesis delete callback to
virtual method
- SyncSourceRaw: read and write items in the backend's format,
used for testing and backup/restore
- SyncSourceSerialize: exchanges items with Synthesis engine
using a string representation of the data; this is how
EvolutionSyncSource has traditionally worked, so many of the
same virtual methods are now in this class
- SyncSourceRevisions: utility class which does change tracking
via some kind of "revision" string which changes each time
an item is modified; this code was previously in the
TrackingSyncSource
2009-08-25 09:27:46 +02:00
        if (source->getOperations().m_backupData) {
2010-02-05 19:17:44 +01:00
            SyncSource::Operations::ConstBackupInfo oldBackup;
// Now look for a backup of the current source,
// starting with the most recent one.
2018-01-16 10:58:04 +01:00
            for (const string &sessiondir: reverse(dirs)) {
2010-02-05 19:17:44 +01:00
                string oldBackupDir;
SyncSourceRevisions: cache result of listAllItems() (MB #7708)
When automatic backups are enabled (the default), SyncSourceRevisions
and derived classes like TrackingSyncSource called listAllItems()
twice, once during backup and once while checking for changes.
This patch introduces caching of the result returned by the first
call. During a normal session, that will be during backup, with
change detection reusing the information. If backups are off, the
call will happen during change detection, as before.
Sharing of the listAllItems() result with change tracking only
works for the backup that is created at the start of a sync.
This piece of information is passed to the backend as part of
the BackupInfo structure.
The advantages of this change are:
- more efficient, in particular for sources where listAllItems() is slow
- avoids a race condition (backup stores one set of items, changes
are made, change detection works with another set)
That race condition is both fairly theoretical (short time window) and
had no impact on the correctness of the sync, only on the output (data
comparison would not show all changes synced later).
Now the race condition still exists if changes are made while a sync
runs, which is the bigger problem that still needs to be solved (for EDS,
see MB #3479).
Restoring data also calls listAllItems(). It does not use the cached
information, just in case it is called more than once per
instance, and because there is no benefit.
2010-02-16 17:43:41 +01:00
                SyncSource::Operations::BackupInfo::Mode mode =
                    SyncSource::Operations::BackupInfo::BACKUP_AFTER;
2010-02-05 19:17:44 +01:00
                oldBackupDir = databaseName(*source, "after", sessiondir);
                if (!isDir(oldBackupDir)) {
                    mode = SyncSource::Operations::BackupInfo::BACKUP_BEFORE;
2010-02-05 19:17:44 +01:00
                    oldBackupDir = databaseName(*source, "before", sessiondir);
                    if (!isDir(oldBackupDir)) {
                        // try next session
                        continue;
                    }
}
                oldBackup.m_mode = mode;
2010-02-05 19:17:44 +01:00
                oldBackup.m_dirname = oldBackupDir;
                oldBackup.m_node = ConfigNode::createFileNode(oldBackupDir + ".ini");
                break;
}
            mkdir_p(dir);
            SyncSource::Operations::BackupInfo newBackup(suffix == "before" ?
                                                         SyncSource::Operations::BackupInfo::BACKUP_BEFORE :
                                                         suffix == "after" ?
                                                         SyncSource::Operations::BackupInfo::BACKUP_AFTER :
                                                         SyncSource::Operations::BackupInfo::BACKUP_OTHER,
                                                         dir, node);
            source->getOperations().m_backupData(oldBackup, newBackup,
                                                 report ? source->*report : dummy);
2013-04-08 19:17:36 +02:00
            SE_LOG_DEBUG(NULL, "%s created", dir.c_str());
// remember that we have dumped at the beginning of a sync
if (suffix == "before") {
    m_prepared.insert(source->getName());
}
}
2006-03-19 22:37:30 +01:00
}
}
2007-11-08 22:22:52 +01:00
void restoreDatabase(SyncSource &source, const string &suffix, bool dryrun, SyncSourceReport &report)
2009-04-23 16:47:07 +02:00
{
string dir = databaseName(source, suffix);
2018-01-16 17:17:34 +01:00
std::shared_ptr<ConfigNode> node = ConfigNode::createFileNode(dir + ".ini");
2009-04-23 16:47:07 +02:00
if (!node->exists()) {
2014-04-02 14:57:56 +02:00
Exception::throwError(SE_HERE, dir + ": no such database backup found");
2009-04-23 16:47:07 +02:00
}
if (source.getOperations().m_restoreData) {
SyncSourceRevisions: cache result of listAllItems() (MB #7708)
When automatic backups are enabled (the default), SyncSourceRevisions
and derived classes like TrackingSyncSource called listAllItems()
twice, once during backup and once while checking for changes.
This patch introduces caching of the result returned by the first
call. During a normal session, that will be during backup, with
change detection reusing the information. If backups are off, the
call will happen during change detection, as before.
Sharing of the listAllItems() result with change tracking only
works for the backup that is created at the start of a sync.
This piece of information is passed to the backend as part of
the BackupInfo structure.
The advantages of this change are:
- more efficient, in particular for sources where listAllItems() is slow
- avoids a race condition (backup stores one set of items, changes
are made, change detection works with another set)
That race condition is both fairly theoretical (short time window) and
had no impact on the correctness of the sync, only on the output (data
comparison would not show all changes synced later).
Now the race condition still exists if changes are made while a sync
runs, which is the bigger problem that still needs to be solved (for EDS,
see MB #3479).
Restoring data also calls listAllItems(). It does not use the cached
information, in case it gets called more than once per instance, and
because caching would bring no benefit there.
2010-02-16 17:43:41 +01:00
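The caching described in that commit can be sketched as follows. This is a hedged simplification, not the real SyncSourceRevisions code; RevisionCache and listAllItemsSlow() are hypothetical names standing in for the expensive enumeration:

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>

class RevisionCache {
public:
    using RevisionMap = std::map<std::string, std::string>; // luid -> revision

    // expensive enumeration of all items; calls are counted so the
    // effect of caching is observable
    RevisionMap listAllItemsSlow() {
        ++m_calls;
        return {{"item-1", "rev-a"}, {"item-2", "rev-b"}};
    }

    // cached front end: the backup at the start of a sync and the
    // subsequent change detection share a single scan
    const RevisionMap &listAllItems() {
        if (!m_cache) {
            m_cache = listAllItemsSlow();
        }
        return *m_cache;
    }

    int calls() const { return m_calls; }

private:
    std::optional<RevisionMap> m_cache;
    int m_calls = 0;
};
```

Two consecutive listAllItems() calls trigger only one slow scan, which also closes the window for the race condition mentioned above (backup and change detection now see the same item set).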
source.getOperations().m_restoreData(SyncSource::Operations::ConstBackupInfo(SyncSource::Operations::BackupInfo::BACKUP_OTHER, dir, node),
2010-02-05 17:57:03 +01:00
dryrun, report);
}
2009-04-23 16:47:07 +02:00
}
2009-10-05 14:49:32 +02:00
SourceList(SyncContext &client, bool doLogging) :
2018-01-16 17:17:34 +01:00
m_logdir(make_weak_shared::make<LogDir>(client)),
2010-01-21 11:58:57 +01:00
m_client(client),
2006-03-19 22:37:30 +01:00
m_doLogging(doLogging),
2007-11-08 22:22:52 +01:00
m_reportTodo(true),
2009-04-21 11:22:32 +02:00
m_logLevel(LOGGING_FULL)
2007-11-08 22:22:52 +01:00
{
2006-03-19 22:37:30 +01:00
}
// call as soon as logdir settings are known
2011-01-18 15:07:46 +01:00
void startSession(const string &logDirPath, int maxlogdirs, int logLevel, SyncReport *report) {
Logging: thread-safe
Logging must be thread-safe, because the glib log callback may be
called from arbitrary threads. This becomes more important with EDS
3.8, because it shifts the execution of synchronous calls into
threads.
Thread-safe logging will also be required for running the Synthesis
engine multithreaded, to overlap SyncML client communication with
preparing the sources.
To achieve this, the core Logging module protects its global data with
a recursive mutex. A recursive mutex is used because logging calls
themselves may be recursive, so ensuring single-lock semantics would be
hard.
Ref-counted boost pointers are used to track usage of Logger
instances. This allows removal of an instance from the logging stack
while it may still be in use. Destruction then will be delayed until
the last user of the instance drops it. The instance itself must be
prepared to handle this.
The Logging mutex is available to users of the Logging module. Code
which holds the logging mutex should not lock any other mutex, to
avoid deadlocks. The new code is a bit fuzzy on that, because it calls
other modules (glib, Synthesis engine) while holding the mutex. If
that becomes a problem, then the mutex can be unlocked, at the risk of
leading to reordered log messages in different channels (see
ServerLogger).
Making all loggers follow the new rules uses different
approaches.
Loggers like the one in the local transport child which use a parent
logger and an additional ref-counted class like the D-Bus helper keep
a weak reference to the helper and lock it before use. If it is gone
already, the second logging part is skipped. This is the recommended
approach.
In cases where introducing ref-counting for the second class would
have been too intrusive (Server and SessionHelper), a fake
boost::shared_ptr without a destructor is used as an intermediate step
towards the recommended approach. To avoid race conditions while the
instance these fake pointers refer to destructs, an explicit
"remove()" method is necessary which must hold the Logging
mutex. Using the potentially removed pointer must do the same. Such
fake ref-counted Loggers cannot be used as parent logger of other
loggers, because then remove() would not be able to drop the last
reference to the fake boost::shared_ptr.
Loggers with fake boost::shared_ptr must keep a strong reference,
because no-one else does. The goal is to turn this into weak
references eventually.
LogDir must protect concurrent access to m_report and the Synthesis
engine.
The LogRedirectLogger assumes that it is still the active logger while
disabling itself. The remove() callback method will always be invoked
before removing a logger from the stack.
2013-04-09 21:32:35 +02:00
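The locking and ref-counting scheme from that commit can be sketched like this. A hedged simplification, not the real Logging module (MiniLogger, LoggerStack and CountingLogger are hypothetical): a recursive mutex guards a stack of ref-counted loggers, so an instance can be popped while another thread still holds a shared_ptr to it, and destruction is delayed until the last user drops it:

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <string>
#include <vector>

class MiniLogger {
public:
    virtual ~MiniLogger() {}
    virtual void message(const std::string &text) = 0;
};

class LoggerStack {
public:
    void push(const std::shared_ptr<MiniLogger> &logger) {
        std::lock_guard<std::recursive_mutex> guard(m_mutex);
        m_stack.push_back(logger);
    }
    void pop() {
        std::lock_guard<std::recursive_mutex> guard(m_mutex);
        // a caller elsewhere may still own a shared_ptr; the instance
        // is destroyed only when the last reference drops
        m_stack.pop_back();
    }
    void log(const std::string &text) {
        // recursive mutex: message() may itself log without deadlocking
        std::lock_guard<std::recursive_mutex> guard(m_mutex);
        if (!m_stack.empty()) {
            m_stack.back()->message(text);
        }
    }
private:
    std::recursive_mutex m_mutex;
    std::vector<std::shared_ptr<MiniLogger>> m_stack;
};

// trivial logger used to observe delivery
struct CountingLogger : MiniLogger {
    int count = 0;
    void message(const std::string &) override { ++count; }
};
```

Code holding this mutex should not take other locks, as the commit message warns, otherwise lock-order deadlocks become possible.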
m_logdir->setLogdir(logDirPath);
m_previousLogdir = m_logdir->previousLogdir();
2006-03-19 22:37:30 +01:00
if (m_doLogging) {
m_logdir->startSession(logDirPath, LogDir::SESSION_CREATE, maxlogdirs, logLevel, report);
2006-04-06 19:02:43 +02:00
} else {
2009-02-19 10:52:35 +01:00
// Run debug session without paying attention to
// the normal logdir handling. The log level here
// refers to stdout. The log file will be as complete
// as possible.
m_logdir->startSession(logDirPath, LogDir::SESSION_USE_PATH, 0, 1, report);
2006-04-06 19:02:43 +02:00
}
2006-03-19 22:37:30 +01:00
}
2010-03-01 15:34:26 +01:00
/** read-only access to existing session, identified in logDirPath */
2011-01-18 15:07:46 +01:00
void accessSession(const string &logDirPath) {
m_logdir->setLogdir(logDirPath);
m_previousLogdir = m_logdir->previousLogdir();
2018-01-30 17:00:24 +01:00
m_logdir->startSession(logDirPath, LogDir::SESSION_READ_ONLY, 0, 0, nullptr);
2010-03-01 15:34:26 +01:00
}
2009-01-18 22:14:24 +01:00
/** return log directory, empty if not enabled */
const string &getLogdir() {
return m_logdir->getLogdir();
2009-01-18 22:14:24 +01:00
}
2009-04-16 09:26:14 +02:00
/** return previous log dir found in startSession() */
2007-11-08 22:22:52 +01:00
const string &getPrevLogdir() const { return m_previousLogdir; }
/** set directory for database files without actually redirecting the logging */
2013-04-09 21:32:35 +02:00
void setPath(const string &path) { m_logdir->setPath(path); }
2007-11-08 22:22:52 +01:00
/**
2009-04-23 16:47:07 +02:00
* If possible (directory to compare against is available) and enabled,
2007-11-08 22:22:52 +01:00
* then dump changes applied locally.
*
2010-02-08 10:16:36 +01:00
* @param oldSession directory to compare against; "" searches in sessions of current peer
*                   as selected by context for the latest one involving each source
2007-11-08 22:22:52 +01:00
* @param oldSuffix  suffix of old database dump: usually "after"
* @param currentSuffix the current database dump suffix: "current"
*                   when not doing a sync, otherwise "before"
SyncML server: delayed checking of sources (MB #7710)
With this patch, SyncML server sources are only opened() and their
data dumped when a client really uses them. As before, sources are
only enabled in the server if their sync mode is not "disabled". This
tolerates sources which cannot be instantiated because their "type" is
not supported.
The patch changes the SourceList and its methods so that they can do
the database dumps and comparisons for a single source at a
time. SourceList tracks which of its sources were dumped before the
sync and uses that information at the end to produce the "after sync"
comparison.
That "after sync" comparison was a reduced copy of the
dumpLocalChanges() source code. The copy was replaced with a suitably
parameterized call to dumpLocalChanges(), which became easy after
adding the "oldSession" parameter in a recent patch. That output now
is as follows:
-------------------------> snip <-----------------------------------
Changes applied during synchronization:
+---------------|-----------------------|-----------------------|-CON-+
| | LOCAL | REMOTE | FLI |
| Source | NEW | MOD | DEL | ERR | NEW | MOD | DEL | ERR | CTS |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| addressbook | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| two-way, 0 KB sent by client, 0 KB received |
| item(s) in database backup: 20 before sync, 20 after it |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| calendar | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| two-way, 0 KB sent by client, 0 KB received |
| item(s) in database backup: 20 before sync, 20 after it |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| start Wed Feb 10 16:38:15 2010, duration 0:02min |
| synchronization completed successfully |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
Data modified locally during sync:
*** addressbook ***
no changes
*** calendar ***
no changes
-------------------------> snip <-----------------------------------
Previously the last heading was "Changes applied to client during
synchronization", which is wrong for the server (it is not a
client) and did not properly distinguish between item and data
changes (items may be changed without affecting the set of data,
as in removing one item and adding it with the same content).
In a server, the "*** <source> ***" part is only printed for active
sources, whereas the table always contains all sources with sync mode
!= "disabled". If we had progress events for the server, it should be
more obvious that some sources were not really used during the
sync. Alternatively we could also remove them from the report.
Also fixed several other such "to server/client" messages. They were
written from the perspective of a client and were wrong when running
as server. Using "remotely" and "locally" instead works on both client
and server.
2010-02-10 17:47:24 +01:00
* @param excludeSource when not empty, only dump that source
2007-11-08 22:22:52 +01:00
*/
2010-02-08 10:16:36 +01:00
bool dumpLocalChanges(const string &oldSession,
2009-04-23 16:47:07 +02:00
const string &oldSuffix, const string &newSuffix,
2010-02-10 17:47:24 +01:00
const string &excludeSource,
const string &intro = "Local data changes to be applied remotely during synchronization:\n",
2009-04-23 16:47:07 +02:00
const string &config = "CLIENT_TEST_LEFT_NAME='after last sync' CLIENT_TEST_RIGHT_NAME='current data' CLIENT_TEST_REMOVED='removed since last sync' CLIENT_TEST_ADDED='added since last sync'") {
2010-02-08 10:16:36 +01:00
if (m_logLevel <= LOGGING_SUMMARY) {
2007-11-08 22:22:52 +01:00
return false;
}
2010-02-08 10:16:36 +01:00
vector<string> dirs;
if (oldSession.empty()) {
2013-04-09 21:32:35 +02:00
m_logdir->previousLogdirs(dirs);
2010-02-08 10:16:36 +01:00
}
2018-01-16 10:58:04 +01:00
for (SyncSource *source: *this) {
2010-02-10 17:47:24 +01:00
if ((!excludeSource.empty() && excludeSource != source->getName()) ||
    (newSuffix == "after" && m_prepared.find(source->getName()) == m_prepared.end())) {
    continue;
}
// dump only if not done before or changed
if (m_intro != intro) {
2013-04-08 19:17:36 +02:00
SE_LOG_SHOW(NULL, "%s", intro.c_str());
2010-02-10 17:47:24 +01:00
m_intro = intro;
}
2010-02-08 10:16:36 +01:00
string oldDir;
if (oldSession.empty()) {
// Now look for the latest session involving the current source,
// starting with the most recent one.
2018-01-16 10:58:04 +01:00
for (const string &sessiondir: reverse(dirs)) {
2018-01-16 17:17:34 +01:00
auto oldsession = make_weak_shared::make<LogDir>(m_client);
2013-04-09 21:32:35 +02:00
oldsession->openLogdir(sessiondir);
2010-02-08 10:16:36 +01:00
SyncReport report;
2013-04-09 21:32:35 +02:00
oldsession->readReport(report);
2010-02-08 10:16:36 +01:00
if (report.find(source->getName()) != report.end()) {
    // source was active in that session, use dump
    // made there
    oldDir = databaseName(*source, oldSuffix, sessiondir);
    break;
}
}
} else {
oldDir = databaseName(*source, oldSuffix, oldSession);
}
string newDir = databaseName(*source, newSuffix);
2013-04-08 19:17:36 +02:00
SE_LOG_SHOW(NULL, "*** %s ***", source->getDisplayName().c_str());
2010-03-31 20:10:38 +02:00
string cmd = string("env CLIENT_TEST_COMPARISON_FAILED=10 " + config + " synccompare '") +
2010-02-08 10:16:36 +01:00
oldDir + "' '" + newDir + "'";
2010-03-31 20:10:38 +02:00
int ret = Execute(cmd, EXECUTE_NO_STDERR);
switch (ret == -1 ? ret :
        WIFEXITED(ret) ? WEXITSTATUS(ret) :
        -1) {
2007-11-08 22:22:52 +01:00
case 0:
2013-04-08 19:17:36 +02:00
SE_LOG_SHOW(NULL, "no changes");
2007-11-08 22:22:52 +01:00
break;
case 10:
    break;
default:
2013-04-08 19:17:36 +02:00
SE_LOG_SHOW(NULL, "Comparison was impossible.");
2007-11-08 22:22:52 +01:00
break;
}
}
2013-04-08 19:17:36 +02:00
SE_LOG_SHOW(NULL, "\n");
2007-11-08 22:22:52 +01:00
return true;
}
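The switch above has to unpack a raw process status before comparing it against the two meaningful exit codes: synccompare runs with CLIENT_TEST_COMPARISON_FAILED=10, so exit 0 means "no changes", exit 10 means "differences found and already shown", and everything else (command not runnable, killed by a signal, other exit codes) means the comparison was impossible. A standalone sketch of that decoding, using the POSIX wait-status macros (the helper name is illustrative):

```cpp
// Sketch of the exit-status decoding used in dumpLocalChanges():
// a raw status as returned by system()/waitpid() must be unpacked with
// WIFEXITED()/WEXITSTATUS() before comparing it against 0 or 10.
#include <sys/wait.h>

enum class CompareResult { NoChanges, Changes, Impossible };

// 'status' is a raw wait status; -1 means the command could not be run.
CompareResult decodeCompareStatus(int status)
{
    int code = (status == -1) ? -1 :
               WIFEXITED(status) ? WEXITSTATUS(status) :
               -1;                       // e.g. terminated by a signal
    switch (code) {
    case 0:  return CompareResult::NoChanges;
    case 10: return CompareResult::Changes;    // differences found and shown
    default: return CompareResult::Impossible; // comparison did not run
    }
}
```

Feeding the raw status through WEXITSTATUS is essential: on Linux an exit code of 10 arrives as the value 10 shifted into the high byte of the status word, so comparing the raw status directly against 10 would never match.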
2006-03-19 22:37:30 +01:00
// call when all sync sources are ready to dump
// pre-sync databases
2012-02-07 16:35:46 +01:00
// @param sourceName limit preparation to that source
void syncPrepare(const string &sourceName) {
    if (m_prepared.find(sourceName) != m_prepared.end()) {
        // data dump was already done (can happen when running multiple
        // SyncML sessions)
        return;
    }
2013-04-09 21:32:35 +02:00
if (m_logdir->getLogfile().size() &&
2012-01-09 18:33:39 +01:00
    m_doLogging &&
    (m_client.getDumpData() || m_client.getPrintChanges())) {
2006-03-19 22:37:30 +01:00
// dump initial databases
2014-07-28 15:29:41 +02:00
SE_LOG_INFO(NULL, "creating complete data backup of datastore %s before sync (%s)",
2012-02-07 16:35:46 +01:00
sourceName.c_str(),
2012-01-10 09:25:38 +01:00
(m_client.getDumpData() && m_client.getPrintChanges()) ? "enabled with dumpData and needed for printChanges" :
m_client.getDumpData() ? "because it was enabled with dumpData" :
m_client.getPrintChanges() ? "needed for printChanges" :
"???");
2012-02-07 16:35:46 +01:00
dumpDatabases("before", &SyncSourceReport::m_backupBefore, sourceName);
2012-01-09 18:33:39 +01:00
if (m_client.getPrintChanges()) {
2010-10-29 16:00:50 +02:00
// compare against the old "after" database dump
2012-02-07 16:35:46 +01:00
dumpLocalChanges("", "after", "before", sourceName,
2010-10-29 16:00:50 +02:00
    StringPrintf("%s data changes to be applied during synchronization:\n",
                 m_client.isLocalSync() ? m_client.getContextName().c_str() : "Local"));
}
2006-03-19 22:37:30 +01:00
}
}
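The m_prepared set is what makes syncPrepare() safe to call repeatedly: each source is backed up at most once even when multiple SyncML sessions run back-to-back, and syncDone() clears the set when a later dump fails so that a stale "before" backup is not trusted. A minimal sketch of that guard, with illustrative names rather than the real SourceList members:

```cpp
// Sketch of the "dump once per source" guard behind syncPrepare()/syncDone().
// Names are hypothetical; the real code keeps the set inside SourceList.
#include <set>
#include <string>

class PrepareGuard {
    std::set<std::string> m_prepared;   // sources whose "before" dump was made
public:
    // returns true exactly once per source name; later calls are no-ops,
    // mirroring the early return at the top of syncPrepare()
    bool needsPrepare(const std::string &sourceName) {
        return m_prepared.insert(sourceName).second;
    }
    bool empty() const { return m_prepared.empty(); }
    // forget all backups, as syncDone() does after a failed "after" dump
    void clear() { m_prepared.clear(); }
};
```

std::set::insert() returns a pair whose second member reports whether the insertion actually happened, so membership test and registration collapse into one call.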
// call at the end of a sync with success == true
// if all went well to print report
2009-05-08 09:57:28 +02:00
void syncDone(SyncMLStatus status, SyncReport *report) {
    // record status - failures from now on only affect post-processing
    // and thus no longer change that result
    if (report) {
        report->setStatus(status == 0 ? STATUS_HTTP_OK : status);
    }
2010-10-29 16:00:50 +02:00
// dump database after sync if explicitly enabled or
// needed for comparison;
// in the latter case only if dumping it at the beginning completed
2012-01-09 18:33:39 +01:00
if (m_doLogging &&
    (m_client.getDumpData() ||
     (m_client.getPrintChanges() && m_reportTodo && !m_prepared.empty()))) {
2010-10-29 16:00:50 +02:00
try {
2013-04-08 19:17:36 +02:00
SE_LOG_INFO(NULL, "creating complete data backup after sync (%s)",
2012-01-10 09:25:38 +01:00
(m_client.getDumpData() && m_client.getPrintChanges()) ? "enabled with dumpData and needed for printChanges" :
m_client.getDumpData() ? "because it was enabled with dumpData" :
m_client.getPrintChanges() ? "needed for printChanges" :
"???");
2010-10-29 16:00:50 +02:00
dumpDatabases("after", &SyncSourceReport::m_backupAfter);
} catch (...) {
    Exception::handle();
    // not exactly sure what the problem was, but don't
    // try it again
    m_prepared.clear();
}
}
2006-03-19 22:37:30 +01:00
if (m_doLogging) {
2010-10-29 16:00:50 +02:00
if (m_reportTodo && !m_prepared.empty() && report) {
    // update report with more recent information about m_backupAfter
    updateSyncReport(*report);
2009-04-29 10:36:42 +02:00
}
2012-05-22 11:18:49 +02:00
// ensure that stderr is seen again
            m_logdir->restore();

// write out session status
            m_logdir->endSession();

            if (m_reportTodo) {
                // haven't looked at result of sync yet;
                // don't do it again
                m_reportTodo = false;
                string logfile = m_logdir->getLogfile();
                if (status == STATUS_OK) {
                    SE_LOG_SHOW(NULL, "\nSynchronization successful.");
                } else if (logfile.size()) {
                    SE_LOG_SHOW(NULL, "\nSynchronization failed, see %s for details.",
                                logfile.c_str());
                } else {
                    SE_LOG_SHOW(NULL, "\nSynchronization failed.");
                }

                // pretty-print report
                if (m_logLevel > LOGGING_QUIET) {
                    std::string procname = Logger::getProcessName();
                    SE_LOG_SHOW(NULL, "\nChanges applied during synchronization%s%s%s:",
                                procname.empty() ? "" : " (",
                                procname.c_str(),
                                procname.empty() ? "" : ")");
                }
                if (m_logLevel > LOGGING_QUIET && report) {
                    ostringstream out;
                    out << *report;
                    std::string slowSync = report->slowSyncExplanation(m_client.getPeer());
                    if (!slowSync.empty()) {
                        out << endl << slowSync;
                    }
                    SE_LOG_SHOW(NULL, "%s", out.str().c_str());
                }
                // compare databases?
                if (m_client.getPrintChanges()) {
                    dumpLocalChanges(m_logdir->getLogdir(),
                                     "before", "after", "",
                                     StringPrintf("\nData modified %s during synchronization:\n",
                                                  m_client.isLocalSync() ? m_client.getContextName().c_str() : "locally"),
                                     "CLIENT_TEST_LEFT_NAME='before sync' CLIENT_TEST_RIGHT_NAME='after sync' CLIENT_TEST_REMOVED='removed during sync' CLIENT_TEST_ADDED='added during sync'");
                }
// now remove some old logdirs
                m_logdir->expire();
            }
} else {
// finish debug session
            m_logdir->restore();
            m_logdir->endSession();
        }
    }

    /** copies information about sources into sync report */
    void updateSyncReport(SyncReport &report) {
        for (SyncSource *source: *this) {
            report.addSyncSourceReport(source->getName(), *source);
        }
    }
    /** returns names of active sources */
    set<string> getSources() {
        set<string> res;
        for (SyncSource *source: *this) {
            res.insert(source->getName());
        }
        return res;
    }

    ~SourceList() {
        // free sync sources
        for (SyncSource *source: *this) {
            delete source;
        }
    }
/** find sync source by name (both normal and virtual sources) */
    SyncSource *operator [] (const string &name) {
        for (SyncSource *source: *this) {
            if (name == source->getName()) {
                return source;
            }
        }
        for (std::shared_ptr<VirtualSyncSource> &source: m_virtualSources) {
            if (name == source->getName()) {
                return source.get();
            }
        }
        return nullptr;
    }

    /** find by XML <dbtypeid> (the ID used by Synthesis to identify sources in progress events) */
    SyncSource *lookupBySynthesisID(int synthesisid) {
        for (SyncSource *source: *this) {
            if (source->getSynthesisID() == synthesisid) {
                return source;
            }
        }
        for (std::shared_ptr<VirtualSyncSource> &source: m_virtualSources) {
            if (source->getSynthesisID() == synthesisid) {
                return source.get();
            }
        }
        return nullptr;
    }

    std::list<std::string> getSourceNames() const;
};

std::list<std::string> SourceList::getSourceNames() const
{
    std::list<std::string> sourceNames;
    for (SyncSource *source: *this) {
        sourceNames.push_back(source->getName());
    }
    return sourceNames;
}

void unref(SourceList *sourceList)
{
    delete sourceList;
}

UserInterface &SyncContext::getUserInterfaceNonNull()
{
    if (m_userInterface) {
        return *m_userInterface;
    } else {
        // Doesn't use keyring.
        static SimpleUserInterface dummy("0");
        return dummy;
    }
}

void SyncContext::requestAnotherSync()
{
    if (m_activeContext &&
        m_activeContext->m_engine.get() &&
        m_activeContext->m_session) {
        SharedKey sessionKey =
            m_activeContext->m_engine.OpenSessionKey(m_activeContext->m_session);
        m_activeContext->m_engine.SetInt32Value(sessionKey,
                                                "restartsync",
                                                true);
    }
}

const std::vector<SyncSource *> *SyncContext::getSources() const
{
    return m_sourceListPtr ?
        m_sourceListPtr->getSourceSet() :
        nullptr;
}

string SyncContext::getUsedSyncURL() {
    vector<string> urls = getSyncURL();
    for (string url: urls) {
        if (boost::starts_with(url, "http://") ||
            boost::starts_with(url, "https://")) {
#ifdef ENABLE_LIBSOUP
            return url;
#elif defined(ENABLE_LIBCURL)
            return url;
#endif
        } else if (url.find("obex-bt://") == 0) {
#ifdef ENABLE_BLUETOOTH
            return url;
#endif
        // (history: "support local sync (BMC #712)": syncURL = local://<context>
        // identifies the set of databases to synchronize with; the server side
        // forks a LocalTransportAgent which runs the client side and passes
        // messages back and forth over stream sockets, chosen because
        // unexpected peer shutdown can be detected.)
        } else if (boost::starts_with(url, "local://")) {
            return url;
        }
    }
    return "";
}

// (history: "rewrote signal handling": moved from SyncContext into
// SuspendFlags. Signal handlers write one byte per caught signal into a
// pipe which the glib event loop can watch; a boost::signal is emitted on
// state changes, and slots tracked via shared_ptr disconnect automatically,
// e.g. calling the current transport's cancel() when the state becomes
// "aborted".)
static void CancelTransport(TransportAgent *agent, SuspendFlags &flags)
{
    if (flags.getState() == SuspendFlags::ABORT) {
        SE_LOG_DEBUG(NULL, "CancelTransport: cancelling because of SuspendFlags::ABORT");
        agent->cancel();
    }
}
/**
 * common initialization for all kinds of transports, to be called
 * before using them
 */
static void InitializeTransport(const std::shared_ptr<TransportAgent> &agent,
                                int timeout)
{
    agent->setTimeout(timeout);
    // Automatically call cancel() when an abort request is detected.
    // Relies on automatic connection management to disconnect when
    // the agent is destructed.
    SuspendFlags &flags(SuspendFlags::getSuspendFlags());
    flags.m_stateChanged.connect(SuspendFlags::StateChanged_t::slot_type(CancelTransport, agent.get(), boost::placeholders::_1).track_foreign(agent));
}

std::shared_ptr<TransportAgent> SyncContext::createTransportAgent(void *gmainloop)
{
    string url = getUsedSyncURL();
    m_retryInterval = getRetryInterval();
    m_retryDuration = getRetryDuration();
    int timeout = m_serverMode ? m_retryDuration : min(m_retryInterval, m_retryDuration);
    if (m_localSync) {
        string peer = url.substr(strlen("local://"));
        auto agent = make_weak_shared::make<LocalTransportAgent>(this, peer, gmainloop);
        InitializeTransport(agent, timeout);
        agent->start();
        return agent;
    } else if (boost::starts_with(url, "http://") ||
               boost::starts_with(url, "https://")) {
#ifdef ENABLE_LIBSOUP
        auto agent = make_weak_shared::make<SoupTransportAgent>(static_cast<GMainLoop *>(gmainloop));
        // (history: "OBEX Client Transport" (#5188): in-process OBEX client
        // which binds over Bluetooth, integrates with gmainloop so transport
        // operations do not block the application, discovers the SyncML
        // channel asynchronously via SDP, and guards every C callback with
        // try/catch so that no exception is thrown through the C stack.)
        agent->setConfig(*this);
rewrote signal handling
Having the signal handling code in SyncContext created an unnecessary
dependency of some classes (in particular the transports) on
SyncContext.h. Now the code is in its own SuspendFlags.cpp/h files.
Cleaning up when the caller is done with signal handling is now part
of the utility class (removed automatically when guard instance is
freed).
The signal handlers now push one byte for each caught signal into a
pipe. That byte tells the rest of the code which message it needs to
print, which cannot be done in the signal handlers (because the
logging code is not reentrant and thus not safe to call from a signal
handler).
Compared to the previous solution, this solves several problems:
- no more race condition between setting and printing the message
- the pipe can be watched in a glib event loop, thus removing
the need to poll at regular intervals; polling is still possible
(and necessary) in those transports which do not integrate with
the event loop (CurlTransport) while it can be removed from
others (SoupTransport, OBEXTransport)
A boost::signal is emitted when the global SuspendFlags change.
Automatic connection management is used to disconnect instances which
are managed by boost::shared_ptr. For example, the current transport's
cancel() method is called when the state changes to "aborted".
The early connection phase of the OBEX transport can now also be
aborted (this required cleaning up that transport!).
Currently watching for aborts via the event loop only works for real
Unix signals, but not for "abort" flags set in derived SyncContext
instances. The plan is to change that by allowing a "set abort" on
SuspendFlags and thus making
SyncContext::checkForSuspend/checkForAbort() redundant.
The new class is used as follows:
- syncevolution command line without daemon uses it to control
suspend/abort directly
- syncevolution command line as client of syncevo-dbus-server
connects to the state change signal and relays it to the
syncevo-dbus-server session via D-Bus; now all operations
are protected like that, not just syncing
- syncevo-dbus-server installs its own handlers for SIGINT
and SIGTERM and tries to shut down when either of them
is received. SuspendFlags then doesn't activate its own
handler. Instead that handler is invoked by the
syncevo-dbus-server niam() handler, to suspend or abort
a running sync. Once syncs run in a separate process, the
syncevo-dbus-server should request that these processes
suspend or abort before shutting down itself.
- The syncevo-local-sync helper ignores SIGINT after a sync
has started. It would receive that signal when forked by
syncevolution in non-daemon mode and the user presses
CTRL-C. Now the signal is only handled in the parent
process, which suspends as part of its own side of
the SyncML session and aborts by sending a SIGTERM+SIGINT
to syncevo-local-sync. SIGTERM in syncevo-local-sync is
handled by SuspendFlags and is meant to abort whatever
is going on there at the moment (see below).
Aborting long-running operations like import/export or communication
via CardDAV or ActiveSync still needs further work. The backends need
to check the abort state and return early instead of continuing.
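The self-pipe technique described above can be sketched as follows (illustrative names, not the actual SuspendFlags API): the handler performs only an async-signal-safe write(), and the reading end can then be watched from an event loop or polled:

```cpp
#include <csignal>
#include <unistd.h>

static int sigPipe[2]; // [0] = read end, [1] = write end

extern "C" void onSignal(int sig)
{
    unsigned char byte = static_cast<unsigned char>(sig);
    // write() is async-signal-safe; logging functions are not,
    // so the message is printed later by whoever reads the pipe
    (void)write(sigPipe[1], &byte, 1);
}

// Installs the handler, raises SIGUSR1, and returns the signal
// number read back from the pipe (-1 on error).
int selfPipeDemo()
{
    if (pipe(sigPipe)) {
        return -1;
    }
    std::signal(SIGUSR1, onSignal);
    raise(SIGUSR1);
    unsigned char received = 0;
    // in the real code the read end would be watched via the event loop
    if (read(sigPipe[0], &received, 1) != 1) {
        return -1;
    }
    return received;
}
```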
2012-01-19 16:11:22 +01:00
InitializeTransport(agent, timeout);
OBEX Client Transport: in-process OBEX client (binding over Bluetooth, #5188)
Outgoing OBEX connection implementation; currently it only binds over
Bluetooth. It integrates with gmainloop so that the operations in the
transport do not block the whole application.
It uses Bluetooth SDP to automatically discover the channel providing the
SyncML service; this process is asynchronous. The callbacks sdp_source_cb
and sdp_callback are used for this purpose: sdp_source_cb is a GIOChannel
watch event callback which polls the underlying SDP socket, while
sdp_callback is invoked by BlueZ while processing SDP packets.
The callbacks obex_fd_source and obex_callback handle the OBEX processing
(Connect, Put, Get, Disconnect): obex_fd_source is a GIOChannel event source
callback which polls the underlying OBEX interface, while obex_callback is
invoked by libopenobex when it needs to deliver events to the application.
Connect is split into several steps, see CONNECT_STATUS for more detail.
Disconnect is invoked when shutDown is called, when processing in
obex_fd_source_cb fails, when a timeout occurs, or on user suspension. It
first tries to send a "Disconnect" command to the server and waits for the
response. If that operation fails, it disconnects anyway. It is important to
call wait after shutdown to ensure the transport is properly cleaned up.
Each callback function is protected by a try-catch block to ensure that no
exception propagates through the C stack. This is important because the
application would otherwise abort if an exception really is thrown.
Several smart pointers are used to avoid potential resource leaks. After
initialization the resource is held by ObexTransportAgent. The smart pointer
is copied onto the local stack when entering a function and handed back to
ObexTransportAgent only if the whole step succeeded and we want to continue.
First, this ensures that the resource is released at the latest when
ObexTransportAgent is destructed; second, it releases the resource as early
as possible. For example, cxxptr<ObexEvent> releases the resource during each
wait() so that the underlying poll is not processed if no transport activity
is expected by the application.
"SyncURL" is used consistently for the address of the remote peer to contact.
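The ownership pattern described here can be sketched with std::shared_ptr (the original code uses SyncEvolution's cxxptr<>; the Channel/Transport names below are illustrative stand-ins for the ObexTransportAgent members):

```cpp
#include <memory>
#include <stdexcept>

struct Channel { bool ok = true; }; // stand-in for an OBEX/SDP resource

class Transport {
    std::shared_ptr<Channel> m_channel; // owned between calls
public:
    Transport() : m_channel(std::make_shared<Channel>()) {}

    void step() {
        // take ownership for the duration of the call ...
        std::shared_ptr<Channel> channel = std::move(m_channel);
        if (!channel || !channel->ok) {
            // error path: channel is released here automatically,
            // instead of lingering inside the transport
            throw std::runtime_error("transport failure");
        }
        // ... and hand it back only if the whole step succeeded
        m_channel = std::move(channel);
    }

    bool hasChannel() const { return static_cast<bool>(m_channel); }
};
```

On success the resource survives the call; on any error or exception it is freed immediately, which is the "release as early as possible" behavior the commit message describes.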
2009-11-13 06:13:12 +01:00
return agent;
2009-02-15 15:22:07 +01:00
#elif defined(ENABLE_LIBCURL)
2018-01-16 17:17:34 +01:00
auto agent = std::make_shared<CurlTransportAgent>();
2012-09-24 15:33:15 +02:00
agent->setConfig(*this);
InitializeTransport(agent, timeout);
return agent;
2009-02-15 15:22:07 +01:00
#endif
2016-08-29 13:52:49 +02:00
} else if (boost::starts_with(url, "obex-bt://")) {
2009-11-13 06:13:12 +01:00
#ifdef ENABLE_BLUETOOTH
std::string btUrl = url.substr(strlen("obex-bt://"), std::string::npos);
2018-01-16 17:17:34 +01:00
auto agent = std::make_shared<ObexTransportAgent>(ObexTransportAgent::OBEX_BLUETOOTH,
                                                  static_cast<GMainLoop *>(gmainloop));
2009-11-13 06:13:12 +01:00
agent->setURL(btUrl);
2012-01-19 16:11:22 +01:00
InitializeTransport(agent, timeout);
// this will block already
2009-11-13 06:13:12 +01:00
agent->connect();
return agent;
#endif
}
2009-11-17 12:45:38 +01:00
SE_THROW("unsupported transport type is specified in the configuration");
2009-02-15 15:22:07 +01:00
}
2009-10-05 14:49:32 +02:00
void SyncContext::displayServerMessage(const string &message)
2009-02-01 16:16:16 +01:00
{
2013-04-08 19:17:36 +02:00
SE_LOG_INFO(NULL, "message from server: %s", message.c_str());
2009-02-01 16:16:16 +01:00
}
2009-10-05 14:49:32 +02:00
void SyncContext::displaySyncProgress(sysync::TProgressEventEnum type,
2009-02-01 16:16:16 +01:00
                                      int32_t extra1, int32_t extra2, int32_t extra3)
{
}
sync: less verbose output, shorter runtime
For each incoming change, one INFO line with "received x[/out of y]"
was printed, immediately followed by another line with total counts
"added x, updated y, removed z". For each outgoing change, a "sent
x[/out of y]" line was printed.
In addition, these changes were forwarded to the D-Bus server, where a
"percent complete" was calculated and broadcast to clients. All of
that caused a very high overhead for every single change, even if the
actual logging was off. The syncevo-dbus-server was constantly
consuming CPU time during a sync when it should have been mostly idle.
To avoid this overhead, the updated received/sent numbers that come
from the Synthesis engine are now cached and only processed when a
SyncML message is done or some other event happens (whichever happens
first).
To keep the implementation simple, the "added x, updated y, removed z"
information is ignored completely and no longer appears in the output.
As a result, syncevo-dbus-server is now almost completely idle during
a running sync with no log output. Such a sync involving 10000 contacts
was sped up from 37s to 26s total runtime.
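The caching described here can be sketched as a small coalescing buffer (illustrative names, not the real SyncContext members); displaySourceProgress() below applies the same idea to Synthesis progress events:

```cpp
#include <string>
#include <vector>

// Identical consecutive progress events overwrite a cached entry and are
// emitted only on flush, so 100 "received i/100" events produce one line.
struct ProgressCache {
    int m_type = 0;                  // 0 = nothing cached (like PEV_NOP)
    int m_count = 0;
    std::vector<std::string> m_log;  // stands in for SE_LOG_INFO output

    void event(int type, int count) {
        if (m_type != 0 && m_type != type) {
            flush();                 // switching event type forces output
        }
        m_type = type;               // later events overwrite earlier ones
        m_count = count;
    }

    void flush() {
        if (m_type != 0) {
            m_log.push_back("type " + std::to_string(m_type) +
                            ": " + std::to_string(m_count));
            m_type = 0;
        }
    }
};
```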
2013-07-11 11:46:07 +02:00
bool SyncContext::displaySourceProgress(SyncSource &source,
                                        const SyncSourceEvent &event,
                                        bool flush)
2009-02-01 16:16:16 +01:00
{
2013-07-11 11:46:07 +02:00
if (!flush) {
    // Certain events do not need to be printed immediately.
    // For example, instead of multiple PEV_ITEMRECEIVED events
    // foo: received 1/100
    // foo: received 2/100
    // foo: ...
    // foo: received 100/100
    // it is better to just print one:
    // foo: received 100/100
    switch (event.m_type) {
    case sysync::PEV_ITEMPROCESSED:
        // Ignore this one completely. There is one such event
        // after each PEV_ITEMRECEIVED, so processing
        // PEV_ITEMPROCESSED would break the merging of
        // PEV_ITEMRECEIVED, at least the way it is implemented
        // now. PEV_ITEMPROCESSED also doesn't add much
        // information.
        return true;
    case sysync::PEV_DELETING:
    case sysync::PEV_ITEMRECEIVED:
    case sysync::PEV_ITEMSENT:
        // Flush when switching to a different event type or source.
        if (m_sourceEvent.m_type != sysync::PEV_NOP &&
            (m_sourceEvent.m_type != event.m_type ||
             m_sourceProgress != &source)) {
            displaySourceProgress(*m_sourceProgress, m_sourceEvent, true);
        }
        m_sourceEvent.m_type = event.m_type;
        m_sourceEvent.m_extra1 = event.m_extra1;
        m_sourceEvent.m_extra2 = event.m_extra2;
        m_sourceEvent.m_extra3 = event.m_extra3;
        m_sourceProgress = &source;
        return true;
    default:
        if (m_sourceEvent.m_type != sysync::PEV_NOP) {
            displaySourceProgress(*m_sourceProgress, m_sourceEvent, true);
            m_sourceEvent.m_type = sysync::PEV_NOP;
        }
        break;
    }
}
switch (event.m_type) {
2009-02-23 16:36:17 +01:00
case sysync::PEV_PREPARING:
2009-02-01 16:16:16 +01:00
/* preparing (e.g. preflight in some clients), extra1=progress, extra2=total */
2009-02-03 10:06:41 +01:00
/* extra2 might be zero */
2010-02-19 18:32:07 +01:00
/*
 * At the moment, preparing items doesn't do any real work.
 * Printing this progress just increases the output and slows
 * us down. Disabled.
 */
if (true || source.getFinalSyncMode() == SYNC_NONE) {
2009-09-23 14:48:38 +02:00
// not active, suppress output
2013-07-11 11:46:07 +02:00
} else if (event.m_extra2) {
2013-04-08 19:17:36 +02:00
SE_LOG_INFO(NULL, "%s: preparing %d/%d",
2013-07-11 11:46:07 +02:00
            source.getDisplayName().c_str(), event.m_extra1, event.m_extra2);
2009-02-03 10:06:41 +01:00
} else {
2013-04-08 19:17:36 +02:00
SE_LOG_INFO(NULL, "%s: preparing %d",
2013-07-11 11:46:07 +02:00
            source.getDisplayName().c_str(), event.m_extra1);
2009-02-03 10:06:41 +01:00
}
2009-02-01 16:16:16 +01:00
break;
2009-02-23 16:36:17 +01:00
case sysync::PEV_DELETING:
2009-02-01 16:16:16 +01:00
/* deleting (zapping datastore), extra1=progress, extra2=total */
2013-07-11 11:46:07 +02:00
if (event.m_extra2) {
2013-04-08 19:17:36 +02:00
SE_LOG_INFO(NULL, "%s: deleting %d/%d",
2013-07-11 11:46:07 +02:00
            source.getDisplayName().c_str(), event.m_extra1, event.m_extra2);
2009-02-03 10:06:41 +01:00
} else {
2013-04-08 19:17:36 +02:00
SE_LOG_INFO(NULL, "%s: deleting %d",
2013-07-11 11:46:07 +02:00
            source.getDisplayName().c_str(), event.m_extra1);
2009-02-03 10:06:41 +01:00
}
2009-02-01 16:16:16 +01:00
break;
2009-02-23 16:36:17 +01:00
case sysync::PEV_ALERTED: {
2009-02-01 16:16:16 +01:00
/* datastore alerted (extra1=0 for normal, 1 for slow, 2 for first time slow,
2009-02-19 16:00:26 +01:00
   extra2=1 for resumed session,
   extra3: 0=twoway, 1=fromserver, 2=fromclient */
2009-12-22 09:47:31 +01:00
// -1 is used for alerting a restore from backup. Synthesis won't use this.
2011-10-24 19:52:01 +02:00
bool peerIsClient = getPeerIsClient();
2013-07-11 11:46:07 +02:00
if (event.m_extra1 != -1) {
2013-04-08 19:17:36 +02:00
SE_LOG_INFO(NULL, "%s: %s %s sync%s (%s)",
2011-01-18 15:07:46 +01:00
            source.getDisplayName().c_str(),
2013-07-11 11:46:07 +02:00
            event.m_extra2 ? "resuming" : "starting",
            event.m_extra1 == 0 ? "normal" :
            event.m_extra1 == 1 ? "slow" :
            event.m_extra1 == 2 ? "first time" :
2009-12-22 09:47:31 +01:00
            "unknown",
2013-07-11 11:46:07 +02:00
event . m_extra3 = = 0 ? " , two-way " :
event . m_extra3 = = 1 ? " from server " :
event . m_extra3 = = 2 ? " from client " :
2011-10-24 19:52:01 +02:00
" , unknown direction " ,
peerIsClient ? " peer is client " : " peer is server " ) ;
2009-12-22 09:47:31 +01:00
2011-10-24 19:52:01 +02:00
SimpleSyncMode mode = SIMPLE_SYNC_NONE ;
2012-08-31 12:21:11 +02:00
SyncMode sync = StringToSyncMode ( source . getSync ( ) ) ;
        switch (event.m_extra1) {
        case 0:
            switch (event.m_extra3) {
            case 0:
                mode = SIMPLE_SYNC_TWO_WAY;
                if (m_serverMode &&
                    m_serverAlerted) {
                    if (sync == SYNC_ONE_WAY_FROM_SERVER ||
                        sync == SYNC_ONE_WAY_FROM_LOCAL) {
                        // As in the slow/refresh-from-server case below,
                        // pretending to do a two-way incremental sync
                        // is a correct way of executing the requested
                        // one-way sync, as long as the client doesn't
                        // send any of its own changes. The Synthesis
                        // engine does that.
                        mode = SIMPLE_SYNC_ONE_WAY_FROM_LOCAL;
                    } else if (sync == SYNC_LOCAL_CACHE_SLOW ||
                               sync == SYNC_LOCAL_CACHE_INCREMENTAL) {
                        mode = SIMPLE_SYNC_LOCAL_CACHE_INCREMENTAL;
                    }
                }
                break;
            case 1:
                mode = peerIsClient ? SIMPLE_SYNC_ONE_WAY_FROM_LOCAL : SIMPLE_SYNC_ONE_WAY_FROM_REMOTE;
                break;
            case 2:
                mode = peerIsClient ? SIMPLE_SYNC_ONE_WAY_FROM_REMOTE : SIMPLE_SYNC_ONE_WAY_FROM_LOCAL;
                break;
            }
            break;
        case 1:
        case 2:
            switch (event.m_extra3) {
            case 0:
                mode = SIMPLE_SYNC_SLOW;
                if (m_serverMode &&
                    m_serverAlerted) {
                    if (sync == SYNC_REFRESH_FROM_SERVER ||
                        sync == SYNC_REFRESH_FROM_LOCAL) {
                        // We run as server and told the client to refresh
                        // its data. A slow sync is how some clients (the
                        // Synthesis engine included) execute that sync mode;
                        // let's be optimistic and assume that the client
                        // did as it was told and deleted its data.
                        mode = SIMPLE_SYNC_REFRESH_FROM_LOCAL;
                    } else if (sync == SYNC_LOCAL_CACHE_SLOW ||
                               sync == SYNC_LOCAL_CACHE_INCREMENTAL) {
                        mode = SIMPLE_SYNC_LOCAL_CACHE_SLOW;
                    }
                }
                break;
            case 1:
                mode = peerIsClient ? SIMPLE_SYNC_REFRESH_FROM_LOCAL : SIMPLE_SYNC_REFRESH_FROM_REMOTE;
                break;
            case 2:
                mode = peerIsClient ? SIMPLE_SYNC_REFRESH_FROM_REMOTE : SIMPLE_SYNC_REFRESH_FROM_LOCAL;
                break;
            }
            break;
        }

        if (SyncMode(mode) != SYNC_NONE) {
            SE_LOG_DEBUG(NULL, "reading: set read-ahead based on sync mode %s",
                         PrettyPrintSyncMode(SyncMode(mode)).c_str());
            switch (mode) {
            case SIMPLE_SYNC_NONE:
            case SIMPLE_SYNC_INVALID:
            case SIMPLE_SYNC_RESTORE_FROM_BACKUP:
            case SIMPLE_SYNC_ONE_WAY_FROM_REMOTE:
            case SIMPLE_SYNC_REFRESH_FROM_REMOTE:
            case SIMPLE_SYNC_LOCAL_CACHE_INCREMENTAL:
                source.setReadAheadOrder(SyncSourceBase::READ_NONE);
                break;
            case SIMPLE_SYNC_TWO_WAY:
            case SIMPLE_SYNC_ONE_WAY_FROM_LOCAL:
                source.setReadAheadOrder(SyncSourceBase::READ_CHANGED_ITEMS);
                break;
            case SIMPLE_SYNC_SLOW:
            case SIMPLE_SYNC_REFRESH_FROM_LOCAL:
            case SIMPLE_SYNC_LOCAL_CACHE_SLOW:
                source.setReadAheadOrder(SyncSourceBase::READ_ALL_ITEMS);
                break;
            }
        }

        if (source.getFinalSyncMode() == SYNC_NONE) {
            source.recordFinalSyncMode(SyncMode(mode));
            source.recordFirstSync(event.m_extra1 == 2);
            source.recordResumeSync(event.m_extra2 == 1);
        } else if (SyncMode(mode) != SYNC_NONE) {
            // Broadcast statistics before moving into next cycle.
            m_sourceSyncedSignal(source.getName(), source);
            // may happen when the source is used in multiple
            // SyncML sessions; only remember the initial sync
            // mode in that case and count all following syncs
            // (they should only finish the work of the initial
            // one)
            source.recordRestart();
            if (m_serverMode) {
                // Done with first cycle, revert to normal photo
                // handling if it was disabled.
                SharedKey sessionKey = m_engine.OpenSessionKey(m_session);
                SharedKey contextKey = m_engine.OpenKeyByPath(sessionKey, "/sessionvars");
                m_engine.SetInt32Value(contextKey, "keepPhotoData", false);
            }
            // Reset "started" flags for PEV_SYNCSTART.
            m_sourceStarted.clear();
        }
    } else {
        SE_LOG_INFO(NULL, "%s: restore from backup", source.getDisplayName().c_str());
        source.recordFinalSyncMode(SYNC_RESTORE_FROM_BACKUP);
    }
    break;
}
case sysync::PEV_SYNCSTART:
    /* sync started */
    /* Gets triggered by libsynthesis frequently. Limit it to once per sync cycle. */
    if (m_sourceStarted.find(source.getName()) == m_sourceStarted.end()) {
        SE_LOG_INFO(NULL, "%s: started",
                    source.getDisplayName().c_str());
        m_sourceStarted.insert(source.getName());
    }
    break;
case sysync::PEV_ITEMRECEIVED:
    /* item received, extra1=current item count,
       extra2=number of expected changes (if >= 0) */
    if (source.getFinalSyncMode() == SYNC_NONE) {
    } else if (event.m_extra2 > 0) {
        SE_LOG_INFO(NULL, "%s: received %d/%d",
                    source.getDisplayName().c_str(), event.m_extra1, event.m_extra2);
    } else {
        SE_LOG_INFO(NULL, "%s: received %d",
                    source.getDisplayName().c_str(), event.m_extra1);
    }
    source.recordTotalNumItemsReceived(event.m_extra1);
    break;
case sysync::PEV_ITEMSENT:
    /* item sent, extra1=current item count,
       extra2=number of expected items to be sent (if >= 0) */
    if (source.getFinalSyncMode() == SYNC_NONE) {
    } else if (event.m_extra2 > 0) {
        SE_LOG_INFO(NULL, "%s: sent %d/%d",
                    source.getDisplayName().c_str(), event.m_extra1, event.m_extra2);
    } else {
        SE_LOG_INFO(NULL, "%s: sent %d",
                    source.getDisplayName().c_str(), event.m_extra1);
    }
    source.recordTotalNumItemsSent(event.m_extra1);
    break;
sync: less verbose output, shorter runtime
For each incoming change, one INFO line with "received x[/out of y]"
was printed, immediately followed by another line with total counts
"added x, updated y, removed z". For each outgoing change, a "sent
x[/out of y]" was printed.
In addition, these changes were forwarded to the D-Bus server where a
"percent complete" was calculated and broadcasted to clients. All of
that caused a very high overhead for every single change, even if the
actual logging was off. The syncevo-dbus-server was constantly
consuming CPU time during a sync when it should have been mostly idle.
To avoid this overhead, the updated received/sent numbers that come
from the Synthesis engine are now cached and only processed when done
with a SyncML message or some other event happens (whatever happens
first).
To keep the implementation simple, the "added x, updated y, removed z"
information is ignored completely and no longer appears in the output.
As a result, syncevo-dbus-server is now almost completely idle during
a running sync with no log output. Such a sync involving 10000 contacts
was sped up from 37s to 26s total runtime.
2013-07-11 11:46:07 +02:00
// Not reached, see above.
2009-02-23 16:36:17 +01:00
case sysync : : PEV_ITEMPROCESSED :
2009-02-01 16:16:16 +01:00
/* item locally processed, extra1=# added,
extra2 = # updated ,
extra3 = # deleted */
2009-09-23 14:48:38 +02:00
if ( source . getFinalSyncMode ( ) = = SYNC_NONE ) {
} else if ( source . getFinalSyncMode ( ) ! = SYNC_NONE ) {
2013-04-08 19:17:36 +02:00
SE_LOG_INFO ( NULL , " %s: added %d, updated %d, removed %d " ,
                            source.getDisplayName().c_str(), event.m_extra1, event.m_extra2, event.m_extra3);
            }
            break;
        case sysync::PEV_SYNCEND: {
            /* sync finished, probably with error in extra1 (0=ok),
               syncmode in extra2 (0=normal, 1=slow, 2=first time),
               extra3=1 for resumed session) */
            if (source.getFinalSyncMode() == SYNC_NONE) {
                SE_LOG_INFO(NULL, "%s: inactive", source.getDisplayName().c_str());
            } else if (source.getFinalSyncMode() == SYNC_RESTORE_FROM_BACKUP) {
                SE_LOG_INFO(NULL, "%s: restore done %s",
                            source.getDisplayName().c_str(),
                            event.m_extra1 ? "unsuccessfully" : "successfully");
            } else {
                SE_LOG_INFO(NULL, "%s: %s%s sync done %s",
                            source.getDisplayName().c_str(),
                            event.m_extra3 ? "resumed " : "",
                            event.m_extra2 == 0 ? "normal" :
                            event.m_extra2 == 1 ? "slow" :
                            event.m_extra2 == 2 ? "first time" :
                            "unknown",
                            event.m_extra1 ? "unsuccessfully" : "successfully");
            }
            int32_t extra1 = event.m_extra1;
            switch (extra1) {
            case 401:
                // TODO: reset cached password
                SE_LOG_INFO(NULL, "authorization failed, check username '%s' and password", getSyncUser().toString().c_str());
                break;
            case 403:
                SE_LOG_INFO(source.getDisplayName(), "log in succeeded, but server refuses access - contact server operator");
                break;
            case 407:
                SE_LOG_INFO(NULL, "proxy authorization failed, check proxy username and password");
                break;
            case 404:
                SE_LOG_INFO(source.getDisplayName(), "server database not found, check URI '%s'", source.getURINonEmpty().c_str());
                break;
            case 0:
                break;
            case sysync::LOCERR_DATASTORE_ABORT:
                // this can mean only one thing in SyncEvolution: unexpected slow sync
                extra1 = STATUS_UNEXPECTED_SLOW_SYNC;
                // no break!
default :
// Printing unknown status codes here is of somewhat questionable value,
// because even "good" sources will get a bad status when the overall
// session turns bad. We also don't have good explanations for the
// status here.
                SE_LOG_ERROR(source.getDisplayName(), "%s", Status2String(SyncMLStatus(event.m_extra1)).c_str());
                break;
            }
            source.recordStatus(SyncMLStatus(extra1));
            break;
        }
        case sysync::PEV_DSSTATS_L:
            /* datastore statistics for local (extra1=# added,
               extra2=# updated,
               extra3=# deleted) */
redesigned SyncSource base class + API
The main motivation for this change is that it allows the implementor
of a backend to choose the implementations for the different aspects
of a datasource (change tracking, item import/export, logging, ...)
independently of each other. For example, change tracking via revision
strings can now be combined with exchanging data with the Synthesis
engine via a single string (the traditional method in SyncEvolution)
and with direct access to the Synthesis field list (now possible for
the first time).
The new backend API is based on the concept of providing
implementations for certain functionality via function objects instead
of implementing certain virtual methods. The advantage is that
implementors can define their own, custom interfaces and mix and match
implementations of the different groups of functionality.
Logging (see SyncSourceLogging in a later commit) can be done by
wrapping some arbitrary other item import/export function objects
(decorator design pattern).
The class hierarchy is now this:
- SyncSourceBase: interface for common utility code, all other
classes are derived from it and thus can use that code
- SyncSource: base class which implements SyncSourceBase and
hooks a datasource into the SyncEvolution core;
its "struct Operations" holds the function objects which
can be implemented in different ways
- TestingSyncSource: combines some of the following classes
into an interface that is expected by the client-test
program; backends only have to derive from (and implement this)
if they want to use the automated testing
- TrackingSyncSource: provides the same functionality as
before (change tracking via revision strings, item import/export
as string) in a single interface; the description of the pure
virtual methods are duplicated so that developers can go through
this class and find everything they need to know to implement
it
The following classes contain the code that was previously
found in the EvolutionSyncSource base class. Implementors
can derive from them and call the init() methods to inherit
and activate the functionality:
- SyncSourceSession: binds Synthesis session callbacks to
virtual methods beginSync(), endSync()
- SyncSourceChanges: implements Synthesis item tracking callbacks
with set of LUIDs that the user of the class has to fill
- SyncSourceDelete: binds Synthesis delete callback to
virtual method
- SyncSourceRaw: read and write items in the backends format,
used for testing and backup/restore
- SyncSourceSerialize: exchanges items with Synthesis engine
using a string representation of the data; this is how
EvolutionSyncSource has traditionally worked, so much of the
same virtual methods are now in this class
- SyncSourceRevisions: utility class which does change tracking
via some kind of "revision" string which changes each time
an item is modified; this code was previously in the
TrackingSyncSource
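// The function-object idea described above ("struct Operations" holds the
// implementations instead of fixed virtual methods) can be sketched as
// follows. The member names (m_startSession, m_readItem) and DemoSource are
// invented for this example and do not match the real SyncSource::Operations.

```cpp
#include <functional>
#include <string>

// A source is assembled from independently replaceable function objects,
// so change tracking, item access, etc. can be mixed and matched.
struct Operations {
    std::function<void()> m_startSession;
    std::function<std::string(const std::string &luid)> m_readItem;
};

struct DemoSource {
    Operations m_operations;
};
```

// A backend fills in only the slots it implements, e.g. with lambdas or a
// decorator that wraps another implementation to add logging.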
            source.setItemStat(SyncSource::ITEM_LOCAL,
                               SyncSource::ITEM_ADDED,
                               SyncSource::ITEM_TOTAL,
                               event.m_extra1);
            source.setItemStat(SyncSource::ITEM_LOCAL,
                               SyncSource::ITEM_UPDATED,
                               SyncSource::ITEM_TOTAL,
                               event.m_extra2);
            source.setItemStat(SyncSource::ITEM_LOCAL,
                               SyncSource::ITEM_REMOVED,
                               SyncSource::ITEM_TOTAL,
                               // Synthesis engine doesn't count locally
                               // deleted items during
                               // refresh-from-server/client. That's a matter of
                               // taste. In SyncEvolution we'd like these
                               // items to show up, so add it here.
                               (source.getFinalSyncMode() == (m_serverMode ? SYNC_REFRESH_FROM_CLIENT : SYNC_REFRESH_FROM_SERVER) ||
                                source.getFinalSyncMode() == SYNC_REFRESH_FROM_REMOTE) ?
                               source.getNumDeleted() :
                               event.m_extra3);
            break;
        case sysync::PEV_DSSTATS_R:
            /* datastore statistics for remote (extra1=# added,
               extra2=# updated,
               extra3=# deleted) */
            source.setItemStat(SyncSource::ITEM_REMOTE,
                               SyncSource::ITEM_ADDED,
                               SyncSource::ITEM_TOTAL,
                               event.m_extra1);
redesigned SyncSource base class + API
The main motivation for this change is that it allows the implementor
of a backend to choose the implementations for the different aspects
of a datasource (change tracking, item import/export, logging, ...)
independently of each other. For example, change tracking via revision
strings can now be combined with exchanging data with the Synthesis
engine via a single string (the traditional method in SyncEvolution)
and with direct access to the Synthesis field list (now possible for
the first time).
The new backend API is based on the concept of providing
implementations for certain functionality via function objects instead
of implementing certain virtual methods. The advantage is that
implementors can define their own, custom interfaces and mix and match
implementations of the different groups of functionality.
Logging (see SyncSourceLogging in a later commit) can be done by
wrapping some arbitrary other item import/export function objects
(decorator design pattern).
The class hierarchy is now this:
- SyncSourceBase: interface for common utility code, all other
classes are derived from it and thus can use that code
- SyncSource: base class which implements SyncSourceBase and
hooks a datasource into the SyncEvolution core;
its "struct Operations" holds the function objects which
can be implemented in different ways
- TestingSyncSource: combines some of the following classes
into an interface that is expected by the client-test
program; backends only have to derive from (and implement this)
if they want to use the automated testing
- TrackingSyncSource: provides the same functionality as
before (change tracking via revision strings, item import/export
as string) in a single interface; the description of the pure
virtual methods are duplicated so that developers can go through
this class and find everything they need to know to implement
it
The following classes contain the code that was previously
found in the EvolutionSyncSource base class. Implementors
can derive from them and call the init() methods to inherit
and activate the functionality:
- SyncSourceSession: binds Synthesis session callbacks to
virtual methods beginSync(), endSync()
- SyncSourceChanges: implements Synthesis item tracking callbacks
with set of LUIDs that the user of the class has to fill
- SyncSourceDelete: binds Synthesis delete callback to
virtual method
- SyncSourceRaw: read and write items in the backends format,
used for testing and backup/restore
- SyncSourceSerialize: exchanges items with Synthesis engine
using a string representation of the data; this is how
EvolutionSyncSource has traditionally worked, so many of the
same virtual methods are now in this class
- SyncSourceRevisions: utility class which does change tracking
via some kind of "revision" string which changes each time
an item is modified; this code was previously in the
TrackingSyncSource
2009-08-25 09:27:46 +02:00
source.setItemStat(SyncSource::ITEM_REMOTE,
                   SyncSource::ITEM_UPDATED,
                   SyncSource::ITEM_TOTAL,
sync: less verbose output, shorter runtime
For each incoming change, one INFO line with "received x[/out of y]"
was printed, immediately followed by another line with total counts
"added x, updated y, removed z". For each outgoing change, a "sent
x[/out of y]" was printed.
In addition, these changes were forwarded to the D-Bus server where a
"percent complete" was calculated and broadcast to clients. All of
that caused a very high overhead for every single change, even if the
actual logging was off. The syncevo-dbus-server was constantly
consuming CPU time during a sync when it should have been mostly idle.
To avoid this overhead, the updated received/sent numbers that come
from the Synthesis engine are now cached and only processed when done
with a SyncML message or some other event happens (whatever happens
first).
To keep the implementation simple, the "added x, updated y, removed z"
information is ignored completely and no longer appears in the output.
As a result, syncevo-dbus-server is now almost completely idle during
a running sync with no log output. Such a sync involving 10000 contacts
was sped up from 37s to 26s total runtime.
2013-07-11 11:46:07 +02:00
                   event.m_extra2);
source.setItemStat(SyncSource::ITEM_REMOTE,
                   SyncSource::ITEM_REMOVED,
                   SyncSource::ITEM_TOTAL,
                   event.m_extra3);
2009-02-01 16:16:16 +01:00
break;
2009-02-23 16:36:17 +01:00
case sysync::PEV_DSSTATS_E:
/* datastore statistics for local/remote rejects (extra1=# locally rejected,
   extra2=# remotely rejected) */
source.setItemStat(SyncSource::ITEM_LOCAL,
                   SyncSource::ITEM_ANY,
                   SyncSource::ITEM_REJECT,
                   event.m_extra1);
source.setItemStat(SyncSource::ITEM_REMOTE,
                   SyncSource::ITEM_ANY,
                   SyncSource::ITEM_REJECT,
                   event.m_extra2);
break;
case sysync::PEV_DSSTATS_S:
2009-02-02 15:07:23 +01:00
/* datastore statistics for server slowsync (extra1=# slowsync matches) */
source.setItemStat(SyncSource::ITEM_REMOTE,
                   SyncSource::ITEM_ANY,
                   SyncSource::ITEM_MATCH,
                   event.m_extra1);
break;
case sysync::PEV_DSSTATS_C:
/* datastore statistics for server conflicts (extra1=# server won,
   extra2=# client won,
   extra3=# duplicated) */
source.setItemStat(SyncSource::ITEM_REMOTE,
                   SyncSource::ITEM_ANY,
                   SyncSource::ITEM_CONFLICT_SERVER_WON,
                   event.m_extra1);
source.setItemStat(SyncSource::ITEM_REMOTE,
                   SyncSource::ITEM_ANY,
                   SyncSource::ITEM_CONFLICT_CLIENT_WON,
                   event.m_extra2);
source.setItemStat(SyncSource::ITEM_REMOTE,
                   SyncSource::ITEM_ANY,
                   SyncSource::ITEM_CONFLICT_DUPLICATED,
            /* sync: less verbose output, shorter runtime
               For each incoming change, one INFO line with "received x[/out of y]"
               was printed, immediately followed by another line with total counts
               "added x, updated y, removed z". For each outgoing change, a "sent
               x[/out of y]" was printed.
               In addition, these changes were forwarded to the D-Bus server where a
               "percent complete" was calculated and broadcast to clients. All of
               that caused a very high overhead for every single change, even if the
               actual logging was off. The syncevo-dbus-server was constantly
               consuming CPU time during a sync when it should have been mostly idle.
               To avoid this overhead, the updated received/sent numbers that come
               from the Synthesis engine are now cached and only processed when done
               with a SyncML message or when some other event happens (whichever
               happens first).
               To keep the implementation simple, the "added x, updated y, removed z"
               information is ignored completely and no longer appears in the output.
               As a result, syncevo-dbus-server is now almost completely idle during
               a running sync with no log output. Such a sync involving 10000 contacts
               was sped up from 37s to 26s total runtime. */
                               event.m_extra3);
        break;
    case sysync::PEV_DSSTATS_D:
        /* datastore statistics for data volume (extra1=outgoing bytes,
           extra2=incoming bytes) */
        /* redesigned SyncSource base class + API
           The main motivation for this change is that it allows the implementor
           of a backend to choose the implementations for the different aspects
           of a datasource (change tracking, item import/export, logging, ...)
           independently of each other. For example, change tracking via revision
           strings can now be combined with exchanging data with the Synthesis
           engine via a single string (the traditional method in SyncEvolution)
           and with direct access to the Synthesis field list (now possible for
           the first time).
           The new backend API is based on the concept of providing
           implementations for certain functionality via function objects instead
           of implementing certain virtual methods. The advantage is that
           implementors can define their own custom interfaces and mix and match
           implementations of the different groups of functionality.
           Logging (see SyncSourceLogging in a later commit) can be done by
           wrapping some arbitrary other item import/export function objects
           (decorator design pattern).
           The class hierarchy is now this:
           - SyncSourceBase: interface for common utility code; all other
             classes are derived from it and thus can use that code
           - SyncSource: base class which implements SyncSourceBase and
             hooks a datasource into the SyncEvolution core;
             its "struct Operations" holds the function objects which
             can be implemented in different ways
           - TestingSyncSource: combines some of the following classes
             into an interface that is expected by the client-test
             program; backends only have to derive from (and implement) this
             if they want to use the automated testing
           - TrackingSyncSource: provides the same functionality as
             before (change tracking via revision strings, item import/export
             as string) in a single interface; the descriptions of the pure
             virtual methods are duplicated so that developers can go through
             this class and find everything they need to know to implement
             it
           The following classes contain the code that was previously
           found in the EvolutionSyncSource base class. Implementors
           can derive from them and call the init() methods to inherit
           and activate the functionality:
           - SyncSourceSession: binds Synthesis session callbacks to
             virtual methods beginSync(), endSync()
           - SyncSourceChanges: implements Synthesis item tracking callbacks
             with a set of LUIDs that the user of the class has to fill
           - SyncSourceDelete: binds Synthesis delete callback to a
             virtual method
           - SyncSourceRaw: reads and writes items in the backend's format,
             used for testing and backup/restore
           - SyncSourceSerialize: exchanges items with the Synthesis engine
             using a string representation of the data; this is how
             EvolutionSyncSource has traditionally worked, so many of the
             same virtual methods are now in this class
           - SyncSourceRevisions: utility class which does change tracking
             via some kind of "revision" string which changes each time
             an item is modified; this code was previously in the
             TrackingSyncSource */
            source.setItemStat(SyncSource::ITEM_LOCAL,
                               SyncSource::ITEM_ANY,
                               SyncSource::ITEM_SENT_BYTES,
                               event.m_extra1);
            source.setItemStat(SyncSource::ITEM_LOCAL,
                               SyncSource::ITEM_ANY,
                               SyncSource::ITEM_RECEIVED_BYTES,
                               event.m_extra2);
        break;
    case sysync::PEV_NOP:
        // Handled, do not process further.
        return true;
        break;
    default:
        SE_LOG_DEBUG(NULL, "%s: progress event %d, extra %d/%d/%d",
                     source.getDisplayName().c_str(),
                     event.m_type, event.m_extra1, event.m_extra2, event.m_extra3);
    }
    return false;
}
/*
 * There have been segfaults inside glib in the background
 * thread which ran the second event loop. Disabled it again,
 * even though the synchronous EDS API calls will block then
 * when EDS dies.
 */
#if 0 && defined(HAVE_GLIB) && defined(HAVE_EDS)
# define RUN_GLIB_LOOP
#endif

#ifdef RUN_GLIB_LOOP
static void *mainLoopThread(void *)
{
    // The test framework uses SIGALRM for timeouts.
    // Block the signal here because a) the signal handler
    // prints a stack back trace when called and we are not
    // interested in the background thread's stack and b)
    // it seems to have confused glib/libebook enough to
    // access invalid memory and segfault when it gets the SIGALRM.
    sigset_t blocked;
    sigemptyset(&blocked);
    sigaddset(&blocked, SIGALRM);
    pthread_sigmask(SIG_BLOCK, &blocked, nullptr);

    GMainLoop *mainloop = g_main_loop_new(nullptr, TRUE);
    if (mainloop) {
        g_main_loop_run(mainloop);
        g_main_loop_unref(mainloop);
    }
    return nullptr;
}
#endif
void SyncContext::startLoopThread()
{
#ifdef RUN_GLIB_LOOP
    // when using Evolution we must have a running main loop,
    // otherwise loss of connection won't be reported to us
    static pthread_t loopthread;
    static bool loopthreadrunning;
    if (!loopthreadrunning) {
        loopthreadrunning = !pthread_create(&loopthread, nullptr, mainLoopThread, nullptr);
    }
#endif
}
SyncSource *SyncContext::findSource(const std::string &name)
{
    if (!m_activeContext || !m_activeContext->m_sourceListPtr) {
        return nullptr;
    }
    const char *realname = strrchr(name.c_str(), m_findSourceSeparator);
    if (realname) {
        realname++;
    } else {
        realname = name.c_str();
    }
    return (*m_activeContext->m_sourceListPtr)[realname];
}

SyncContext *SyncContext::findContext(const char *sessionName)
{
    return m_activeContext;
}
void SyncContext::initSources(SourceList &sourceList)
{
    list<string> configuredSources = getSyncSources();
    map<string, string> subSources;

    // Disambiguate source names because we have multiple with the same
    // name active?
    string contextName;
    if (m_localSync) {
        contextName = getContextName();
    }

    // Phase 1: check all virtual sync sources
    for (const string &name: configuredSources) {
        std::shared_ptr<PersistentSyncSourceConfig> sc(getSyncSourceConfig(name));
        SyncSourceNodes source = getSyncSourceNodes(name);
        std::string sync = sc->getSync();
        SyncMode mode = StringToSyncMode(sync);
        if (mode != SYNC_NONE) {
            SourceType sourceType = SyncSource::getSourceType(source);
            if (sourceType.m_backend == "virtual") {
                // This is a virtual sync source, check and enable the
                // referenced sub syncsources here.
                SyncSourceParams params(name, source, std::shared_ptr<SyncConfig>(this, SyncConfigNOP()), contextName);
                std::shared_ptr<VirtualSyncSource> vSource = std::shared_ptr<VirtualSyncSource>(new VirtualSyncSource(params));
                std::vector<std::string> mappedSources = vSource->getMappedSources();
                for (std::string source: mappedSources) {
                    // check whether the mapped source is really available
                    std::shared_ptr<PersistentSyncSourceConfig> source_config
                        = getSyncSourceConfig(source);
                    if (!source_config || !source_config->exists()) {
                        Exception::throwError(SE_HERE,
                                              StringPrintf("Virtual datastore \"%s\" references a nonexistent datasource \"%s\".", name.c_str(), source.c_str()));
                    }
                    pair<map<string, string>::iterator, bool> res = subSources.insert(make_pair(source, name));
                    if (!res.second) {
                        Exception::throwError(SE_HERE,
                                              StringPrintf("Datastore \"%s\" included in the virtual datastores \"%s\" and \"%s\". It can only be included in one virtual datastore at a time.",
                                                           source.c_str(), res.first->second.c_str(), name.c_str()));
                    }
                }
                FilterConfigNode::ConfigFilter vFilter;
                vFilter["sync"] = sync;
                if (!m_serverMode) {
                    // must set special URI for clients so that
                    // engine knows about superdatastore and its
                    // URI
                    vFilter["uri"] = string("<") + vSource->getName() + ">" + vSource->getURINonEmpty();
                }
                for (std::string source: mappedSources) {
                    setConfigFilter(false, source, vFilter);
                }
                sourceList.addSource(vSource);
            }
        }
    }

    for (const string &name: configuredSources) {
        std::shared_ptr<PersistentSyncSourceConfig> sc(getSyncSourceConfig(name));
        SyncSourceNodes source = getSyncSourceNodes(name);
        if (!sc->isDisabled()) {
            SourceType sourceType = SyncSource::getSourceType(source);
            if (sourceType.m_backend != "virtual") {
                SyncSourceParams params(name,
                                        source,
                                        std::shared_ptr<SyncConfig>(this, SyncConfigNOP()),
                                        contextName);
                auto syncSource = SyncSource::createSource(params);
                if (!syncSource) {
                    Exception::throwError(SE_HERE, name + ": type unknown");
                }
                if (subSources.find(name) != subSources.end()) {
                    syncSource->recordVirtualSource(subSources[name]);
                }
                sourceList.addSource(std::move(syncSource));
            }
        } else {
            // the Synthesis engine is never going to see this source,
            // therefore we have to mark it as 100% complete and
            // "done"
            class DummySyncSource source(name, contextName);
            source.recordFinalSyncMode(SYNC_NONE);
            displaySourceProgress(source,
                                  SyncSourceEvent(sysync::PEV_PREPARING, 0, 0, 0),
                                  true);
            displaySourceProgress(source,
                                  SyncSourceEvent(sysync::PEV_ITEMPROCESSED, 0, 0, 0),
                                  true);
            displaySourceProgress(source,
                                  SyncSourceEvent(sysync::PEV_ITEMRECEIVED, 0, 0, 0),
                                  true);
            displaySourceProgress(source,
                                  SyncSourceEvent(sysync::PEV_ITEMSENT, 0, 0, 0),
                                  true);
            displaySourceProgress(source,
                                  SyncSourceEvent(sysync::PEV_SYNCEND, 0, 0, 0),
                                  true);
        }
    }
}
XML config: use configuration composed from fragments (MB #7712)
This patch replaces src/syncclient_sample_config.xml with a
combination of src/syncevo/configs/syncevolution.xml and the
config fragments that are shared with Synthesis upstream.
These fragments are installed in /usr/share/syncevolution/xml (or
the corresponding data path). From there they are read at runtime
to compose the final XML configuration. Users can copy individual files
into the corresponding directory hierarchy rooted at
$XDG_CONFIG_HOME/syncevolution-xml to replace individual fragments.
New fragments can be added there or in /usr/share.
For testing, these two directories can be overridden with the
SYNCEVOLUTION_XML_CONFIG_DIR env variable. No tests have been added
for this yet. There's also no documentation about it except this
commit message - add something to the HACKING guide once this
new concept stabilizes.
Developers can add new fragments in the source tree, invoke make and run the
resulting binary in client mode. As before, a complete config is included
in the binary. However, it is only sufficient for SyncML client mode.
For server mode, the files are expected to be installed (no need to maintain
a list of files in a Makefile for that) or SYNCEVOLUTION_XML_CONFIG_DIR
must be set.
At the moment, the following sub-directories are scanned for .xml files:
- the root directory to find syncevolution.xml
- datatypes, datatypes/client, datatypes/server
- scripting, scripting/client, scripting/server
- remoterules, remoterules/client, remoterules/server
Files inside "client" or "server" sub-directories are only used
when assembling a config for the corresponding mode of operation.
The goal of this patch is to simplify config sharing with Synthesis
(individual files are easier to manage than the monolithic one), to
share files between client and server with the possibility to add
mode-specific files, and to allow users to extend the XML
configuration. The most likely use case for the latter is support for
more devices.
Previously, remote rules for the different devices listed in
syncserv_sample_config.xml were not used by SyncEvolution.
This patch moves the ZYB remote rule into a client-specific remote rule,
thus removing a complaint from libsynthesis about the unknown <client>
element when running as server.
Because we are using the unified upstream config, some parts of the config
have changed:
- There is a SYNCLVL field in all field list. This is currently unused
by SyncEvolution, but doesn't hurt either.
- A new iCalendar 2.0 all-day sanity check was added (for older Oracle servers?).
- The CATEGORIES definition in vBookmark was extended.
- some comment and white space changes
Because this is such a fundamental change, extra care was taken to
minimize and verify the config changes. Here's the command which compares
old and new config for clients plus its output:
$ update-samples.pl syncevolution.xml client | diff -c -b syncclient_sample_config.xml -
***************
*** 31,42 ****
<scripting>
<looptimeout>5</looptimeout>
- <function><![CDATA[
- // create a UID
- string newuid() {
- return "syuid" + NUMFORMAT(RANDOM(1000000),6,"0") + "." + (string)MILLISECONDS(NOW());
- }
- ]]></function>
<macro name="VCARD_BEFOREWRITE_SCRIPT_EVOLUTION"><![CDATA[
// a wordaround for cellphone in evolution. for incoming contacts, if there is only one CELL,
// strip the HOME or WORK flag from it. Evolution then should show it. */
--- 30,35 ----
***************
*** 118,123 ****
--- 111,124 ----
}
]]></macro>
+ <function><![CDATA[
+ // create a UID
+ string newuid() {
+ return "syuid" + NUMFORMAT(RANDOM(1000000),6,"0") + "." + (string)MILLISECONDS(NOW());
+ }
+ ]]></function>
+
+
<!-- define script macros for scripts that are used by both vCalendar 1.0 and iCalendar 2.0 -->
<macro name="VCALENDAR_INCOMING_SCRIPT"><![CDATA[
***************
*** 145,150 ****
--- 146,158 ----
DTSTART = CONVERTTOUSERZONE(DTSTART);
MAKEALLDAY(DTSTART,DTEND,i);
}
+ else {
+ // iCalendar 2.0 - only if DTSTART is a date-only value this really is an allday
+ if (ISDATEONLY(DTSTART)) {
+ // reshape to make sure we don't have invalid zero-duration alldays (old OCS 9 servers)
+ MAKEALLDAY(DTSTART,DTEND,i);
+ }
+ }
// Make sure that all EXDATE times are in the same timezone as the start
// time. Some servers send them as UTC, which is all fine and well, but
***************
*** 265,275 ****
</scripting>
-
<datatypes>
-
<!-- list of internal fields representing vCard data -->
<fieldlist name="contacts">
<field name="REV" type="timestamp" compare="never" age="yes"/>
<!-- Name elements -->
--- 274,283 ----
</scripting>
<datatypes>
<!-- list of internal fields representing vCard data -->
<fieldlist name="contacts">
+ <field name="SYNCLVL" type="integer" compare="never"/>
<field name="REV" type="timestamp" compare="never" age="yes"/>
<!-- Name elements -->
***************
*** 680,689 ****
$VCARD_INCOMING_NAMECHANGE_SCRIPT
]]></incomingscript>
</datatype>
-
-
<!-- common field list for events and todos (both represented by vCalendar/iCalendar) -->
<fieldlist name="calendar">
<field name="ISEVENT" type="integer" compare="always"/>
<field name="DMODIFIED" type="timestamp" compare="never" age="yes"/>
--- 688,696 ----
$VCARD_INCOMING_NAMECHANGE_SCRIPT
]]></incomingscript>
</datatype>
<!-- common field list for events and todos (both represented by vCalendar/iCalendar) -->
<fieldlist name="calendar">
+ <field name="SYNCLVL" type="integer" compare="never"/>
<field name="ISEVENT" type="integer" compare="always"/>
<field name="DMODIFIED" type="timestamp" compare="never" age="yes"/>
***************
*** 787,793 ****
<subprofile onlyformode="standard" name="VTIMEZONE" mode="vtimezones"/>
! <!-- sub-profile for todoz -->
<subprofile name="VTODO" nummandatory="1" showifselectedonly="yes" field="ISEVENT" value="0">
<property name="LAST-MODIFIED" suppressempty="yes">
--- 792,798 ----
<subprofile onlyformode="standard" name="VTIMEZONE" mode="vtimezones"/>
! <!-- sub-profile for tasks -->
<subprofile name="VTODO" nummandatory="1" showifselectedonly="yes" field="ISEVENT" value="0">
<property name="LAST-MODIFIED" suppressempty="yes">
***************
*** 1394,1401 ****
<!-- non-standard properties -->
! <property name="CATEGORIES">
! <value field="CATEGORIES"/>
</property>
<property name="CLASS" suppressempty="yes">
--- 1394,1402 ----
<!-- non-standard properties -->
! <!-- inherit CATEGORIES from vCard 3.0, i.e. comma separated -->
! <property name="CATEGORIES" values="list" valueseparator="," altvalueseparator=";">
! <value field="CATEGORIES" combine=","/>
</property>
<property name="CLASS" suppressempty="yes">
***************
*** 1416,1435 ****
<use profile="vBookmark"/>
</datatype>
! <fieldlists/>
! <profiles/>
! <datatypes/>
</datatypes>
<clientorserver/>
-
- <client type="plugin">
- <remoterule name="ZYB">
- <manufacturer>ZYB</manufacturer>
- <model>ZYB</model>
- <!-- information to disable anchors checking -->
- <lenientmode>yes</lenientmode>
- </remoterule>
- </client>
-
</sysync_config>
--- 1417,1424 ----
<use profile="vBookmark"/>
</datatype>
!
</datatypes>
<clientorserver/>
</sysync_config>
// XML configuration converted to C string constants
extern "C" {
<field name="DMODIFIED" type="timestamp" compare="never" age="yes"/>
***************
*** 787,793 ****
<subprofile onlyformode="standard" name="VTIMEZONE" mode="vtimezones"/>
! <!-- sub-profile for todoz -->
<subprofile name="VTODO" nummandatory="1" showifselectedonly="yes" field="ISEVENT" value="0">
<property name="LAST-MODIFIED" suppressempty="yes">
--- 792,798 ----
<subprofile onlyformode="standard" name="VTIMEZONE" mode="vtimezones"/>
! <!-- sub-profile for tasks -->
<subprofile name="VTODO" nummandatory="1" showifselectedonly="yes" field="ISEVENT" value="0">
<property name="LAST-MODIFIED" suppressempty="yes">
***************
*** 1394,1401 ****
<!-- non-standard properties -->
! <property name="CATEGORIES">
! <value field="CATEGORIES"/>
</property>
<property name="CLASS" suppressempty="yes">
--- 1394,1402 ----
<!-- non-standard properties -->
! <!-- inherit CATEGORIES from vCard 3.0, i.e. comma separated -->
! <property name="CATEGORIES" values="list" valueseparator="," altvalueseparator=";">
! <value field="CATEGORIES" combine=","/>
</property>
<property name="CLASS" suppressempty="yes">
***************
*** 1416,1435 ****
<use profile="vBookmark"/>
</datatype>
! <fieldlists/>
! <profiles/>
! <datatypes/>
</datatypes>
<clientorserver/>
-
- <client type="plugin">
- <remoterule name="ZYB">
- <manufacturer>ZYB</manufacturer>
- <model>ZYB</model>
- <!-- information to disable anchors checking -->
- <lenientmode>yes</lenientmode>
- </remoterule>
- </client>
-
</sysync_config>
--- 1417,1424 ----
<use profile="vBookmark"/>
</datatype>
!
</datatypes>
<clientorserver/>
</sysync_config>
2010-02-02 21:29:53 +01:00
// including all known fragments for a client
extern const char *SyncEvolutionXMLClient;
// the remote rules for a client
extern const char *SyncEvolutionXMLClientRules;
2009-02-06 17:52:18 +01:00
}
/**
* helper class which scans directories for
* XML config files
*/
class XMLFiles
{
  public:
    enum Category {
        MAIN,        /**< files directly under searched directories */
        DATATYPES,   /**< inside datatypes and datatypes/<mode> */
        SCRIPTING,   /**< inside scripting and scripting/<mode> */
        REMOTERULES, /**< inside remoterules and remoterules/<mode> */
        MAX_CATEGORY
    };

    /** search file system for XML config fragments */
    void scan(const string &mode);

    /** datatypes, scripts and rules concatenated, empty if none found */
    string get(Category category);

    /** main file, typically "syncevolution.xml", empty if not found */
    string get(const string &file);

    static const string m_syncevolutionXML;

  private:
    /* base name as sort key + full file path, iterating is done in lexical order */
    StringMap m_files[MAX_CATEGORY];

    /**
     * scan a specific directory for main files directly inside it
     * and inside datatypes, scripting, remoterules;
     * it is not an error when it does not exist or is not a directory
     */
    void scanRoot(const string &mode, const string &dir);
/**
     * scan a datatypes/scripting/remoterules sub-directory,
     * including the <mode> sub-directory
*/
    void scanFragments(const string &mode, const string &dir, Category category);

    /**
     * add all .xml files to the right hash, overwriting old entries
     */
    void addFragments(const string &dir, Category category);
};

const string XMLFiles::m_syncevolutionXML("syncevolution.xml");

void XMLFiles::scan(const string &mode)
{
    const char *dir = getenv("SYNCEVOLUTION_XML_CONFIG_DIR");
    /*
     * read either one or the other, so that testing can run without
     * accidentally reading installed files
     */
    if (dir) {
        scanRoot(mode, dir);
    } else {
        scanRoot(mode, XML_CONFIG_DIR);
        scanRoot(mode, SubstEnvironment("${XDG_CONFIG_HOME}/syncevolution-xml"));
    }
}
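The environment override in scan() can be exercised in isolation. The sketch below is a hypothetical standalone helper (configRoot is not part of XMLFiles) showing the same getenv-with-fallback pattern: the override directory wins when the variable is set, otherwise the compiled-in default is used.

```cpp
#include <cstdlib>
#include <string>

// Hypothetical helper mirroring the lookup in XMLFiles::scan():
// prefer the directory named by SYNCEVOLUTION_XML_CONFIG_DIR,
// fall back to the compiled-in default path otherwise.
static std::string configRoot(const char *compiledDefault)
{
    const char *dir = std::getenv("SYNCEVOLUTION_XML_CONFIG_DIR");
    return dir ? dir : compiledDefault;
}
```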
2009-02-06 17:52:18 +01:00
XML config: use configuration composed from fragments (MB #7712)
This patch replaces src/syncclient_sample_config.xml with a
combination of src/syncevo/configs/syncevolution.xml and the
config fragments that are shared with Synthesis upstream.
These fragments are installed in /usr/share/syncevolution/xml (or
the corresponding data path). From there they are read at runtime
to compose the final XML configuration. Users can copy individual files
into the corresponding directory hierarchy rooted at
$XDG_CONFIG_HOME/syncevolution-xml to replace individual fragments.
New fragments can be added there or in /usr/share.
For testing, these two directories can be overridden with the
SYNCEVOLUTION_XML_CONFIG_DIR env variable. No tests have been added
for this yet. There's also no documentation about it except this
commit message - add something to the HACKING guide once this
new concept stabilizes.
Developers can add new fragments in the source tree, invoke make and run the
resulting binary in client mode. As before, a complete config is included
in the binary. However, it is only sufficient for SyncML client mode.
For server mode, the files are expected to be installed (no need to maintain
a list of files in a Makefile for that) or SYNCEVOLUTION_XML_CONFIG_DIR
must be set.
At the moment, the following sub-directories are scanned for .xml files:
- the root directory to find syncevolution.xml
- datatypes, datatypes/client, datatypes/server
- scripting, scripting/client, scripting/server
- remoterules, remoterules/client, remoterules/server
Files inside "client" or "server" sub-directories are only used
when assembling a config for the corresponding mode of operation.
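The directory layout above can be sketched as a simple "last root wins" lookup; collectFragments below is a hypothetical illustration using std::filesystem, not SyncEvolution code (the real implementation uses XMLFiles/ReadDir). Because later roots overwrite earlier entries of the same file name, a fragment under $XDG_CONFIG_HOME/syncevolution-xml replaces the installed copy of the same name.

```cpp
#include <filesystem>
#include <map>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Hypothetical sketch: scan <root>/<subdir> in each root in order,
// collecting *.xml files into a name -> path map. Later roots win,
// which is how user fragments override installed ones.
static std::map<std::string, std::string>
collectFragments(const std::vector<fs::path> &roots, const std::string &subdir)
{
    std::map<std::string, std::string> files;
    for (const fs::path &root : roots) {
        fs::path dir = root / subdir;
        if (!fs::is_directory(dir)) {
            continue; // missing directories are simply skipped
        }
        for (const auto &entry : fs::directory_iterator(dir)) {
            if (entry.path().extension() == ".xml") {
                files[entry.path().filename().string()] = entry.path().string();
            }
        }
    }
    return files;
}
```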
The goal of this patch is to simplify config sharing with Synthesis
(individual files are easier to manage than the monolithic one), to
share files between client and server with the possibility to add
mode-specific files, and to allow users to extend the XML
configuration. The most likely use case for the latter is support for
more devices.
Previously, remote rules for the different devices listed in
syncserv_sample_config.xml were not used by SyncEvolution.
This patch moves the ZYB remote rule into a client-specific remote rule,
thus removing a complaint from libsynthesis about the unknown <client>
element when running as server.
Because we are using the unified upstream config, some parts of the config
have changed:
- There is a SYNCLVL field in all field lists. This is currently unused
by SyncEvolution, but doesn't hurt either.
- A new iCalendar 2.0 all-day sanity check was added (for older Oracle servers?).
- The CATEGORIES definition in vBookmark was extended.
- some comment and white space changes
Because this is such a fundamental change, extra care was taken to
minimize and verify the config changes. Here's the command which compares
old and new config for clients plus its output:
$ update-samples.pl syncevolution.xml client | diff -c -b syncclient_sample_config.xml -
***************
*** 31,42 ****
<scripting>
<looptimeout>5</looptimeout>
- <function><![CDATA[
- // create a UID
- string newuid() {
- return "syuid" + NUMFORMAT(RANDOM(1000000),6,"0") + "." + (string)MILLISECONDS(NOW());
- }
- ]]></function>
<macro name="VCARD_BEFOREWRITE_SCRIPT_EVOLUTION"><![CDATA[
// a wordaround for cellphone in evolution. for incoming contacts, if there is only one CELL,
// strip the HOME or WORK flag from it. Evolution then should show it. */
--- 30,35 ----
***************
*** 118,123 ****
--- 111,124 ----
}
]]></macro>
+ <function><![CDATA[
+ // create a UID
+ string newuid() {
+ return "syuid" + NUMFORMAT(RANDOM(1000000),6,"0") + "." + (string)MILLISECONDS(NOW());
+ }
+ ]]></function>
+
+
<!-- define script macros for scripts that are used by both vCalendar 1.0 and iCalendar 2.0 -->
<macro name="VCALENDAR_INCOMING_SCRIPT"><![CDATA[
***************
*** 145,150 ****
--- 146,158 ----
DTSTART = CONVERTTOUSERZONE(DTSTART);
MAKEALLDAY(DTSTART,DTEND,i);
}
+ else {
+ // iCalendar 2.0 - only if DTSTART is a date-only value this really is an allday
+ if (ISDATEONLY(DTSTART)) {
+ // reshape to make sure we don't have invalid zero-duration alldays (old OCS 9 servers)
+ MAKEALLDAY(DTSTART,DTEND,i);
+ }
+ }
// Make sure that all EXDATE times are in the same timezone as the start
// time. Some servers send them as UTC, which is all fine and well, but
***************
*** 265,275 ****
</scripting>
-
<datatypes>
-
<!-- list of internal fields representing vCard data -->
<fieldlist name="contacts">
<field name="REV" type="timestamp" compare="never" age="yes"/>
<!-- Name elements -->
--- 274,283 ----
</scripting>
<datatypes>
<!-- list of internal fields representing vCard data -->
<fieldlist name="contacts">
+ <field name="SYNCLVL" type="integer" compare="never"/>
<field name="REV" type="timestamp" compare="never" age="yes"/>
<!-- Name elements -->
***************
*** 680,689 ****
$VCARD_INCOMING_NAMECHANGE_SCRIPT
]]></incomingscript>
</datatype>
-
-
<!-- common field list for events and todos (both represented by vCalendar/iCalendar) -->
<fieldlist name="calendar">
<field name="ISEVENT" type="integer" compare="always"/>
<field name="DMODIFIED" type="timestamp" compare="never" age="yes"/>
--- 688,696 ----
$VCARD_INCOMING_NAMECHANGE_SCRIPT
]]></incomingscript>
</datatype>
<!-- common field list for events and todos (both represented by vCalendar/iCalendar) -->
<fieldlist name="calendar">
+ <field name="SYNCLVL" type="integer" compare="never"/>
<field name="ISEVENT" type="integer" compare="always"/>
<field name="DMODIFIED" type="timestamp" compare="never" age="yes"/>
***************
*** 787,793 ****
<subprofile onlyformode="standard" name="VTIMEZONE" mode="vtimezones"/>
! <!-- sub-profile for todoz -->
<subprofile name="VTODO" nummandatory="1" showifselectedonly="yes" field="ISEVENT" value="0">
<property name="LAST-MODIFIED" suppressempty="yes">
--- 792,798 ----
<subprofile onlyformode="standard" name="VTIMEZONE" mode="vtimezones"/>
! <!-- sub-profile for tasks -->
<subprofile name="VTODO" nummandatory="1" showifselectedonly="yes" field="ISEVENT" value="0">
<property name="LAST-MODIFIED" suppressempty="yes">
***************
*** 1394,1401 ****
<!-- non-standard properties -->
! <property name="CATEGORIES">
! <value field="CATEGORIES"/>
</property>
<property name="CLASS" suppressempty="yes">
--- 1394,1402 ----
<!-- non-standard properties -->
! <!-- inherit CATEGORIES from vCard 3.0, i.e. comma separated -->
! <property name="CATEGORIES" values="list" valueseparator="," altvalueseparator=";">
! <value field="CATEGORIES" combine=","/>
</property>
<property name="CLASS" suppressempty="yes">
***************
*** 1416,1435 ****
<use profile="vBookmark"/>
</datatype>
! <fieldlists/>
! <profiles/>
! <datatypes/>
</datatypes>
<clientorserver/>
-
- <client type="plugin">
- <remoterule name="ZYB">
- <manufacturer>ZYB</manufacturer>
- <model>ZYB</model>
- <!-- information to disable anchors checking -->
- <lenientmode>yes</lenientmode>
- </remoterule>
- </client>
-
</sysync_config>
--- 1417,1424 ----
<use profile="vBookmark"/>
</datatype>
!
</datatypes>
<clientorserver/>
</sysync_config>
2010-02-02 21:29:53 +01:00
void XMLFiles::scanRoot(const string &mode, const string &dir)
{
    addFragments(dir, MAIN);
    scanFragments(mode, dir + "/scripting", SCRIPTING);
    scanFragments(mode, dir + "/datatypes", DATATYPES);
    scanFragments(mode, dir + "/remoterules", REMOTERULES);
}

void XMLFiles::scanFragments(const string &mode, const string &dir, Category category)
{
    addFragments(dir, category);
    addFragments(dir + "/" + mode, category);
}

void XMLFiles::addFragments(const string &dir, Category category)
{
    if (!isDir(dir)) {
        return;
    }
    ReadDir content(dir);
    for (const string &file : content) {
        if (boost::ends_with(file, ".xml")) {
            m_files[category][file] = dir + "/" + file;
        }
    }
}

string XMLFiles::get(Category category)
{
    string res;
    for (const StringPair &entry : m_files[category]) {
        string content;
        ReadFile(entry.second, content);
        res += content;
    }
    return res;
}

string XMLFiles::get(const string &file)
{
    string res;
    auto entry = m_files[MAIN].find(file);
XML config: use configuration composed from fragments (MB #7712)
This patch replaces src/syncclient_sample_config.xml with a
combination of src/syncevo/configs/syncevolution.xml and the
config fragments that are shared with Synthesis upstream.
These fragments are installed in /usr/share/syncevolution/xml (or
the corresponding data path). From there they are read at runtime
to compose the final XML configuration. Users can copy individual files
into the corresponding directory hierarchy rooted at
$XDG_CONFIG_HOME/syncevolution-xml to replace individual fragments.
New fragments can be added there or in /usr/share.
For testing, these two directories can be overridden with the
SYNCEVOLUTION_XML_CONFIG_DIR env variable. No tests have been added
for this yet. There's also no documentation about it except this
commit message - add something to the HACKING guide once this
new concept stabilizes.
Developers can add new fragments in the source tree, invoke make and run the
resulting binary in client mode. As before, a complete config is included
in the binary. However, it is only sufficient for SyncML client mode.
For server mode, the files are expected to be installed (no need to maintain
a list of files in a Makefile for that) or SYNCEVOLUTION_XML_CONFIG_DIR
must be set.
At the moment, the following sub-directories are scanned for .xml files:
- the root directory to find syncevolution.xml
- datatypes, datatypes/client, datatypes/server
- scripting, scripting/client, scripting/server
- remoterules, remoterules/client, remoterules/server
Files inside "client" or "server" sub-directories are only used
when assembling a config for the corresponding mode of operation.
The goal of this patch is to simplify config sharing with Synthesis
(individual files are easier to manage than the monolitithic one), to
share files between client and server with the possibility to add
mode-specific files, and to allow users to extend the XML
configuration. The most likely use case for the latter is support for
more devices.
Previously, remote rules for the different devices listed in
syncserv_sample_config.xml were not used by SyncEvolution.
This patch moves the ZYB remote rule into a client-specific remote rule,
thus removing a complaint from libsynthesis about the unknown <client>
element when running as server.
Because we are using the unified upstream config, some parts of the config
have changed:
- There is a SYNCLVL field in all field list. This is currently unused
by SyncEvolution, but doesn't hurt either.
- A new iCalendar 2.0 all-day sanity check was added (for older Oracle servers?).
- The CATEGORIES defition in vBookmark was extended.
- some comment and white space changes
Because this is such fundamental change, extra care was taken to
minimize and verify the config changes. Here's the command which compares
old and new config for clients plus its output:
$ update-samples.pl syncevolution.xml client | diff -c -b syncclient_sample_config.xml -
***************
*** 31,42 ****
<scripting>
<looptimeout>5</looptimeout>
- <function><![CDATA[
- // create a UID
- string newuid() {
- return "syuid" + NUMFORMAT(RANDOM(1000000),6,"0") + "." + (string)MILLISECONDS(NOW());
- }
- ]]></function>
<macro name="VCARD_BEFOREWRITE_SCRIPT_EVOLUTION"><![CDATA[
// a wordaround for cellphone in evolution. for incoming contacts, if there is only one CELL,
// strip the HOME or WORK flag from it. Evolution then should show it. */
--- 30,35 ----
***************
*** 118,123 ****
--- 111,124 ----
}
]]></macro>
+ <function><![CDATA[
+ // create a UID
+ string newuid() {
+ return "syuid" + NUMFORMAT(RANDOM(1000000),6,"0") + "." + (string)MILLISECONDS(NOW());
+ }
+ ]]></function>
+
+
<!-- define script macros for scripts that are used by both vCalendar 1.0 and iCalendar 2.0 -->
<macro name="VCALENDAR_INCOMING_SCRIPT"><![CDATA[
***************
*** 145,150 ****
--- 146,158 ----
DTSTART = CONVERTTOUSERZONE(DTSTART);
MAKEALLDAY(DTSTART,DTEND,i);
}
+ else {
+ // iCalendar 2.0 - only if DTSTART is a date-only value this really is an allday
+ if (ISDATEONLY(DTSTART)) {
+ // reshape to make sure we don't have invalid zero-duration alldays (old OCS 9 servers)
+ MAKEALLDAY(DTSTART,DTEND,i);
+ }
+ }
// Make sure that all EXDATE times are in the same timezone as the start
// time. Some servers send them as UTC, which is all fine and well, but
***************
*** 265,275 ****
</scripting>
-
<datatypes>
-
<!-- list of internal fields representing vCard data -->
<fieldlist name="contacts">
<field name="REV" type="timestamp" compare="never" age="yes"/>
<!-- Name elements -->
--- 274,283 ----
</scripting>
<datatypes>
<!-- list of internal fields representing vCard data -->
<fieldlist name="contacts">
+ <field name="SYNCLVL" type="integer" compare="never"/>
<field name="REV" type="timestamp" compare="never" age="yes"/>
<!-- Name elements -->
***************
*** 680,689 ****
$VCARD_INCOMING_NAMECHANGE_SCRIPT
]]></incomingscript>
</datatype>
-
-
<!-- common field list for events and todos (both represented by vCalendar/iCalendar) -->
<fieldlist name="calendar">
<field name="ISEVENT" type="integer" compare="always"/>
<field name="DMODIFIED" type="timestamp" compare="never" age="yes"/>
--- 688,696 ----
$VCARD_INCOMING_NAMECHANGE_SCRIPT
]]></incomingscript>
</datatype>
<!-- common field list for events and todos (both represented by vCalendar/iCalendar) -->
<fieldlist name="calendar">
+ <field name="SYNCLVL" type="integer" compare="never"/>
<field name="ISEVENT" type="integer" compare="always"/>
<field name="DMODIFIED" type="timestamp" compare="never" age="yes"/>
***************
*** 787,793 ****
<subprofile onlyformode="standard" name="VTIMEZONE" mode="vtimezones"/>
! <!-- sub-profile for todoz -->
<subprofile name="VTODO" nummandatory="1" showifselectedonly="yes" field="ISEVENT" value="0">
<property name="LAST-MODIFIED" suppressempty="yes">
--- 792,798 ----
<subprofile onlyformode="standard" name="VTIMEZONE" mode="vtimezones"/>
! <!-- sub-profile for tasks -->
<subprofile name="VTODO" nummandatory="1" showifselectedonly="yes" field="ISEVENT" value="0">
<property name="LAST-MODIFIED" suppressempty="yes">
***************
*** 1394,1401 ****
<!-- non-standard properties -->
! <property name="CATEGORIES">
! <value field="CATEGORIES"/>
</property>
<property name="CLASS" suppressempty="yes">
--- 1394,1402 ----
<!-- non-standard properties -->
! <!-- inherit CATEGORIES from vCard 3.0, i.e. comma separated -->
! <property name="CATEGORIES" values="list" valueseparator="," altvalueseparator=";">
! <value field="CATEGORIES" combine=","/>
</property>
<property name="CLASS" suppressempty="yes">
***************
*** 1416,1435 ****
<use profile="vBookmark"/>
</datatype>
! <fieldlists/>
! <profiles/>
! <datatypes/>
</datatypes>
<clientorserver/>
-
- <client type="plugin">
- <remoterule name="ZYB">
- <manufacturer>ZYB</manufacturer>
- <model>ZYB</model>
- <!-- information to disable anchors checking -->
- <lenientmode>yes</lenientmode>
- </remoterule>
- </client>
-
</sysync_config>
--- 1417,1424 ----
<use profile="vBookmark"/>
</datatype>
!
</datatypes>
<clientorserver/>
</sysync_config>
    if (entry != m_files[MAIN].end()) {
        ReadFile(entry->second, res);
    }
    return res;
}
static void substTag(string &xml, const string &tagname, const string &replacement, bool replaceElement = false)
{
    string tag;
    size_t index;

    tag.reserve(tagname.size() + 3);
    tag += "<";
    tag += tagname;
    tag += "/>";
    index = xml.find(tag);
    if (index != xml.npos) {
        string tmp;
        tmp.reserve(tagname.size() * 2 + 2 + 3 + replacement.size());
        if (!replaceElement) {
            tmp += "<";
            tmp += tagname;
            tmp += ">";
        }
        tmp += replacement;
        if (!replaceElement) {
            tmp += "</";
            tmp += tagname;
            tmp += ">";
        }
        xml.replace(index, tag.size(), tmp);
    }
}

static void substTag(string &xml, const string &tagname, const char *replacement, bool replaceElement = false)
{
    substTag(xml, tagname, std::string(replacement), replaceElement);
}

template <class T> void substTag(string &xml, const string &tagname, const T replacement, bool replaceElement = false)
{
    stringstream str;
    str << replacement;
    substTag(xml, tagname, str.str(), replaceElement);
}
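The substTag() overloads implement a simple placeholder substitution: a self-closing tag in the template is either wrapped into a full element around the replacement text, or replaced verbatim. A minimal standalone sketch of that technique, assuming each placeholder occurs at most once (substTagDemo is an illustrative name, not part of SyncEvolution):

```cpp
#include <string>

// Sketch of the placeholder substitution used by substTag() above:
// a self-closing placeholder like <scripting/> becomes
// <scripting>...</scripting> by default, or is replaced by the new
// text verbatim when replaceElement is true.
static void substTagDemo(std::string &xml, const std::string &tagname,
                         const std::string &replacement, bool replaceElement = false)
{
    std::string tag = "<" + tagname + "/>";
    std::string::size_type index = xml.find(tag);
    if (index != std::string::npos) {
        std::string tmp;
        if (!replaceElement) {
            tmp += "<" + tagname + ">";
        }
        tmp += replacement;
        if (!replaceElement) {
            tmp += "</" + tagname + ">";
        }
        xml.replace(index, tag.size(), tmp);
    }
}
```

A placeholder that is missing from the template is silently left alone, which is what allows optional fragments in the composed configuration.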
XML config: use configuration composed from fragments (MB #7712)
This patch replaces src/syncclient_sample_config.xml with a
combination of src/syncevo/configs/syncevolution.xml and the
config fragments that are shared with Synthesis upstream.
These fragments are installed in /usr/share/syncevolution/xml (or
the corresponding data path). From there they are read at runtime
to compose the final XML configuration. Users can copy individual files
into the corresponding directory hierarchy rooted at
$XDG_CONFIG_HOME/syncevolution-xml to replace individual fragments.
New fragments can be added there or in /usr/share.
For testing, these two directories can be overridden with the
SYNCEVOLUTION_XML_CONFIG_DIR env variable. No tests have been added
for this yet. There's also no documentation about it except this
commit message - add something to the HACKING guide once this
new concept stabilizes.
Developers can add new fragments in the source tree, invoke make and run the
resulting binary in client mode. As before, a complete config is included
in the binary. However, it is only sufficient for SyncML client mode.
For server mode, the files are expected to be installed (no need to maintain
a list of files in a Makefile for that) or SYNCEVOLUTION_XML_CONFIG_DIR
must be set.
At the moment, the following sub-directories are scanned for .xml files:
- the root directory to find syncevolution.xml
- datatypes, datatypes/client, datatypes/server
- scripting, scripting/client, scripting/server
- remoterules, remoterules/client, remoterules/server
Files inside "client" or "server" sub-directories are only used
when assembling a config for the corresponding mode of operation.
The goal of this patch is to simplify config sharing with Synthesis
(individual files are easier to manage than the monolithic one), to
share files between client and server with the possibility to add
mode-specific files, and to allow users to extend the XML
configuration. The most likely use case for the latter is support for
more devices.
Previously, remote rules for the different devices listed in
syncserv_sample_config.xml were not used by SyncEvolution.
This patch moves the ZYB remote rule into a client-specific remote rule,
thus removing a complaint from libsynthesis about the unknown <client>
element when running as server.
Because we are using the unified upstream config, some parts of the config
have changed:
- There is a SYNCLVL field in all field lists. This is currently unused
by SyncEvolution, but doesn't hurt either.
- A new iCalendar 2.0 all-day sanity check was added (for older Oracle servers?).
- The CATEGORIES definition in vBookmark was extended.
- some comment and white space changes
Because this is such a fundamental change, extra care was taken to
minimize and verify the config changes. Here's the command which compares
old and new config for clients plus its output:
$ update-samples.pl syncevolution.xml client | diff -c -b syncclient_sample_config.xml -
***************
*** 31,42 ****
<scripting>
<looptimeout>5</looptimeout>
- <function><![CDATA[
- // create a UID
- string newuid() {
- return "syuid" + NUMFORMAT(RANDOM(1000000),6,"0") + "." + (string)MILLISECONDS(NOW());
- }
- ]]></function>
<macro name="VCARD_BEFOREWRITE_SCRIPT_EVOLUTION"><![CDATA[
// a wordaround for cellphone in evolution. for incoming contacts, if there is only one CELL,
// strip the HOME or WORK flag from it. Evolution then should show it. */
--- 30,35 ----
***************
*** 118,123 ****
--- 111,124 ----
}
]]></macro>
+ <function><![CDATA[
+ // create a UID
+ string newuid() {
+ return "syuid" + NUMFORMAT(RANDOM(1000000),6,"0") + "." + (string)MILLISECONDS(NOW());
+ }
+ ]]></function>
+
+
<!-- define script macros for scripts that are used by both vCalendar 1.0 and iCalendar 2.0 -->
<macro name="VCALENDAR_INCOMING_SCRIPT"><![CDATA[
***************
*** 145,150 ****
--- 146,158 ----
DTSTART = CONVERTTOUSERZONE(DTSTART);
MAKEALLDAY(DTSTART,DTEND,i);
}
+ else {
+ // iCalendar 2.0 - only if DTSTART is a date-only value this really is an allday
+ if (ISDATEONLY(DTSTART)) {
+ // reshape to make sure we don't have invalid zero-duration alldays (old OCS 9 servers)
+ MAKEALLDAY(DTSTART,DTEND,i);
+ }
+ }
// Make sure that all EXDATE times are in the same timezone as the start
// time. Some servers send them as UTC, which is all fine and well, but
***************
*** 265,275 ****
</scripting>
-
<datatypes>
-
<!-- list of internal fields representing vCard data -->
<fieldlist name="contacts">
<field name="REV" type="timestamp" compare="never" age="yes"/>
<!-- Name elements -->
--- 274,283 ----
</scripting>
<datatypes>
<!-- list of internal fields representing vCard data -->
<fieldlist name="contacts">
+ <field name="SYNCLVL" type="integer" compare="never"/>
<field name="REV" type="timestamp" compare="never" age="yes"/>
<!-- Name elements -->
***************
*** 680,689 ****
$VCARD_INCOMING_NAMECHANGE_SCRIPT
]]></incomingscript>
</datatype>
-
-
<!-- common field list for events and todos (both represented by vCalendar/iCalendar) -->
<fieldlist name="calendar">
<field name="ISEVENT" type="integer" compare="always"/>
<field name="DMODIFIED" type="timestamp" compare="never" age="yes"/>
--- 688,696 ----
$VCARD_INCOMING_NAMECHANGE_SCRIPT
]]></incomingscript>
</datatype>
<!-- common field list for events and todos (both represented by vCalendar/iCalendar) -->
<fieldlist name="calendar">
+ <field name="SYNCLVL" type="integer" compare="never"/>
<field name="ISEVENT" type="integer" compare="always"/>
<field name="DMODIFIED" type="timestamp" compare="never" age="yes"/>
***************
*** 787,793 ****
<subprofile onlyformode="standard" name="VTIMEZONE" mode="vtimezones"/>
! <!-- sub-profile for todoz -->
<subprofile name="VTODO" nummandatory="1" showifselectedonly="yes" field="ISEVENT" value="0">
<property name="LAST-MODIFIED" suppressempty="yes">
--- 792,798 ----
<subprofile onlyformode="standard" name="VTIMEZONE" mode="vtimezones"/>
! <!-- sub-profile for tasks -->
<subprofile name="VTODO" nummandatory="1" showifselectedonly="yes" field="ISEVENT" value="0">
<property name="LAST-MODIFIED" suppressempty="yes">
***************
*** 1394,1401 ****
<!-- non-standard properties -->
! <property name="CATEGORIES">
! <value field="CATEGORIES"/>
</property>
<property name="CLASS" suppressempty="yes">
--- 1394,1402 ----
<!-- non-standard properties -->
! <!-- inherit CATEGORIES from vCard 3.0, i.e. comma separated -->
! <property name="CATEGORIES" values="list" valueseparator="," altvalueseparator=";">
! <value field="CATEGORIES" combine=","/>
</property>
<property name="CLASS" suppressempty="yes">
***************
*** 1416,1435 ****
<use profile="vBookmark"/>
</datatype>
! <fieldlists/>
! <profiles/>
! <datatypes/>
</datatypes>
<clientorserver/>
-
- <client type="plugin">
- <remoterule name="ZYB">
- <manufacturer>ZYB</manufacturer>
- <model>ZYB</model>
- <!-- information to disable anchors checking -->
- <lenientmode>yes</lenientmode>
- </remoterule>
- </client>
-
</sysync_config>
--- 1417,1424 ----
<use profile="vBookmark"/>
</datatype>
!
</datatypes>
<clientorserver/>
</sysync_config>
void SyncContext::getConfigTemplateXML(const string &mode,
                                       string &xml,
                                       string &rules,
                                       string &configname)
{
    XMLFiles files;

    files.scan(mode);
    xml = files.get(files.m_syncevolutionXML);
    if (xml.empty()) {
        if (mode != "client") {
            SE_THROW(files.m_syncevolutionXML + " not found");
        }
        configname = "builtin XML configuration";
        xml = SyncEvolutionXMLClient;
        rules = SyncEvolutionXMLClientRules;
    } else {
        configname = "XML configuration files";
        rules = files.get(XMLFiles::REMOTERULES);
        substTag(xml, "datatypes",
                 files.get(XMLFiles::DATATYPES) +
                 "    <fieldlists/>\n    <profiles/>\n    <datatypedefs/>\n");
        substTag(xml, "scripting", files.get(XMLFiles::SCRIPTING));
    }
}
void SyncContext::getConfigXML(bool isSync, string &xml, string &configname)
{
    string rules;
    getConfigTemplateXML(m_serverMode ? "server" : "client",
                         xml,
                         rules,
                         configname);

    string tag;
    size_t index;
    unsigned long hash = 0;

    std::set<std::string> flags = getSyncMLFlags();
    bool noctcap = flags.find("noctcap") != flags.end();
    bool norestart = flags.find("norestart") != flags.end();
PBAP: incremental sync (FDO #59551)
Depending on the SYNCEVOLUTION_PBAP_SYNC env variable, syncing reads
all properties as configured ("all"), excludes photos ("text") or
first text, then all ("incremental").
When excluding photos, only known properties get requested. This
avoids issues with phones which reject the request when enabling
properties via the bit flags. This also helps with
"databaseFormat=^PHOTO".
When excluding photos, the vcard merge script as used by EDS ensures
that existing photo data is preserved. This only works during a slow
sync (merge script not called otherwise, okay for PBAP because it
always syncs in slow sync) and EDS (other backends do not use the
merge script, okay at the moment because PIM Manager is hard-coded to
use EDS).
The PBAP backend must be aware of the PBAP sync mode and request a
second cycle, which again must be a slow sync. This only works because
the sync engine is aware of the special mode and sets a new session
variable "keepPhotoData". It would be better to have the PBAP backend
send CTCap with PHOTO marked as not supported for text-only syncs and
enabled when sending PHOTO data, but that is considerably harder to
implement (CTCap cannot be adjusted at runtime).
beginSync() may only ask for a slow sync when not already called
for one. That's what the command line tool does when accessing
items. It fails when getting the 508 status.
The original goal of overlapping syncing with download has not been
achieved yet. It turned out that all item IDs get requested before
syncing starts, which thus depends on downloading all items in the current
implementation. Can be fixed by making up IDs based on the number of
existing items (see GetSize() in PBAP) and then downloading later when
the data is needed.
    const char *PBAPSyncMode = getenv("SYNCEVOLUTION_PBAP_SYNC");
    bool keepPhotoData = PBAPSyncMode &&
        (boost::iequals(PBAPSyncMode, "incremental") ||
         boost::iequals(PBAPSyncMode, "text"));
    std::string sessioninitscript =
        "    <sessioninitscript><![CDATA[\n"
        "      // these variables are possibly modified by rule scripts\n"
        "      TIMESTAMP mindate; // earliest date remote party can handle\n"
        "      INTEGER retransfer_body; // if set to true, body is re-sent to client when message is moved from outbox to sent\n"
        "      mindate=EMPTY; // no limit by default\n"
        "      retransfer_body=FALSE; // normally, do not retransfer email body (and attachments) when moving items to sent box\n"
        "      INTEGER delayedabort;\n"
        "      delayedabort = FALSE;\n"
        "      INTEGER alarmTimeToUTC;\n"
        "      alarmTimeToUTC = FALSE;\n"
        "      INTEGER addInternetEmail;\n"
        "      addInternetEmail = FALSE;\n"
        "      INTEGER stripUID;\n"
        "      stripUID = FALSE;\n"
" INTEGER keepPhotoData; \n "
" keepPhotoData = "
// Keep local photos in first cycle when using special sync
// mode for PBAP. PBAP source will request second cycle if it
// has contacts whose photo data was not donwloaded. Then we
// will disable this special handling for that cycle and photo
// can be updated and removed normally.
+ std : : string ( keepPhotoData ? " TRUE " : " FALSE " ) + " ; \n "
2009-12-15 18:19:14 +01:00
" ]]></sessioninitscript> \n " ;
    ostringstream clientorserver;
    if (m_serverMode) {
        clientorserver <<
            "  <server type='plugin'>\n"
            "    <plugin_module>SyncEvolution</plugin_module>\n"
            "    <plugin_sessionauth>yes</plugin_sessionauth>\n"
            "    <plugin_deviceadmin>yes</plugin_deviceadmin>\n";
        InitState<unsigned int> configrequestmaxtime = getRequestMaxTime();
        unsigned int requestmaxtime;
        if (configrequestmaxtime.wasSet()) {
            // Explicitly set, use it regardless of the kind of sync.
            // We allow this even if thread support was not available,
            // because if a user enables it explicitly, it's probably
            // for a good reason (= failing client), in which case
            // risking multithreading issues is preferable.
            requestmaxtime = configrequestmaxtime.get();
        } else if (m_remoteInitiated || m_localSync) {
            // We initiated the sync (local sync, Bluetooth). The client
            // should not time out, so there is no need for intermediate
            // message sending.
            //
            // To avoid potential problems and get a single log file,
            // avoid it and multithreading by default.
            requestmaxtime = 0;
        } else {
            // We were contacted by an HTTP client. Reply to client
            // not later than 120 seconds while storage initializes
            // in a background thread.
#ifdef HAVE_THREAD_SUPPORT
            requestmaxtime = 120; // default in seconds
#else
            requestmaxtime = 0;
#endif
        }
        if (requestmaxtime) {
            clientorserver <<
                "    <multithread>yes</multithread>\n"
                "    <requestmaxtime>" << requestmaxtime << "</requestmaxtime>\n";
        } else {
            clientorserver <<
                "    <multithread>no</multithread>\n";
        }
        clientorserver <<
            "\n" <<
            sessioninitscript <<
            "    <sessiontimeout>300</sessiontimeout>\n"
            "\n";
        // do not send respuri if over bluetooth
        if (boost::starts_with(getUsedSyncURL(), "obex-bt://")) {
            clientorserver << "    <sendrespuri>no</sendrespuri>\n"
                "\n";
        }
        clientorserver << "    <syncmodeextensions>" << (norestart ? "no" : "yes") << "</syncmodeextensions>\n";
        if (noctcap) {
            clientorserver << "    <showctcapproperties>no</showctcapproperties>\n"
                "\n";
        }
        clientorserver << "    <defaultauth/>\n"
            "\n"
            "    <datastore/>\n"
            "\n"
            "    <remoterules/>\n"
            "  </server>\n";
    } else {
        clientorserver <<
            "  <client type='plugin'>\n"
            "    <binfilespath>$(binfilepath)</binfilespath>\n"
            "    <multithread>no</multithread>\n"
            "    <defaultauth/>\n";
        if (getRefreshSync()) {
            clientorserver <<
                "    <preferslowsync>no</preferslowsync>\n";
        }
        clientorserver <<
            "\n";

        string syncMLVersion(getSyncMLVersion());
        if (!syncMLVersion.empty()) {
            clientorserver << "    <defaultsyncmlversion>"
                << syncMLVersion.c_str() << "</defaultsyncmlversion>\n";
        }

        clientorserver << "    <syncmodeextensions>" << (norestart ? "no" : "yes") << "</syncmodeextensions>\n";
        if (noctcap) {
            clientorserver << "    <showctcapproperties>no</showctcapproperties>\n"
                "\n";
        }
        clientorserver << sessioninitscript <<
            // SyncEvolution has traditionally not folded long lines in
            // vCard. Testing showed that servers still have problems with
            // it, so avoid it by default
            "    <donotfoldcontent>yes</donotfoldcontent>\n"
            "\n"
            "    <fakedeviceid/>\n"
            "\n"
            "    <datastore/>\n"
            "\n"
            "    <remoterules/>\n"
            "  </client>\n";
    }
    substTag(xml,
             "clientorserver",
             clientorserver.str(),
             true);
tag = " <debug/> " ;
index = xml . find ( tag ) ;
if ( index ! = xml . npos ) {
stringstream debug ;
bool logging = ! m_sourceListPtr - > getLogdir ( ) . empty ( ) ;
2009-06-03 20:39:16 +02:00
int loglevel = getLogLevel ( ) ;
2013-10-01 09:26:41 +02:00
# ifdef USE_DLT
const char * useDLT = getenv ( " SYNCEVOLUTION_USE_DLT " ) ;
# else
2018-01-30 17:00:24 +01:00
static const char * useDLT = nullptr ;
2013-10-01 09:26:41 +02:00
# endif
2009-02-16 16:11:17 +01:00
debug < <
" <debug> \n "
2009-10-05 14:49:32 +02:00
// logpath is a config variable set by SyncContext::doSync()
2009-02-16 16:11:17 +01:00
" <logpath>$(logpath)</logpath> \n "
2013-10-01 09:26:41 +02:00
" <filename> " < < ( useDLT ? " " : LogfileBasename ) < < " </filename> " < <
2009-02-16 16:11:17 +01:00
" <logflushmode>flush</logflushmode> \n "
2013-10-01 09:26:41 +02:00
" <logformat> " < < ( useDLT ? " dlt " : " html " ) < < " </logformat> \n "
" <folding>auto</folding> \n " < <
( useDLT ?
" <timestamp>no</timestamp> \n "
" <timestampall>no</timestampall> \n " :
" <timestamp>yes</timestamp> \n "
" <timestampall>yes</timestampall> \n " ) < <
2009-02-16 16:11:17 +01:00
" <timedsessionlognames>no</timedsessionlognames> \n "
2013-04-24 12:00:45 +02:00
" <subthreadmode>separate</subthreadmode> \n "
2009-07-03 12:27:07 +02:00
" <logsessionstoglobal>yes</logsessionstoglobal> \n "
2009-02-16 16:11:17 +01:00
" <singlegloballog>yes</singlegloballog> \n " ;
2013-10-01 09:26:41 +02:00
# ifdef USE_DLT
if ( useDLT ) {
debug < <
// We have to enable all logging inside libsynthesis.
// The actual filtering then takes place inside DLT.
// Message logging is not supported.
" <enable option= \" all \" /> \n "
// Allow logging outside of sessions.
" <globallogs>yes</globallogs> \n "
// Don't try per-session logging, it all goes to DLT anyway.
" <sessionlogs>yes</sessionlogs> \n "
;
// Be extra verbose if currently enabled. Cannot be changed later on.
if ( atoi ( useDLT ) > DLT_LOG_DEBUG ) {
debug < <
" <enable option= \" userdata \" /> \n "
" <enable option= \" scripts \" /> \n " ;
}
if ( atoi ( useDLT ) > DLT_LOG_DEBUG ) {
debug < <
" <enable option= \" exotic \" /> \n " ;
}
} else
# endif // USE_DLT
if ( logging ) {
2009-02-16 16:11:17 +01:00
debug < <
" <sessionlogs>yes</sessionlogs> \n "
2009-06-03 20:39:16 +02:00
" <globallogs>yes</globallogs> \n " ;
debug < < " <msgdump> " < < ( loglevel > = 5 ? " yes " : " no " ) < < " </msgdump> \n " ;
debug < < " <xmltranslate> " < < ( loglevel > = 4 ? " yes " : " no " ) < < " </xmltranslate> \n " ;
if ( loglevel > = 3 ) {
debug < <
2012-02-03 17:37:29 +01:00
" <sourcelink>doxygen</sourcelink> \n "
2009-06-03 20:39:16 +02:00
" <enable option= \" all \" /> \n "
" <enable option= \" userdata \" /> \n "
" <enable option= \" scripts \" /> \n "
" <enable option= \" exotic \" /> \n " ;
}
2009-02-16 16:11:17 +01:00
} else {
debug < <
" <sessionlogs>no</sessionlogs> \n "
" <globallogs>no</globallogs> \n "
" <msgdump>no</msgdump> \n "
" <xmltranslate>no</xmltranslate> \n "
" <disable option= \" all \" /> " ;
}
debug < <
" </debug> \n " ;
xml . replace ( index , tag . size ( ) , debug . str ( ) ) ;
}
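The loglevel thresholds used above (message dumps only at level >= 5, XML translation at >= 4, full libsynthesis tracing at >= 3) can be condensed into a small helper. A sketch; debugFlags() is an illustrative name, not SyncEvolution API:

```cpp
#include <sstream>
#include <string>

// Maps the SyncEvolution loglevel to the corresponding libsynthesis
// <debug> settings: msgdump at >= 5, xmltranslate at >= 4, and the
// verbose trace options at >= 3.
static std::string debugFlags(int loglevel)
{
    std::ostringstream debug;
    debug << "<msgdump>" << (loglevel >= 5 ? "yes" : "no") << "</msgdump>\n";
    debug << "<xmltranslate>" << (loglevel >= 4 ? "yes" : "no") << "</xmltranslate>\n";
    if (loglevel >= 3) {
        debug << "<enable option=\"all\"/>\n";
    }
    return debug.str();
}
```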
    XMLConfigFragments fragments;
    tag = "<datastore/>";
    index = xml.find(tag);
    if (index != xml.npos) {
        stringstream datastores;

        for (SyncSource *source: *m_sourceListPtr) {
            string fragment;
            source->getDatastoreXML(fragment, fragments);

            string name;
            // Make sure that sub-datastores do not interfere with the global URI namespace
            // by adding a <superdatastore>/ prefix. That way we can have a "calendar"
            // alias for "calendar+todo" without conflicting with the underlying
            // "calendar", which will be called "calendar+todo/calendar" in the XML config.
            name = source->getVirtualSource();
            if (!name.empty()) {
                name += m_findSourceSeparator;
            }
            name += source->getName();

            datastores << "    <datastore name='" << name << "' type='plugin'>\n" <<
                "      <dbtypeid>" << source->getSynthesisID() << "</dbtypeid>\n" <<
                fragment;
            datastores << "      <resumesupport>on</resumesupport>\n";
            if (source->getOperations().m_writeBlob) {
                // BLOB support is essential for caching partially received items.
                datastores << "      <resumeitemsupport>on</resumeitemsupport>\n";
            }

            SyncMode mode = StringToSyncMode(source->getSync());
            if (source->getForceSlowSync()) {
                // we *want* a slow sync, but couldn't tell the client -> force it server-side
                datastores << "      <alertscript> FORCESLOWSYNC(); </alertscript>\n";
engine: local cache sync mode
This patch introduces support for true one-way syncing ("caching"):
the local datastore is meant to be an exact copy of the data on the
remote side. The assumption is that no modifications are ever made
locally outside of syncing. This is different from one-way sync modes,
which allow local changes and only temporarily disable sending them
to the remote side.
Another goal of the new mode is to avoid data writes as much as
possible.
This new mode only works on the server side of a sync, where the
engine has enough control over the data flow.
Most of the changes are in libsynthesis. SyncEvolution only needs to
enable the new mode, which is done via an extension of the "sync"
property:
- "local-cache-incremental" will do an incremental sync (if possible)
or a slow sync (otherwise). This is usually the right mode to use,
and thus has "local-cache" as alias.
- "local-cache-slow" will always do a slow sync. Useful for
debugging or after (accidentally) making changes on the server side.
An incremental sync will ignore such changes because they are not
meant to happen and thus leave client and server out-of-sync!
Both modes are recorded in the sync report of the local side. The
target side is the client and records the normal "two-way" or "slow"
sync modes.
With the current SyncEvolution contact field list, first, middle and
last name are used to find matches during any kind of slow sync. The
organization field is ignored for matching during the initial slow
sync and used in all following ones. That's okay, the difference won't
matter in practice because the initial slow sync in PBAP caching will
be done with no local data. The tests achieve the same result in both
cases by keeping the organization set in the reduced data set.
It's also okay to include the property in the comparison, because it
might help to distinguish between "John Doe" in different companies.
It might be worthwhile to add more fields as match criteria, for
example the birthday. Currently they are excluded, probably because
they are not trusted to be supported by SyncML peers. In caching mode
the situation is different, because all our data came from the peer.
The downside is that in cases where matching has to be done all the
time because change detection is not supported (PBAP), including the
birthday as a criterion will cause unnecessary contact removed/added
events (and thus disk IO) when a contact was originally created
without birthday locally and then a birthday gets added on the phone.
Testing is done as part of the D-Bus testing framework, because usually
this functionality will be used as part of the D-Bus server and writing
tests in Python is easier.
A new test class "TestLocalCache" contains the new tests. They include
tests for removing extra items during a slow sync (testItemRemoval),
adding new client items under various conditions (testItemAdd*) and
updating/removing an item during incremental syncing
(testItemUpdate/Delete*). Doing these changes during a slow sync could
also be tested (not currently covered).
The tests for removing properties (testPropertyRemoval*) cover
removing almost all contact properties during an initial slow sync, a
second slow sync (which is treated differently in libsynthesis, see
merge=always and merge=slowsync), and an incremental sync.
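The "sync" property extension described in this commit message can be illustrated with a minimal sketch. `parseCacheMode()` and the enum below are hypothetical stand-ins, not the actual `StringToSyncMode()` parser:

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch of mapping the extended "sync" property values to
// the local-cache modes described above; the real parser is StringToSyncMode().
enum CacheSyncMode { NOT_CACHE, CACHE_INCREMENTAL, CACHE_SLOW };

CacheSyncMode parseCacheMode(const std::string &sync)
{
    if (sync == "local-cache" ||            // alias for the usual mode
        sync == "local-cache-incremental") {
        return CACHE_INCREMENTAL;           // incremental if possible, else slow
    }
    if (sync == "local-cache-slow") {
        return CACHE_SLOW;                  // always slow: debugging, recovery
    }
    return NOT_CACHE;                       // a normal sync mode
}
```

The "local-cache" alias resolving to the incremental variant matches the recommendation above that incremental is usually the right mode.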
2012-08-23 14:25:55 +02:00
} else if (mode == SYNC_LOCAL_CACHE_SLOW ||
           mode == SYNC_LOCAL_CACHE_INCREMENTAL) {
    if (!m_serverMode) {
        SE_THROW("sync modes 'local-cache-*' are only supported on the server side");
    }
    datastores << "      <alertscript>SETREFRESHONLY(1); SETCACHEDATA(1);</alertscript>\n";
    // datastores << "      <datastoreinitscript>REFRESHONLY(); CACHEDATA(); SLOWSYNC(); ALERTCODE();</datastoreinitscript>\n";
2012-08-31 12:21:11 +02:00
} else if (mode != SYNC_SLOW &&
2011-10-24 19:52:01 +02:00
// slow-sync detection not implemented when running as server,
// not even when initiating the sync (direct sync with phone)
2009-12-17 15:56:56 +01:00
           !m_serverMode &&
2011-10-24 19:52:01 +02:00
// is implemented as "delete local data" + "slow sync",
// so a slow sync is acceptable in this case
2012-08-31 12:21:11 +02:00
           mode != SYNC_REFRESH_FROM_SERVER &&
           mode != SYNC_REFRESH_FROM_REMOTE &&
2010-03-12 09:34:28 +01:00
           // The forceSlow should be disabled if the sync session is
           // initiated by a remote peer (e.g. Server Alerted Sync).
           !m_remoteInitiated &&
2010-01-22 16:14:29 +01:00
           getPreventSlowSync() &&
2010-02-16 20:11:16 +01:00
           (!source->getOperations().m_isEmpty ||    // check is only relevant if we have local data;
            !source->getOperations().m_isEmpty())) { // if we cannot check, assume we have data
2009-12-15 18:19:14 +01:00
    // We are not expecting a slow sync => refuse to execute one.
    // This is the client-side check for this; the server must be handled
2009-12-17 15:56:56 +01:00
// differently (TODO, MB #2416).
2009-12-15 18:19:14 +01:00
    datastores <<
2010-01-29 19:43:50 +01:00
        "      <datastoreinitscript><![CDATA[\n"
        "           if (SLOWSYNC() && ALERTCODE() != 203) {\n" // SLOWSYNC() is true for acceptable refresh-from-client, check for that
        "              DEBUGMESSAGE(\"slow sync not expected by SyncEvolution, disabling datastore\");\n"
2010-02-04 21:30:54 +01:00
        "              ABORTDATASTORE(" << sysync::LOCERR_DATASTORE_ABORT << ");\n"
2010-01-29 19:43:50 +01:00
        "              // tell UI to abort instead of sending the next message\n"
        "              SETSESSIONVAR(\"delayedabort\", 1);\n"
        "           }\n"
        "      ]]></datastoreinitscript>\n";
2009-12-17 02:43:32 +01:00
}
2009-12-15 18:19:14 +01:00
support local sync (BMC #712)
Local sync is configured with a new syncURL = local://<context> where
<context> identifies the set of databases to synchronize with. The
URI of each source in the config identifies the source in that context
to synchronize with.
The databases in that context run a SyncML session as client. The
config itself is for a server. Reversing these roles is possible by
putting the config into the other context.
A sync is started by the server side, via the new LocalTransportAgent.
That agent forks, sets up the client side, then passes messages
back and forth via stream sockets. Stream sockets are useful because
unexpected peer shutdown can be detected.
Running the server side requires a few changes:
- do not send a SAN message, the client will start the
message exchange based on the config
- wait for that message before doing anything
The client side is more difficult:
- Per-peer config nodes do not exist in the target context.
They are stored in a hidden .<context> directory inside
the server config tree. This depends on the new "registering nodes
in the tree" feature. All nodes are hidden, because users
are not meant to edit any of them. Their name is intentionally
chosen like traditional nodes so that removing the config
also removes the new files.
- All relevant per-peer properties must be copied from the server
config (log level, printing changes, ...); they cannot be set
differently.
Because two separate SyncML sessions are used, we end up with
two normal session directories and log files.
The implementation is not complete yet:
- no glib support, so cannot be used in syncevo-dbus-server
- no support for CTRL-C and abort
- no interactive password entry for target sources
- unexpected slow syncs are detected on the client side, but
not reported properly on the server side
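The commit message notes that stream sockets are useful because unexpected peer shutdown can be detected. A minimal sketch of that property using a plain `socketpair()` (not the actual LocalTransportAgent code): once one end closes, `recv()` on the other end returns 0.

```cpp
#include <sys/socket.h>
#include <unistd.h>

// On a stream socket, recv() returning 0 bytes means end-of-stream,
// i.e. the peer closed its end (expectedly or not).
bool peerClosed(int fd)
{
    char buf[1];
    ssize_t got = recv(fd, buf, sizeof(buf), 0);
    return got == 0;
}

// Demonstrate with a socketpair standing in for the parent/child ends.
bool demoShutdownDetection()
{
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0) {
        return false;
    }
    close(fds[1]);               // "peer" goes away unexpectedly
    bool detected = peerClosed(fds[0]);
    close(fds[0]);
    return detected;
}
```

With datagram sockets there would be no such end-of-stream signal, which is why the commit message singles out stream sockets.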
2010-07-31 18:28:53 +02:00
if (m_serverMode && !m_localSync) {
2010-01-29 19:53:38 +01:00
    string uri = source->getURI();
    if (!uri.empty()) {
        datastores << "      <alias name='" << uri << "'/>";
    }
2009-12-29 08:13:45 +01:00
}
2009-12-15 18:19:14 +01:00
datastores << "    </datastore>\n\n";
2009-02-06 17:52:18 +01:00
}
2009-11-23 02:43:56 +01:00
/* If there is a superdatastore, add it here. */
2009-12-15 18:19:14 +01:00
// TODO: generate specific superdatastore contents (MB #8753);
// currently this only works for the Synthesis built-in events+tasks combination.
2018-01-16 17:17:34 +01:00
for (std::shared_ptr<VirtualSyncSource> vSource: m_sourceListPtr->getVirtualSources()) {
2009-12-15 10:08:01 +01:00
    std::string superType = vSource->getSourceType().m_format;
2009-11-23 02:43:56 +01:00
    std::string evoSyncSource = vSource->getDatabaseID();
    std::vector<std::string> mappedSources = unescapeJoinedString(evoSyncSource, ',');
2010-02-18 17:56:31 +01:00
    // always check for a consistent config
    SourceType sourceType = vSource->getSourceType();
2018-01-16 10:58:04 +01:00
    for (std::string source: mappedSources) {
2010-02-18 17:56:31 +01:00
        // check the data type
        SyncSource *subSource = (*m_sourceListPtr)[source];
        SourceType subType = subSource->getSourceType();
2010-03-18 10:23:33 +01:00
        // If no format is explicitly selected in the sub SyncSource, we
        // have no way to determine whether it works with the format
        // specified in the virtual SyncSource, thus no warning in this
        // case.
        if (!subType.m_format.empty() && (
            sourceType.m_format != subType.m_format ||
            sourceType.m_forceFormat != subType.m_forceFormat)) {
2013-04-08 19:17:36 +02:00
            SE_LOG_WARNING(NULL,
2014-07-28 15:29:41 +02:00
                           "Virtual datastore \"%s\" and sub datastore \"%s\" have different data formats. Will use the format of the virtual datastore.",
2011-01-18 15:07:46 +01:00
                           vSource->getDisplayName().c_str(), source.c_str());
2010-02-18 17:56:31 +01:00
}
2009-11-23 02:43:56 +01:00
}
    if (mappedSources.size() != 2) {
2014-07-28 15:29:41 +02:00
        vSource->throwError(SE_HERE, "virtual datastore currently only supports events+tasks combinations");
2009-11-23 02:43:56 +01:00
}
2010-02-15 18:03:56 +01:00
    string name = vSource->getName();
    datastores << "    <superdatastore name='" << name << "'>\n";
    datastores << "      <contains datastore='" << name << m_findSourceSeparator << mappedSources[0] << "'>\n"
2009-11-23 02:43:56 +01:00
               << "        <dispatchfilter>F.ISEVENT:=1</dispatchfilter>\n"
               << "        <guidprefix>e</guidprefix>\n"
               << "      </contains>\n"
2010-02-15 18:03:56 +01:00
               << "\n      <contains datastore='" << name << m_findSourceSeparator << mappedSources[1] << "'>\n"
2009-11-23 02:43:56 +01:00
               << "        <dispatchfilter>F.ISEVENT:=0</dispatchfilter>\n"
               << "        <guidprefix>t</guidprefix>\n"
               << "      </contains>\n";
2010-07-31 18:28:53 +02:00
    if (m_serverMode && !m_localSync) {
2010-02-15 14:24:11 +01:00
        string uri = vSource->getURI();
        if (!uri.empty()) {
            datastores << "      <alias name='" << uri << "'/>";
        }
    }
2010-03-03 07:47:40 +01:00
    if (vSource->getForceSlowSync()) {
        // we *want* a slow sync, but couldn't tell the client -> force it server-side
        datastores << "      <alertscript> FORCESLOWSYNC(); </alertscript>\n";
    }
2009-11-23 02:43:56 +01:00
    std::string typesupport;
    typesupport = vSource->getDataTypeSupport();
    datastores << "      <typesupport>\n"
               << typesupport
               << "      </typesupport>\n";
    datastores << "\n    </superdatastore>";
}
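The loop above routes events and tasks between the two sub-datastores via `<dispatchfilter>` on `F.ISEVENT` and distinct `<guidprefix>` values. A condensed sketch of the `<contains>` fragment generation for a hypothetical "calendar+todo" source (string building only, using '/' as a stand-in for `m_findSourceSeparator`; not the real code path):

```cpp
#include <sstream>
#include <string>

// Sketch: emit one <contains> entry the way the loop above does.
// Events get filter F.ISEVENT:=1 and GUID prefix "e", tasks get
// F.ISEVENT:=0 and prefix "t", so item IDs never collide.
std::string containsFragment(const std::string &super,
                             const std::string &sub,
                             bool isEvent)
{
    std::ostringstream out;
    out << "      <contains datastore='" << super << "/" << sub << "'>\n"
        << "        <dispatchfilter>F.ISEVENT:=" << (isEvent ? 1 : 0) << "</dispatchfilter>\n"
        << "        <guidprefix>" << (isEvent ? "e" : "t") << "</guidprefix>\n"
        << "      </contains>\n";
    return out.str();
}
```

The distinct GUID prefixes are what let the superdatastore map an incoming item ID back to the right sub-datastore.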
2009-04-24 10:37:56 +02:00
if (datastores.str().empty()) {
2009-07-03 12:27:07 +02:00
    // Add dummy datastore, the engine needs it. sync()
    // checks that we have a valid configuration if it is
    // really needed.
#if 0
    datastores << "   <datastore name=\"____dummy____\" type=\"plugin\">"
        "<plugin_module>SyncEvolution</plugin_module>"
        "<fieldmap fieldlist=\"contacts\"/>"
        "<typesupport>"
        "<use datatype=\"vCard30\"/>"
        "</typesupport>"
        "</datastore>";
#endif
2009-04-24 10:37:56 +02:00
}
2009-02-06 17:52:18 +01:00
xml.replace(index, tag.size(), datastores.str());
}
2009-02-24 11:30:27 +01:00
2009-06-25 14:54:11 +02:00
substTag(xml, "fieldlists", fragments.m_fieldlists.join(), true);
substTag(xml, "profiles", fragments.m_profiles.join(), true);
XML config: use configuration composed from fragments (MB #7712)
This patch replaces src/syncclient_sample_config.xml with a
combination of src/syncevo/configs/syncevolution.xml and the
config fragments that are shared with Synthesis upstream.
These fragments are installed in /usr/share/syncevolution/xml (or
the corresponding data path). From there they are read at runtime
to compose the final XML configuration. Users can copy individual files
into the corresponding directory hierarchy rooted at
$XDG_CONFIG_HOME/syncevolution-xml to replace individual fragments.
New fragments can be added there or in /usr/share.
For testing, these two directories can be overridden with the
SYNCEVOLUTION_XML_CONFIG_DIR env variable. No tests have been added
for this yet. There's also no documentation about it except this
commit message - add something to the HACKING guide once this
new concept stabilizes.
Developers can add new fragments in the source tree, invoke make and run the
resulting binary in client mode. As before, a complete config is included
in the binary. However, it is only sufficient for SyncML client mode.
For server mode, the files are expected to be installed (no need to maintain
a list of files in a Makefile for that) or SYNCEVOLUTION_XML_CONFIG_DIR
must be set.
At the moment, the following sub-directories are scanned for .xml files:
- the root directory to find syncevolution.xml
- datatypes, datatypes/client, datatypes/server
- scripting, scripting/client, scripting/server
- remoterules, remoterules/client, remoterules/server
Files inside "client" or "server" sub-directories are only used
when assembling a config for the corresponding mode of operation.
The goal of this patch is to simplify config sharing with Synthesis
(individual files are easier to manage than the monolithic one), to
share files between client and server with the possibility to add
mode-specific files, and to allow users to extend the XML
configuration. The most likely use case for the latter is support for
more devices.
Previously, remote rules for the different devices listed in
syncserv_sample_config.xml were not used by SyncEvolution.
This patch moves the ZYB remote rule into a client-specific remote rule,
thus removing a complaint from libsynthesis about the unknown <client>
element when running as server.
Because we are using the unified upstream config, some parts of the config
have changed:
- There is a SYNCLVL field in all field list. This is currently unused
by SyncEvolution, but doesn't hurt either.
- A new iCalendar 2.0 all-day sanity check was added (for older Oracle servers?).
- The CATEGORIES definition in vBookmark was extended.
- some comment and white space changes
Because this is such a fundamental change, extra care was taken to
minimize and verify the config changes. Here's the command which compares
old and new config for clients plus its output:
$ update-samples.pl syncevolution.xml client | diff -c -b syncclient_sample_config.xml -
***************
*** 31,42 ****
<scripting>
<looptimeout>5</looptimeout>
- <function><![CDATA[
- // create a UID
- string newuid() {
- return "syuid" + NUMFORMAT(RANDOM(1000000),6,"0") + "." + (string)MILLISECONDS(NOW());
- }
- ]]></function>
<macro name="VCARD_BEFOREWRITE_SCRIPT_EVOLUTION"><![CDATA[
// a wordaround for cellphone in evolution. for incoming contacts, if there is only one CELL,
// strip the HOME or WORK flag from it. Evolution then should show it. */
--- 30,35 ----
***************
*** 118,123 ****
--- 111,124 ----
}
]]></macro>
+ <function><![CDATA[
+ // create a UID
+ string newuid() {
+ return "syuid" + NUMFORMAT(RANDOM(1000000),6,"0") + "." + (string)MILLISECONDS(NOW());
+ }
+ ]]></function>
+
+
<!-- define script macros for scripts that are used by both vCalendar 1.0 and iCalendar 2.0 -->
<macro name="VCALENDAR_INCOMING_SCRIPT"><![CDATA[
***************
*** 145,150 ****
--- 146,158 ----
DTSTART = CONVERTTOUSERZONE(DTSTART);
MAKEALLDAY(DTSTART,DTEND,i);
}
+ else {
+ // iCalendar 2.0 - only if DTSTART is a date-only value this really is an allday
+ if (ISDATEONLY(DTSTART)) {
+ // reshape to make sure we don't have invalid zero-duration alldays (old OCS 9 servers)
+ MAKEALLDAY(DTSTART,DTEND,i);
+ }
+ }
// Make sure that all EXDATE times are in the same timezone as the start
// time. Some servers send them as UTC, which is all fine and well, but
***************
*** 265,275 ****
</scripting>
-
<datatypes>
-
<!-- list of internal fields representing vCard data -->
<fieldlist name="contacts">
<field name="REV" type="timestamp" compare="never" age="yes"/>
<!-- Name elements -->
--- 274,283 ----
</scripting>
<datatypes>
<!-- list of internal fields representing vCard data -->
<fieldlist name="contacts">
+ <field name="SYNCLVL" type="integer" compare="never"/>
<field name="REV" type="timestamp" compare="never" age="yes"/>
<!-- Name elements -->
***************
*** 680,689 ****
$VCARD_INCOMING_NAMECHANGE_SCRIPT
]]></incomingscript>
</datatype>
-
-
<!-- common field list for events and todos (both represented by vCalendar/iCalendar) -->
<fieldlist name="calendar">
<field name="ISEVENT" type="integer" compare="always"/>
<field name="DMODIFIED" type="timestamp" compare="never" age="yes"/>
--- 688,696 ----
$VCARD_INCOMING_NAMECHANGE_SCRIPT
]]></incomingscript>
</datatype>
<!-- common field list for events and todos (both represented by vCalendar/iCalendar) -->
<fieldlist name="calendar">
+ <field name="SYNCLVL" type="integer" compare="never"/>
<field name="ISEVENT" type="integer" compare="always"/>
<field name="DMODIFIED" type="timestamp" compare="never" age="yes"/>
***************
*** 787,793 ****
<subprofile onlyformode="standard" name="VTIMEZONE" mode="vtimezones"/>
! <!-- sub-profile for todoz -->
<subprofile name="VTODO" nummandatory="1" showifselectedonly="yes" field="ISEVENT" value="0">
<property name="LAST-MODIFIED" suppressempty="yes">
--- 792,798 ----
<subprofile onlyformode="standard" name="VTIMEZONE" mode="vtimezones"/>
! <!-- sub-profile for tasks -->
<subprofile name="VTODO" nummandatory="1" showifselectedonly="yes" field="ISEVENT" value="0">
<property name="LAST-MODIFIED" suppressempty="yes">
***************
*** 1394,1401 ****
<!-- non-standard properties -->
! <property name="CATEGORIES">
! <value field="CATEGORIES"/>
</property>
<property name="CLASS" suppressempty="yes">
--- 1394,1402 ----
<!-- non-standard properties -->
! <!-- inherit CATEGORIES from vCard 3.0, i.e. comma separated -->
! <property name="CATEGORIES" values="list" valueseparator="," altvalueseparator=";">
! <value field="CATEGORIES" combine=","/>
</property>
<property name="CLASS" suppressempty="yes">
***************
*** 1416,1435 ****
<use profile="vBookmark"/>
</datatype>
! <fieldlists/>
! <profiles/>
! <datatypes/>
</datatypes>
<clientorserver/>
-
- <client type="plugin">
- <remoterule name="ZYB">
- <manufacturer>ZYB</manufacturer>
- <model>ZYB</model>
- <!-- information to disable anchors checking -->
- <lenientmode>yes</lenientmode>
- </remoterule>
- </client>
-
</sysync_config>
--- 1417,1424 ----
<use profile="vBookmark"/>
</datatype>
!
</datatypes>
<clientorserver/>
</sysync_config>
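The fragment lookup described in this commit message (system directory, overridden per file by the user directory, both overridable via SYNCEVOLUTION_XML_CONFIG_DIR) boils down to per-filename precedence: the last directory scanned wins for each fragment name. A minimal sketch of that rule, with hypothetical directory paths, not the actual search code:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Sketch: given (directory, fragment file) pairs in scan order, later
// directories win per file name, so user fragments replace system ones.
std::map<std::string, std::string> composeFragments(
    const std::vector<std::pair<std::string, std::string>> &found)
{
    std::map<std::string, std::string> result;  // fragment name -> owning dir
    for (const auto &entry : found) {           // entry: (dir, file)
        result[entry.second] = entry.first;
    }
    return result;
}
```

Scanning the system directory first and the user directory second yields exactly the "users can copy individual files to replace individual fragments" behavior described above.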
2010-02-02 21:29:53 +01:00
substTag(xml, "datatypedefs", fragments.m_datatypes.join(), true);
2009-09-27 22:48:04 +02:00
substTag(xml, "remoterules",
2010-02-02 21:29:53 +01:00
rules +
2009-09-27 22:48:04 +02:00
         fragments.m_remoterules.join(),
         true);
2009-06-25 14:54:11 +02:00
2009-09-27 22:48:04 +02:00
if (m_serverMode) {
    // TODO: set the device ID for an OBEX server
} else {
    substTag(xml, "fakedeviceid", getDevID());
}
2009-03-17 16:46:53 +01:00
substTag(xml, "model", getMod());
substTag(xml, "manufacturer", getMan());
substTag(xml, "hardwareversion", getHwv());
// abuse (?) the firmware version to store the SyncEvolution version number
substTag(xml, "firmwareversion", getSwv());
substTag(xml, "devicetype", getDevType());
2014-09-08 10:52:00 +02:00
substTag(xml, "maxmsgsize", getMaxMsgSize().get());
substTag(xml, "maxobjsize", getMaxObjSize().get());
2009-09-27 22:48:04 +02:00
if (m_serverMode) {
2013-07-26 10:22:11 +02:00
    UserIdentity id = getSyncUser();
2009-09-29 22:41:06 +02:00
2010-10-29 11:12:11 +02:00
    /*
     * Do not check username/pwd if this is a local sync or uses the
2014-10-24 16:42:34 +02:00
     * Bluetooth transport. Credentials are needed for checking,
     * and IdentityProviderCredentials() throws an error when
     * called for a provider which does not support plain
     * credentials.
2010-10-29 11:12:11 +02:00
*/
2014-10-24 16:42:34 +02:00
    bool withauth = !m_localSync && !boost::starts_with(getUsedSyncURL(), "obex-bt");
    if (withauth) {
        Credentials cred = IdentityProviderCredentials(id, getSyncPassword());
        const string &user = cred.m_username;
        const string &password = cred.m_password;
        if (user.empty() && password.empty()) {
            withauth = false;
        }
    }
    if (withauth) {
2009-09-29 22:41:06 +02:00
        // require authentication with the configured password
        substTag(xml, "defaultauth",
2009-10-25 22:46:09 +01:00
                 "<requestedauth>md5</requestedauth>\n"
2009-11-11 10:50:26 +01:00
                 "<requiredauth>basic</requiredauth>\n"
2009-10-25 22:46:09 +01:00
                 "<autononce>yes</autononce>\n",
2009-09-29 22:41:06 +02:00
                 true);
    } else {
2014-10-24 16:42:34 +02:00
        if (id.wasSet()) {
            SE_LOG_WARNING(getConfigName(), "ignoring username %s, it is not needed",
                           id.toString().c_str());
        }
2009-10-25 22:46:09 +01:00
// no authentication required
2009-09-29 22:41:06 +02:00
        substTag(xml, "defaultauth",
2009-11-10 18:40:44 +01:00
                 "<logininitscript>return TRUE</logininitscript>\n"
2009-09-29 22:41:06 +02:00
                 "<requestedauth>none</requestedauth>\n"
                 "<requiredauth>none</requiredauth>\n"
                 "<autononce>yes</autononce>\n",
                 true);
    }
2009-09-27 22:48:04 +02:00
} else {
    substTag(xml, "defaultauth", getClientAuthType());
}
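The server-side authentication decision above can be condensed into a pure predicate. This is a sketch of the control flow only; `needsAuth()` is a hypothetical helper, and the real code additionally obtains the credentials via `IdentityProviderCredentials()`:

```cpp
#include <string>

// Sketch of the decision above: require SyncML authentication only for
// remote, non-Bluetooth sessions that actually have credentials configured.
bool needsAuth(bool localSync,
               const std::string &syncURL,
               const std::string &user,
               const std::string &password)
{
    if (localSync) {
        return false;                                // local sync: trusted peer
    }
    if (syncURL.compare(0, 7, "obex-bt") == 0) {
        return false;                                // Bluetooth: no credential check
    }
    return !(user.empty() && password.empty());      // need something to check against
}
```

When the predicate is false but a username was configured, the real code logs a warning that the username is ignored, matching the `id.wasSet()` branch above.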
2009-06-18 18:02:55 +02:00
2014-08-29 11:18:03 +02:00
if (isSync ||
    !getConfigDate().wasSet()) {
    // If the hash code of the main sync XML config has changed, the
    // content of the config has changed. Save the new hash
    // and regenerate the configdate. Also necessary when no config date
    // has ever been set.
    hash = Hash(xml.c_str());
    if (getHashCode() != hash) {
        setConfigDate();
        setHashCode(hash);
        flush();
    }
2009-06-18 18:02:55 +02:00
}
substTag(xml, "configdate", getConfigDate().c_str());
2009-02-06 17:52:18 +01:00
}
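The configdate logic at the end of the function only bumps the stored date when the hash of the generated XML differs from the stored one, which keeps repeated runs over an unchanged config idempotent. A standalone sketch of that pattern (`std::hash` stands in for the real `Hash()`, and the int timestamp is a stand-in for the real config date):

```cpp
#include <functional>
#include <string>

// Sketch: bump a stored "config date" only when the config content hash changes.
struct ConfigState {
    size_t hash = 0;
    int configDate = 0;   // stand-in for a timestamp
};

// Returns true if the state was updated, i.e. the config changed.
bool refreshConfigDate(ConfigState &state, const std::string &xml, int now)
{
    size_t h = std::hash<std::string>()(xml);  // real code uses Hash(xml.c_str())
    if (state.hash != h) {
        state.hash = h;
        state.configDate = now;
        return true;
    }
    return false;
}
```

Regenerating the same XML twice leaves the date untouched, so peers that key their change detection off the config date do not see spurious changes.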
2009-10-05 14:49:32 +02:00
SharedEngine SyncContext::createEngine()
2009-07-03 12:27:07 +02:00
{
    SharedEngine engine(new sysync::TEngineModuleBridge);
2009-09-27 22:48:04 +02:00
    // This instance of the engine is used outside of the sync session
    // itself for logging. doSync() then reinitializes it with a full
    // datastore configuration.
    engine.Connect(m_serverMode ?
#ifdef ENABLE_SYNCML_LINKED
                   // use Synthesis client or server engine that we were linked against
                   "[server:]" : "[]",
#else
                   // load engine dynamically
                   "server:libsynthesis.so.0" : "libsynthesis.so.0",
#endif
                   0,
2009-07-03 12:27:07 +02:00
                   sysync::DBG_PLUGIN_NONE |
                   sysync::DBG_PLUGIN_INT |
                   sysync::DBG_PLUGIN_DB |
                   sysync::DBG_PLUGIN_EXOT);
    SharedKey configvars = engine.OpenKeyByPath(SharedKey(), "/configvars");
2009-11-06 11:43:55 +01:00
    string logdir;
    if (m_sourceListPtr) {
        logdir = m_sourceListPtr->getLogdir();
    }
2009-07-03 12:27:07 +02:00
    engine.SetStrValue(configvars, "defout_path",
                       logdir.size() ? logdir : "/dev/null");
    engine.SetStrValue(configvars, "conferrpath", "console");
2009-09-27 22:48:04 +02:00
    engine.SetStrValue(configvars, "binfilepath", getSynthesisDatadir().c_str());
2009-07-03 12:27:07 +02:00
configvars . reset ( ) ;
return engine ;
}
2009-07-21 19:18:53 +02:00
namespace {
    void GnutlsLogFunction(int level, const char *str)
    {
        SE_LOG_DEBUG("GNUTLS", "level %d: %s", level, str);
    }
}

void SyncContext::initServer(const std::string &sessionID,
                             SharedBuffer data,
                             const std::string &messageType)
{
    m_serverMode = true;
    m_sessionID = sessionID;
    m_initialMessage = data;
    m_initialMessageType = messageType;
}
2009-11-06 11:43:55 +01:00
struct SyncContext::SyncMLMessageInfo
SyncContext::analyzeSyncMLMessage(const char *data, size_t len,
                                  const std::string &messageType)
{
config: share properties between peers, configuration view without peer
This patch makes the configuration layout with per-source and per-peer
properties the default for new configurations. Migrating old
configurations is not implemented. The command line has not
been updated at all (MB #8048). The D-Bus API is fairly complete,
only listing sessions independently of a peer is missing (MB #8049).
The key concept of this patch is that a pseudo-node implemented by
MultiplexConfigNode provides a view on all user-visible or hidden
properties. Based on the property name, it looks up the property
definition, picks one of the underlying nodes based on the property
visibility and sharing attributes, then reads and writes the property
via that node. Clearing properties is not needed and not implemented,
because of the uncertain semantics (really remove shared properties?!).
The "sync" property must be available both in the per-source config
(to pick a backend independently of a specific peer) and in the
per-peer configuration (to select a specific data format). This is
solved by making the property special (SHARED_AND_UNSHARED flag) and
then writing it into two nodes. Reading is done from the more specific
per-peer node, with the other node acting as fallback.
The MultiplexConfigNode has to implement the FilterConfigNode API
because it is used as one by the code which sets passwords in the
filter. For this to work, the base FilterConfigNode implementation must
use virtual method calls.
The TestDBusSessionConfig.testUpdateConfigError checks that the error
generated for an incorrect "sync" property contains the path of the
config.ini file. The meaning of the error message in this case is that
the wrong value is *for* that file, not that the property is already
wrong *in* the file, but that's okay.
The MultiplexConfigNode::getName() can only return a fixed name. To
satisfy the test and because it is the right choice at the moment for
all properties which might trigger such an error, it now is configured
so that it returns the most specific path of the non-shared
properties.
"syncevolution --print-config" shows errors that are in files. Wrong
command line parameters are rejected with a message that refers to the
command line parameter ("--source-property sync=foo").
A future enhancement would be to make the name depend on the
property (MB#8037).
Because an empty string is now a valid configuration name (referencing
the source properties without the per-peer properties) several checks
for such empty strings were removed. The corresponding tests were
updated or removed. Instead of talking about "server not found",
the more neutral name "configuration" is used. The new
TestMultipleConfigs.testSharing() covers the semantics of sharing
properties between multiple configs.
Access to non-existent nodes is routed into the new
DevNullConfigNode. It always returns an empty string when reading and
throws an error when trying to write into it. Unintentionally writing
into a config.ini file therefore became harder, compared with the
previous instantiation of SyncContext() with empty config name.
The parsing of incoming messages uses a SyncContext which is bound to
a VolatileConfigNode. This allows reading and writing of properties
without any risk of touching files on disk.
The patch which introduced the new config nodes was not complete yet
with regards to the new layout. Removing nodes and trees used the
wrong root path: getRootPath() refers to the most specific peer
config, m_root to the part without the peer path. SyncConfig must
distinguish between a view with peer-specific properties and one
without, which is done by setting the m_peerPath only if a peer was
selected. Copying properties must know whether writing peer-specific
properties ("unshared") is wanted, because trying to do it for a view
without those properties would trigger the DevNullConfigNode
exception.
SyncConfig::removeSyncSource() removes source properties both in the
shared part of the config and in *all* peers. This is used by
Session.SetConfig() for the case that the caller is a) setting instead
of updating the config and b) not providing any properties for the
source. This is clearly a risky operation which should not be done
when there are other peers which still use the source. We might have a
problem in our D-Bus API definition for "removing a peer
configuration" (MB #8059) because it always has an effect on other
peers.
The property registries were initialized implicitly before. With the
recent changes it happened that SyncContext was initialized to analyze
a SyncML message without initializing the registry, which caused
getRemoteDevID() to use a property where the hidden flag had not been
set yet.
Moving all of these additional flags into the property constructors is
awkward (which is why they are in the getRegistry() methods), so this
was fixed by initializing the properties in the SyncConfig
constructors by asking for the registries. Because there is no way to
access them except via the registry and SyncConfig instances (*), this
should ensure that the properties are valid when used.
(*) Exception are some properties which are declared publicly to have access
to their name. Nobody's perfect...
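The dispatch logic described above (pick the backing node from a property's sharing attribute, write SHARED_AND_UNSHARED properties into both nodes, read the per-peer node first with the shared node as fallback) can be sketched in a few lines. All names below are hypothetical simplifications for illustration, not the real MultiplexConfigNode API:

```cpp
#include <map>
#include <string>

// Hypothetical, simplified model of the multiplexing idea: a registry
// maps each property name to its sharing attribute, and reads/writes
// are routed to the shared or per-peer store accordingly.
enum Sharing { SHARED, UNSHARED, SHARED_AND_UNSHARED };

struct MiniMultiplexNode {
    std::map<std::string, Sharing> registry;     // property -> sharing attribute
    std::map<std::string, std::string> shared;   // shared node content
    std::map<std::string, std::string> perPeer;  // per-peer node content

    void set(const std::string &prop, const std::string &value) {
        Sharing s = registry.at(prop);
        // a SHARED_AND_UNSHARED property like "sync" is written into both nodes
        if (s == SHARED || s == SHARED_AND_UNSHARED) shared[prop] = value;
        if (s == UNSHARED || s == SHARED_AND_UNSHARED) perPeer[prop] = value;
    }

    std::string get(const std::string &prop) const {
        // the more specific per-peer node wins, the shared node is the fallback
        auto it = perPeer.find(prop);
        if (it != perPeer.end()) return it->second;
        auto it2 = shared.find(prop);
        return it2 != shared.end() ? it2->second : "";
    }
};
```

The sketch only shows the routing by sharing attribute; the real class additionally implements the FilterConfigNode API so that password filters can be applied to it.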
2009-11-13 20:02:44 +01:00
    SyncContext sync;
    SourceList sourceList(sync, false);
    sourceList.setLogLevel(SourceList::LOGGING_SUMMARY);
    sync.m_sourceListPtr = &sourceList;
    SwapContext syncSentinel(&sync);
    sync.initServer("", SharedBuffer(), "");
    SwapEngine swapengine(sync);
    sync.initEngine(false);

    sysync::TEngineProgressInfo progressInfo;
    sysync::uInt16 stepCmd = sysync::STEPCMD_GOTDATA;
    SharedSession session = sync.m_engine.OpenSession(sync.m_sessionID);
    SessionSentinel sessionSentinel(sync, session);

    sync.m_engine.WriteSyncMLBuffer(session, data, len);
    SharedKey sessionKey = sync.m_engine.OpenSessionKey(session);
    sync.m_engine.SetStrValue(sessionKey,
                              "contenttype",
                              messageType);

    // analyze main loop: runs until SessionStep() signals reply or error.
    // Will call our SynthesisDBPlugin callbacks, most importantly
    // SyncEvolution_Session_CheckDevice(), which records the device ID
    // for us.
    do {
        sync.m_engine.SessionStep(session, stepCmd, &progressInfo);
        switch (stepCmd) {
        case sysync::STEPCMD_OK:
        case sysync::STEPCMD_PROGRESS:
            stepCmd = sysync::STEPCMD_STEP;
            break;
        default:
            // whatever it is, cannot proceed
            break;
        }
    } while (stepCmd == sysync::STEPCMD_STEP);

    SyncMLMessageInfo info;
    info.m_deviceID = sync.getSyncDeviceID();
    return info;
}
2014-08-29 11:18:03 +02:00
void SyncContext::initEngine(bool isSync)
{
    string xml, configname;
    getConfigXML(isSync, xml, configname);
    try {
        m_engine.InitEngineXML(xml.c_str());
    } catch (const BadSynthesisResult &ex) {
        SE_LOG_ERROR(NULL,
                     "internal error, invalid XML configuration (%s):\n%s",
                     m_sourceListPtr && !m_sourceListPtr->empty() ?
                     "with datastores" :
                     "without datastores",
                     xml.c_str());
        throw;
    }
    if (isSync &&
        getLogLevel() >= 5) {
        SE_LOG_DEV(NULL, "Full XML configuration:\n%s", xml.c_str());
    }
}
Merge branch 'HARMATTAN-1-3-1'
Fetched the code and its history from the 1.3.1 archives at:
http://people.debian.org/~ovek/maemo/
http://people.debian.org/~ovek/harmattan/
Merged almost everything, except for Maemo/Harmattan specific build files:
autogen-maemo.sh builddeb buildsrc debian
The following changes were also removed, because they are either local
workarounds or merge artifacts which probably also don't belong into
the Maemo/Harmattan branch:
diff --git a/configure.ac b/configure.ac
index cb66617..2c4403c 100644
--- a/configure.ac
+++ b/configure.ac
@@ -44,7 +44,7 @@ if test "$enable_release_mode" = "yes"; then
AC_DEFINE(SYNCEVOLUTION_STABLE_RELEASE, 1, [binary is meant for end-users])
fi
-AM_INIT_AUTOMAKE([1.11.1 tar-ustar silent-rules subdir-objects -Wno-portability])
+AM_INIT_AUTOMAKE([subdir-objects -Wno-portability])
AM_PROG_CC_C_O
diff --git a/src/backends/webdav/CalDAVSource.cpp b/src/backends/webdav/CalDAVSource.cpp
index decd170..7d338ac 100644
--- a/src/backends/webdav/CalDAVSource.cpp
+++ b/src/backends/webdav/CalDAVSource.cpp
@@ -1282,6 +1282,7 @@ void CalDAVSource::Event::fixIncomingCalendar(icalcomponent *calendar)
// time.
bool ridInUTC = false;
const icaltimezone *zone = NULL;
+ icalcomponent *parent = NULL;
for (icalcomponent *comp = icalcomponent_get_first_component(calendar, ICAL_VEVENT_COMPONENT);
comp;
@@ -1295,6 +1296,7 @@ void CalDAVSource::Event::fixIncomingCalendar(icalcomponent *calendar)
// is parent event? -> remember time zone unless it is UTC
static const struct icaltimetype null = { 0 };
if (!memcmp(&rid, &null, sizeof(null))) {
+ parent = comp;
struct icaltimetype dtstart = icalcomponent_get_dtstart(comp);
if (!icaltime_is_utc(dtstart)) {
zone = icaltime_get_timezone(dtstart);
diff --git a/src/backends/webdav/CalDAVSource.h b/src/backends/webdav/CalDAVSource.h
index 517ac2f..fa7c2ca 100644
--- a/src/backends/webdav/CalDAVSource.h
+++ b/src/backends/webdav/CalDAVSource.h
@@ -45,6 +45,10 @@ class CalDAVSource : public WebDAVSource,
virtual void removeMergedItem(const std::string &luid);
virtual void flushItem(const string &uid);
virtual std::string getSubDescription(const string &uid, const string &subid);
+ virtual void updateSynthesisInfo(SynthesisInfo &info,
+ XMLConfigFragments &fragments) {
+ info.m_backendRule = "HAVE-SYNCEVOLUTION-EXDATE-DETACHED";
+ }
// implementation of SyncSourceLogging callback
virtual std::string getDescription(const string &luid);
Making SySync_ConsolePrintf a real instance inside SyncEvolution leads
to link errors in other configurations. It really has to be extern. Added
a comment to the master branch to make that more obvious:
-extern "C" { // without curly braces, g++ 4.2 thinks the variable is extern
- int (*SySync_ConsolePrintf)(FILE *stream, const char *format, ...);
-}
+// This is just the declaration. The actual function pointer instance
+// is inside libsynthesis, which, for historic purposes, doesn't define
+// it in its header files (yet).
+extern "C" int (*SySync_ConsolePrintf)(FILE *stream, const char *format, ...);
// This is just the declaration. The actual function pointer instance
// is inside libsynthesis, which, for historic purposes, doesn't define
// it in its header files (yet).
extern "C" int (*SySync_ConsolePrintf)(FILE *stream, const char *format, ...);

static int nopPrintf(FILE *stream, const char *format, ...) { return 0; }

extern "C"
{
    extern int (*SySync_CondTimedWait)(pthread_cond_t *cond, pthread_mutex_t *mutex, bool &aTerminated, long aMilliSecondsToWait);
}
#ifdef HAVE_GLIB

static gboolean timeout(gpointer data)
{
    // Call me again...
    return true;
}

static int CondTimedWaitGLib(pthread_cond_t * /* cond */, pthread_mutex_t *mutex,
                             bool &terminated, long milliSecondsToWait)
{
    int result = 0;

    // return abstime ? pthread_cond_timedwait(cond, mutex, abstime) : pthread_cond_wait(cond, mutex);
    try {
        pthread_mutex_unlock(mutex);
        SE_LOG_DEBUG(NULL, "wait for background thread: %lds", milliSecondsToWait);
        SuspendFlags &flags = SuspendFlags::getSuspendFlags();
        Timespec now = Timespec::system();
        Timespec wait(milliSecondsToWait / 1000, milliSecondsToWait % 1000);
        Timespec deadline = now + wait;

        // We don't need to react to thread shutdown immediately (only
        // called once per sync), so a relatively long check interval of
        // one second is okay.
        GLibEvent id(g_timeout_add_seconds(1, timeout, nullptr), "timeout");

        auto condTimedWaitContinue = [mutex, &terminated, milliSecondsToWait, &deadline, &flags, &result] () {
            // Thread has terminated?
            pthread_mutex_lock(mutex);
            if (terminated) {
                pthread_mutex_unlock(mutex);
                SE_LOG_DEBUG(NULL, "background thread completed");
                return false;
            }
            pthread_mutex_unlock(mutex);

            // Abort? Ignore when waiting for final thread shutdown, because
            // in that case we just get called again.
            if (milliSecondsToWait > 0 && flags.isAborted()) {
                SE_LOG_DEBUG(NULL, "give up waiting for background thread, aborted");
                // Signal error. libsynthesis then assumes that the thread still
                // runs and enters its parallel message sending, which eventually
                // returns control to us.
                result = 1;
                return false;
            }

            // Timeout?
            if (!milliSecondsToWait ||
                (milliSecondsToWait > 0 && deadline <= Timespec::system())) {
                SE_LOG_DEBUG(NULL, "give up waiting for background thread, timeout");
                result = 1;
                return false;
            }

            return true;
        };
        GRunWhile(condTimedWaitContinue);
    } catch (...) {
        Exception::handle(HANDLE_EXCEPTION_FATAL);
    }

    pthread_mutex_lock(mutex);
    return result;
}

#endif
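The continue/abort/timeout decisions made by `condTimedWaitContinue` above boil down to a small pure function. The following is a hypothetical distillation for illustration only (`checkWait` and its enum are not part of the real code); note how a negative `milliSecondsToWait` means "wait forever", which is why the abort check is skipped during final thread shutdown:

```cpp
// Hypothetical distillation of the loop-exit logic in
// condTimedWaitContinue: given the current state, decide whether the
// wait loop keeps running. GIVE_UP corresponds to result = 1, which
// tells libsynthesis to assume the thread is still running.
enum WaitDecision { KEEP_WAITING, THREAD_DONE, GIVE_UP };

WaitDecision checkWait(bool terminated, bool aborted,
                       long milliSecondsToWait, bool deadlinePassed)
{
    if (terminated) {
        return THREAD_DONE;                       // thread finished, result 0
    }
    if (milliSecondsToWait > 0 && aborted) {
        return GIVE_UP;                           // abort requested, result 1
    }
    if (!milliSecondsToWait ||
        (milliSecondsToWait > 0 && deadlinePassed)) {
        return GIVE_UP;                           // zero wait or deadline hit
    }
    return KEEP_WAITING;                          // negative wait = forever
}
```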
2010-11-10 15:26:47 +01:00
void SyncContext::initMain(const char *appname)
{
#if defined(HAVE_GLIB)
    // this is required when using glib directly or indirectly
#if !GLIB_CHECK_VERSION(2,36,0)
    g_type_init();
#endif
#if !GLIB_CHECK_VERSION(2,32,0)
    g_thread_init(nullptr);
#endif
    g_set_prgname(appname);

    // Initialize SuspendFlags singleton.
    SuspendFlags::getSuspendFlags();

    // redirect glib logging into our own logging
    g_log_set_default_handler(Logger::glogFunc, nullptr);

    // Only the main thread may use the default GMainContext.
    // Anything else is unsafe, see https://mail.gnome.org/archives/gtk-list/2013-April/msg00040.html
    // util.cpp:Sleep() checks this and uses the default context
    // when called by the main thread, otherwise falls back to
    // select(). GRunWhile() is always safe to use.
    g_main_context_acquire(nullptr);

    SySync_CondTimedWait = CondTimedWaitGLib;
#endif

    if (atoi(getEnv("SYNCEVOLUTION_DEBUG", "0")) > 3) {
        SySync_ConsolePrintf = Logger::sysyncPrintf;
    } else {
        SySync_ConsolePrintf = nopPrintf;
    }

    // Load backends.
    SyncSource::backendsInit();

    // invoke optional init parts, for example KDE KApplication init
    // in KDE backend
    GetInitMainSignal()(appname);

    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = SIG_IGN;
    sigaction(SIGPIPE, &sa, nullptr);

    // Initializing a potential use of EDS early is necessary for
    // libsynthesis when compiled with
    // --enable-evolution-compatibility: in that mode libical will
    // only be found by libsynthesis after EDSAbiWrapperInit()
    // pulls it into the process by loading libecal.
    EDSAbiWrapperInit();

    if (const char *gnutlsdbg = getenv("SYNCEVOLUTION_GNUTLS_DEBUG")) {
        // Enable libgnutls debugging without creating a hard dependency on it,
        // because we don't call it directly and might not even be linked against
        // it. Therefore check for the relevant symbols via dlsym().
        void (*set_log_level)(int);
        typedef void (*LogFunc_t)(int level, const char *str);
        void (*set_log_function)(LogFunc_t func);

        set_log_level = (decltype(set_log_level))dlsym(RTLD_DEFAULT, "gnutls_global_set_log_level");
        set_log_function = (decltype(set_log_function))dlsym(RTLD_DEFAULT, "gnutls_global_set_log_function");

        if (set_log_level && set_log_function) {
            set_log_level(atoi(gnutlsdbg));
            set_log_function(GnutlsLogFunction);
        } else {
            SE_LOG_ERROR(NULL, "SYNCEVOLUTION_GNUTLS_DEBUG debugging not possible, log functions not found");
        }
    }
}
2012-03-06 09:24:15 +01:00
SyncContext::InitMainSignal &SyncContext::GetInitMainSignal()
{
    static InitMainSignal initMainSignal;
    return initMainSignal;
}

static bool IsStableRelease =
#ifdef SYNCEVOLUTION_STABLE_RELEASE
    true
#else
    false
#endif
    ;

bool SyncContext::isStableRelease()
{
    return IsStableRelease;
}

void SyncContext::setStableRelease(bool isStableRelease)
{
    IsStableRelease = isStableRelease;
}
2012-06-05 14:57:32 +02:00
void SyncContext::checkConfig(const std::string &operation) const
{
    std::string peer, context;
    splitConfigString(m_server, peer, context);
    if (isConfigNeeded() &&
        (!exists() || peer.empty())) {
        if (peer.empty()) {
            SE_LOG_INFO(NULL, "Configuration \"%s\" does not refer to a sync peer.", m_server.c_str());
        } else {
            SE_LOG_INFO(NULL, "Configuration \"%s\" does not exist.", m_server.c_str());
        }
        Exception::throwError(SE_HERE, StringPrintf("Cannot proceed with %s without a configuration.", operation.c_str()));
    }
}
SyncMLStatus SyncContext::sync(SyncReport *report)
{
    SyncMLStatus status = STATUS_OK;

    checkConfig("sync");

    if (getenv("SYNCEVOLUTION_EPHEMERAL")) {
        SE_LOG_INFO(NULL, "turning on ephemeral sync mode as requested by SYNCEVOLUTION_EPHEMERAL variable");
        makeEphemeral();
    }

    // redirect logging as soon as possible
    SourceList sourceList(*this, m_doLogging);
    sourceList.setLogLevel(m_quiet ? SourceList::LOGGING_QUIET :
                           getPrintChanges() ? SourceList::LOGGING_FULL :
                           SourceList::LOGGING_SUMMARY);

    // careful about scope, this is needed for writing the
    // report below
    SyncReport buffer;
    SwapContext syncSentinel(this);
    try {
        m_sourceListPtr = &sourceList;
local sync: avoid confusion about what data is changed
In local sync the terms "local" and "remote" (in SyncReport, "Data
modified locally") do not always apply and can be confusing. Replaced
with explicitly mentioning the context.
The source name also no longer is unique. Extended in the local sync
case (and only in that case) by adding a <context>/ prefix to the
source name.
Here is an example of the modified output:
$ syncevolution google
[INFO] @default/itodo20: inactive
[INFO] @default/addressbook: inactive
[INFO] @default/calendar+todo: inactive
[INFO] @default/memo: inactive
[INFO] @default/ical20: inactive
[INFO] @default/todo: inactive
[INFO] @default/file_calendar+todo: inactive
[INFO] @default/file_vcard21: inactive
[INFO] @default/vcard30: inactive
[INFO] @default/text: inactive
[INFO] @default/file_itodo20: inactive
[INFO] @default/vcard21: inactive
[INFO] @default/file_ical20: inactive
[INFO] @default/file_vcard30: inactive
[INFO] @google/addressbook: inactive
[INFO] @google/memo: inactive
[INFO] @google/todo: inactive
[INFO] @google/calendar: starting normal sync, two-way
Local data changes to be applied remotely during synchronization:
*** @google/calendar ***
after last sync | current data
removed since last sync <
> added since last sync
-------------------------------------------------------------------------------
BEGIN:VCALENDAR BEGIN:VCALENDAR
...
END:VCALENDAR END:VCALENDAR
-------------------------------------------------------------------------------
[INFO] @google/calendar: sent 1/2
[INFO] @google/calendar: sent 2/2
Local data changes to be applied remotely during synchronization:
*** @default/calendar ***
no changes
[INFO] @default/calendar: started
[INFO] @default/calendar: updating "created in Google, online"
[INFO] @default/calendar: updating "created in Google - mod2, online"
[INFO] @google/calendar: started
[INFO] @default/calendar: inactive
[INFO] @google/calendar: normal sync done successfully
Synchronization successful.
Changes applied during synchronization:
+---------------|-----------------------|-----------------------|-CON-+
| | @default | @google | FLI |
| Source | NEW | MOD | DEL | ERR | NEW | MOD | DEL | ERR | CTS |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| calendar | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| disabled, 0 KB sent by client, 2 KB received |
| item(s) in database backup: 3 before sync, 3 after it |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| start Mon Oct 25 10:03:24 2010, duration 0:13min |
| synchronization completed successfully |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
Data modified @default during synchronization:
*** @default/calendar ***
before sync | after sync
removed during sync <
> added during sync
-------------------------------------------------------------------------------
BEGIN:VCALENDAR BEGIN:VCALENDAR
VERSION:2.0 VERSION:2.0
...
END:VCALENDAR END:VCALENDAR
-------------------------------------------------------------------------------
pohly@pohly-mobl1:/tmp/syncevolution/src$
Synchronization successful.
Changes applied during synchronization:
+---------------|-----------------------|-----------------------|-CON-+
| | @google | @default | FLI |
| Source | NEW | MOD | DEL | ERR | NEW | MOD | DEL | ERR | CTS |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| calendar | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 |
| two-way, 2 KB sent by client, 0 KB received |
| item(s) in database backup: 2 before sync, 2 after it |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| start Mon Oct 25 10:03:24 2010, duration 0:13min |
| synchronization completed successfully |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
Data modified @google during synchronization:
*** @google/calendar ***
no changes
2010-10-25 10:34:23 +02:00
        string url = getUsedSyncURL();
        if (boost::starts_with(url, "local://")) {
            initLocalSync(url.substr(strlen("local://")));
        }

        if (!report) {
            report = &buffer;
        }
        report->clear();
        if (m_localSync) {
            report->setRemoteName(m_localPeerContext);
            report->setLocalName(getContextName());
        }

        // let derived classes override settings, like the log dir
        prepare();

        // choose log directory
        sourceList.startSession(getLogDir(),
                                getMaxLogDirs(),
                                getLogLevel(),
                                report);

        /* Must detect server or client session before creating the
         * underlying SynthesisEngine
         * */
        if (getPeerIsClient()) {
            m_serverMode = true;
support local sync (BMC #712)
Local sync is configured with a new syncURL = local://<context> where
<context> identifies the set of databases to synchronize with. The
URI of each source in the config identifies the source in that context
to synchronize with.
The databases in that context run a SyncML session as client. The
config itself is for a server. Reversing these roles is possible by
putting the config into the other context.
A sync is started by the server side, via the new LocalTransportAgent.
That agent forks, sets up the client side, then passes messages
back and forth via stream sockets. Stream sockets are useful because
unexpected peer shutdown can be detected.
Running the server side requires a few changes:
- do not send a SAN message, the client will start the
message exchange based on the config
- wait for that message before doing anything
The client side is more difficult:
- Per-peer config nodes do not exist in the target context.
They are stored in a hidden .<context> directory inside
the server config tree. This depends on the new "registering nodes
in the tree" feature. All nodes are hidden, because users
are not meant to edit any of them. Their name is intentionally
chosen like traditional nodes so that removing the config
also removes the new files.
- All relevant per-peer properties must be copied from the server
config (log level, printing changes, ...); they cannot be set
differently.
Because two separate SyncML sessions are used, we end up with
two normal session directories and log files.
The implementation is not complete yet:
- no glib support, so cannot be used in syncevo-dbus-server
- no support for CTRL-C and abort
- no interactive password entry for target sources
- unexpected slow syncs are detected on the client side, but
not reported properly on the server side
2010-07-31 18:28:53 +02:00
        } else if (m_localSync && !m_agent) {
            Exception::throwError(SE_HERE, "configuration error, syncURL = local can only be used in combination with peerIsClient = 1");
        }
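The fork-then-talk-over-stream-sockets setup described in the local sync commit message can be sketched with plain POSIX calls. `runLocalExchange` is a hypothetical stand-in for the real LocalTransportAgent; it only demonstrates the design point made above, namely that with stream sockets an unexpected peer shutdown is detectable as EOF (`read()` returning 0):

```cpp
#include <string>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

// Sketch only: parent plays SyncML server, forked child plays client,
// both ends of a socketpair() carry the messages. When the child closes
// its end, the parent's next read() returns 0 = EOF.
std::string runLocalExchange()
{
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0) {
        return "error";
    }
    pid_t pid = fork();
    if (pid == 0) {
        // child: "client" sends one message, then shuts down
        close(fds[0]);
        const char msg[] = "client-hello";
        (void)write(fds[1], msg, sizeof(msg)); // includes trailing NUL
        close(fds[1]);
        _exit(0);
    }
    // parent: "server" waits for the client's first message
    close(fds[1]);
    char buf[64];
    ssize_t len = read(fds[0], buf, sizeof(buf));
    std::string first = len > 0 ? std::string(buf) : "";
    // the next read() hits EOF: the peer closed the connection
    len = read(fds[0], buf, sizeof(buf));
    bool peerGone = (len == 0);
    close(fds[0]);
    waitpid(pid, nullptr, 0);
    return peerGone ? first + "/closed" : first;
}
```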
2009-11-27 03:16:40 +01:00
// create a Synthesis engine, used purely for logging purposes
// at this time
SwapEngine swapengine ( * this ) ;
initEngine ( false ) ;
2009-07-03 12:27:07 +02:00
try {
2009-11-27 03:16:40 +01:00
// dump some summary information at the beginning of the log
2013-07-26 10:22:11 +02:00
SE_LOG_DEV ( NULL , " SyncML server account: %s " , getSyncUser ( ) . toString ( ) . c_str ( ) ) ;
2013-04-08 19:17:36 +02:00
SE_LOG_DEV ( NULL , " client: SyncEvolution %s for %s " , getSwv ( ) . c_str ( ) , getDevType ( ) . c_str ( ) ) ;
SE_LOG_DEV ( NULL , " device ID: %s " , getDevID ( ) . c_str ( ) ) ;
SE_LOG_DEV ( NULL , " %s " , EDSAbiWrapperDebug ( ) ) ;
SE_LOG_DEV ( NULL , " %s " , SyncSource : : backendsDebug ( ) . c_str ( ) ) ;
2009-11-27 03:16:40 +01:00
2011-01-10 15:56:53 +01:00
// ensure that config can be modified (might have to be migrated first)
prepareConfigForWrite();
2009-11-27 03:16:40 +01:00
// instantiate backends, but do not open them yet
initSources(sourceList);
if (sourceList.empty()) {
2014-07-28 15:29:41 +02:00
Exception::throwError(SE_HERE, "no datastores active, check configuration");
2009-11-27 03:16:40 +01:00
}
2008-04-05 14:09:44 +02:00
2009-07-03 12:27:07 +02:00
// request all config properties once: throwing exceptions
// now is okay, whereas later it would lead to leaks in the
// client library, which is not exception safe
2009-10-06 17:22:47 +02:00
SyncConfig dummy;
2009-07-03 12:27:07 +02:00
set<string> activeSources = sourceList.getSources();
dummy.copy(*this, &activeSources);
// start background thread if not running yet:
// necessary to catch problems with Evolution backend
startLoopThread();
// ask for passwords now
2013-07-29 13:57:46 +02:00
PasswordConfigProperty::checkPasswords(getUserInterfaceNonNull(), *this,
PasswordConfigProperty::CHECK_PASSWORD_ALL,
sourceList.getSourceNames());
2009-07-03 12:27:07 +02:00
// open each source - failing now is still safe
SyncML server: delayed checking of sources (MB #7710)
With this patch, SyncML server sources are only opened() and their
data dumped when a client really uses them. As before, sources are
only enabled in the server if their sync mode is not "disabled". This
tolerates sources which cannot be instantiated because their "type" is
not supported.
The patch changes the SourceList and its methods so that they can do
the database dumps and comparisons for a single source at a
time. SourceList tracks which of its sources were dumped before the
sync and uses that information at the end to produce the "after sync"
comparison.
That "after sync" comparison was a reduced copy of the
dumpLocalChanges() source code. The copy was replaced with a suitably
parameterized call to dumpLocalChanges(), which became easy after
adding the "oldSession" parameter in a recent patch. That output now
is as follows:
-------------------------> snip <-----------------------------------
Changes applied during synchronization:
+---------------|-----------------------|-----------------------|-CON-+
| | LOCAL | REMOTE | FLI |
| Source | NEW | MOD | DEL | ERR | NEW | MOD | DEL | ERR | CTS |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| addressbook | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| two-way, 0 KB sent by client, 0 KB received |
| item(s) in database backup: 20 before sync, 20 after it |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| calendar | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| two-way, 0 KB sent by client, 0 KB received |
| item(s) in database backup: 20 before sync, 20 after it |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| start Wed Feb 10 16:38:15 2010, duration 0:02min |
| synchronization completed successfully |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
Data modified locally during sync:
*** addressbook ***
no changes
*** calendar ***
no changes
-------------------------> snip <-----------------------------------
Previously the last heading was "Changes applied to client during
synchronization", which is wrong for the server (it is not a
client) and did not properly distinguish between item and data
changes (items may be changed without affecting the set of data,
as in removing one item and adding it with the same content).
In a server, the "*** <source> ***" part is only printed for active
sources, whereas the table always contains all sources with sync mode
!= "disabled". If we had progress events for the server, it should be
more obvious that some sources were not really used during the
sync. Alternatively we could also remove them from the report.
Also fixed several other such "to server/client" messages. They were
written from the perspective of a client and were wrong when running
as server. Using "remotely" and "locally" instead works on both client
and server.
2010-02-10 17:47:24 +01:00
// in clients; in servers we wait until the source
// is really needed
2018-01-16 17:17:34 +01:00
auto startSourceAccess = [this] (SyncEvo::SyncSource &source, const char *, const char *) {
if (m_firstSourceAccess) {
syncSuccessStart();
m_firstSourceAccess = false;
}
if (m_serverMode) {
// When using the source as cache, change tracking
// is not required. Disabling it can make item
// changes faster.
SyncMode mode = StringToSyncMode(source.getSync());
if (mode == SYNC_LOCAL_CACHE_SLOW ||
mode == SYNC_LOCAL_CACHE_INCREMENTAL) {
source.setNeedChanges(false);
}
// source is active in sync, now open it
source.open();
}
// database dumping is delayed in both client and server
m_sourceListPtr->syncPrepare(source.getName());
return STATUS_OK;
};
2018-01-16 10:58:04 +01:00
for (SyncSource *source: sourceList) {
2009-09-29 22:41:06 +02:00
if (m_serverMode) {
source->enableServerMode();
} else {
source->open();
2009-09-29 22:41:06 +02:00
}
// request callback when starting to use source
2018-01-16 17:17:34 +01:00
source->getOperations().m_startDataRead.getPreSignal().connect(startSourceAccess);
2009-07-03 12:27:07 +02:00
}
2010-02-16 17:15:46 +01:00
// ready to go
2009-07-03 12:27:07 +02:00
status = doSync();
} catch (...) {
// handle the exception here while the engine (and logging!) is still alive
2009-10-06 17:22:47 +02:00
Exception::handle(&status);
2009-07-03 12:27:07 +02:00
goto report;
}
2009-02-20 19:20:08 +01:00
} catch (...) {
2009-10-06 17:22:47 +02:00
Exception::handle(&status);
2009-02-20 19:20:08 +01:00
}
2006-03-19 22:37:30 +01:00
2009-07-03 12:27:07 +02:00
report:
2010-02-26 13:43:54 +01:00
if (status == SyncMLStatus(sysync::LOCERR_DATASTORE_ABORT)) {
// this can mean only one thing in SyncEvolution: unexpected slow sync
status = STATUS_UNEXPECTED_SLOW_SYNC;
}
2009-02-20 19:20:08 +01:00
try {
2009-02-19 18:36:50 +01:00
// Print final report before cleaning up.
// Status was okay only if all sources succeeded.
2010-01-15 16:35:41 +01:00
// When a source or the overall sync was successful,
// but some items failed, we report a "partial failure"
// status.
2018-01-16 10:58:04 +01:00
for (SyncSource *source: sourceList) {
2013-05-07 16:39:50 +02:00
m_sourceSyncedSignal(source->getName(), *source);
2010-01-15 16:35:41 +01:00
if (source->getStatus() == STATUS_OK &&
(source->getItemStat(SyncSource::ITEM_LOCAL,
SyncSource::ITEM_ANY,
SyncSource::ITEM_REJECT) ||
source->getItemStat(SyncSource::ITEM_REMOTE,
SyncSource::ITEM_ANY,
SyncSource::ITEM_REJECT))) {
source->recordStatus(STATUS_PARTIAL_FAILURE);
}
2009-02-20 19:20:08 +01:00
if (source->getStatus() != STATUS_OK &&
status == STATUS_OK) {
2009-02-19 18:36:50 +01:00
status = source->getStatus();
break;
}
}
2011-02-08 12:49:14 +01:00
// Also take into account the result of the client side in a local
// sync, if there was one. A non-success status code in the client's
// report was already propagated to the parent via a
// TransportStatusException in LocalTransportAgent::checkChildReport().
// What we can do here is update the status of the individual sources.
2012-01-09 15:06:46 +01:00
if (m_localSync && m_agent && getPeerIsClient()) {
2018-01-16 17:17:34 +01:00
std::shared_ptr<LocalTransportAgent> agent = std::static_pointer_cast<LocalTransportAgent>(m_agent);
2011-02-08 12:49:14 +01:00
SyncReport childReport;
agent->getClientSyncReport(childReport);
2018-01-16 10:58:04 +01:00
for (SyncSource *source: sourceList) {
2011-04-21 12:20:00 +02:00
const SyncSourceReport *childSourceReport = childReport.findSyncSourceReport(source->getURINonEmpty());
2011-02-08 12:49:14 +01:00
if (childSourceReport) {
SyncMLStatus parentSourceStatus = source->getStatus();
SyncMLStatus childSourceStatus = childSourceReport->getStatus();
// child source had an error *and*
// parent error is either unspecific (USERABORT) or
// is a remote error (HTTP error range)
if (childSourceStatus != STATUS_OK && childSourceStatus != STATUS_HTTP_OK &&
(parentSourceStatus == SyncMLStatus(sysync::LOCERR_USERABORT) ||
parentSourceStatus < SyncMLStatus(sysync::LOCAL_STATUS_CODE))) {
source->recordStatus(childSourceStatus);
}
}
}
}
2011-02-08 12:49:14 +01:00
sourceList.updateSyncReport(*report);
2009-05-08 09:57:28 +02:00
sourceList.syncDone(status, report);
2009-02-20 19:20:08 +01:00
} catch (...) {
2009-10-06 17:22:47 +02:00
Exception::handle(&status);
2007-03-23 22:00:32 +01:00
}
2006-12-17 17:33:45 +01:00
2011-02-08 12:49:14 +01:00
m_agent.reset();
2018-01-30 17:00:24 +01:00
m_sourceListPtr = nullptr;
2009-02-19 14:53:55 +01:00
return status;
2005-11-26 22:16:03 +01:00
}
2007-11-08 22:22:52 +01:00
2010-02-26 11:20:52 +01:00
bool SyncContext::sendSAN(uint16_t version)
2009-11-13 05:31:06 +01:00
{
sysync::SanPackage san;
2010-02-26 11:20:52 +01:00
bool legacy = version < 12;
2009-11-13 05:31:06 +01:00
/* Should be the nonce sent by the server in the preceding sync session */
2009-11-18 07:22:33 +01:00
string nonce = "SyncEvolution";
2013-07-26 10:22:11 +02:00
UserIdentity id = getSyncUser();
2013-07-29 16:51:26 +02:00
Credentials cred = IdentityProviderCredentials(id, getSyncPassword());
const std::string &user = cred.m_username;
const std::string &password = cred.m_password;
2013-07-26 10:22:11 +02:00
string uauthb64 = san.B64_H(user, password);
2009-11-13 05:31:06 +01:00
/* Client is expected to conduct the sync in the background */
sysync::UI_Mode mode = sysync::UI_not_specified;
2010-02-26 11:20:52 +01:00
uint16_t sessionId = 1;
2009-11-13 05:31:06 +01:00
string serverId = getRemoteIdentifier();
2009-11-18 06:35:04 +01:00
if (serverId.empty()) {
serverId = getDevID();
}
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG(NULL, "starting SAN %u auth %s nonce %s session %u server %s",
2011-04-20 10:34:42 +02:00
version,
uauthb64.c_str(),
nonce.c_str(),
sessionId,
serverId.c_str());
2010-02-26 11:20:52 +01:00
san.PreparePackage(uauthb64, nonce, version, mode,
2009-11-13 05:31:06 +01:00
sysync::Initiator_Server, sessionId, serverId);
san.CreateEmptyNotificationBody();
2009-11-26 13:37:40 +01:00
bool hasSource = false;
2009-12-15 10:08:01 +01:00
std::set<std::string> dataSources = m_sourceListPtr->getSources();
2009-12-17 02:43:32 +01:00
2009-12-15 10:08:01 +01:00
/* For each virtual datasource, generate the SAN according to it and
 * ignore its sub datasources in the later phase */
2018-01-16 17:17:34 +01:00
for (std::shared_ptr<VirtualSyncSource> vSource: m_sourceListPtr->getVirtualSources()) {
2009-12-15 10:08:01 +01:00
std::string evoSyncSource = vSource->getDatabaseID();
2009-12-17 02:43:32 +01:00
std::string sync = vSource->getSync();
2011-10-24 19:52:01 +02:00
SANSyncMode mode = AlertSyncMode(StringToSyncMode(sync, true), getPeerIsClient());
2009-12-15 10:08:01 +01:00
std::vector<std::string> mappedSources = unescapeJoinedString(evoSyncSource, ',');
2018-01-16 10:58:04 +01:00
for (std::string source: mappedSources) {
2009-12-15 10:08:01 +01:00
dataSources.erase(source);
2011-10-24 19:52:01 +02:00
if (mode == SA_SLOW) {
2010-01-29 19:40:44 +01:00
// We force a source which the client is not expected to use into slow mode.
// Shouldn't we rather reject attempts to synchronize it?
(*m_sourceListPtr)[source]->setForceSlowSync(true);
2009-12-17 02:43:32 +01:00
}
2009-12-15 10:08:01 +01:00
}
dataSources.insert(vSource->getName());
}
2010-02-26 11:20:52 +01:00
2011-10-24 19:52:01 +02:00
SANSyncMode syncMode = SA_INVALID;
2010-02-26 11:20:52 +01:00
vector<pair<string, string> > alertedSources;
2009-11-13 05:31:06 +01:00
/* For each source to be notified do the following: */
2018-01-16 10:58:04 +01:00
for (string name: dataSources) {
2018-01-16 17:17:34 +01:00
std::shared_ptr<PersistentSyncSourceConfig> sc(getSyncSourceConfig(name));
2009-11-13 05:31:06 +01:00
string sync = sc->getSync();
2011-10-24 19:52:01 +02:00
SANSyncMode mode = AlertSyncMode(StringToSyncMode(sync, true), getPeerIsClient());
if (mode == SA_SLOW) {
2010-01-29 19:40:44 +01:00
(*m_sourceListPtr)[name]->setForceSlowSync(true);
2011-10-24 19:52:01 +02:00
mode = SA_TWO_WAY;
2009-12-17 02:43:32 +01:00
}
2011-10-24 19:52:01 +02:00
if (mode < SA_FIRST || mode > SA_LAST) {
2014-07-28 15:29:41 +02:00
SE_LOG_DEV(NULL, "Ignoring datastore %s with an invalid sync mode", name.c_str());
2009-11-13 05:31:06 +01:00
continue;
}
2010-02-26 11:20:52 +01:00
syncMode = mode;
2009-11-26 13:37:40 +01:00
hasSource = true;
2011-04-21 12:20:00 +02:00
string uri = sc->getURINonEmpty();
2009-11-18 07:22:33 +01:00
SourceType sourceType = sc->getSourceType();
/* If the type is not set by the user explicitly, use the backend's
 * default value */
if (sourceType.m_format.empty()) {
sourceType.m_format = (*m_sourceListPtr)[name]->getPeerMimeType();
}
2010-02-26 11:20:52 +01:00
if (!legacy) {
/* If the user did not force a type, we will always use the older
 * type, as this is what most phones support */
int contentTypeB = StringToContentType(sourceType.m_format, sourceType.m_forceFormat);
if (contentTypeB == WSPCTC_UNKNOWN) {
contentTypeB = 0;
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG(NULL, "Unknown datasource mimetype, use 0 as default");
2010-02-26 11:20:52 +01:00
}
2014-07-28 15:29:41 +02:00
SE_LOG_DEBUG(NULL, "SAN datastore %s uri %s type %u mode %d",
2011-04-20 10:34:42 +02:00
name.c_str(),
uri.c_str(),
contentTypeB,
mode);
2010-02-26 11:20:52 +01:00
if (san.AddSync(mode, (uInt32)contentTypeB, uri.c_str())) {
2013-04-08 19:17:36 +02:00
SE_LOG_ERROR(NULL, "SAN: adding server alerted sync element failed");
2010-02-26 11:20:52 +01:00
}
} else {
2011-04-20 10:34:42 +02:00
string mimetype = GetLegacyMIMEType(sourceType.m_format, sourceType.m_forceFormat);
2014-07-28 15:29:41 +02:00
SE_LOG_DEBUG(NULL, "SAN datastore %s uri %s type %s",
2011-04-20 10:34:42 +02:00
name.c_str(),
uri.c_str(),
mimetype.c_str());
alertedSources.push_back(std::make_pair(mimetype, uri));
2009-11-18 07:22:33 +01:00
}
2009-11-13 05:31:06 +01:00
}
2009-11-26 13:37:40 +01:00
if (!hasSource) {
2014-07-28 15:29:41 +02:00
SE_THROW("No datastore enabled for server alerted sync!");
2009-11-26 13:37:40 +01:00
}
2009-11-13 05:31:06 +01:00
/* Generate the SAN Package */
2009-11-26 15:38:17 +01:00
void *buffer;
2009-11-13 05:31:06 +01:00
size_t sanSize;
2010-02-26 11:20:52 +01:00
if (!legacy) {
if (san.GetPackage(buffer, sanSize)) {
2013-04-08 19:17:36 +02:00
SE_LOG_ERROR(NULL, "SAN package generating failed");
2010-02-26 11:20:52 +01:00
return false;
}
// TODO: log the binary SAN content
} else {
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG(NULL, "SAN with overall sync mode %d", syncMode);
2010-02-26 11:20:52 +01:00
if (san.GetPackageLegacy(buffer, sanSize, alertedSources, syncMode, getWBXML())) {
2013-04-08 19:17:36 +02:00
SE_LOG_ERROR(NULL, "SAN package generating failed");
2010-02-26 11:20:52 +01:00
return false;
}
2013-04-08 19:17:36 +02:00
//SE_LOG_DEBUG(NULL, "SAN package content: %s", (char*)buffer);
2009-11-13 05:31:06 +01:00
}
2010-02-26 11:20:52 +01:00
m_agent = createTransportAgent();
2013-04-08 19:17:36 +02:00
SE_LOG_INFO(NULL, "Server sending SAN");
2011-04-19 16:56:35 +02:00
m_serverAlerted = true;
2010-02-26 11:20:52 +01:00
m_agent->setContentType(!legacy ?
TransportAgent::m_contentTypeServerAlertedNotificationDS :
(getWBXML() ? TransportAgent::m_contentTypeSyncWBXML :
TransportAgent::m_contentTypeSyncML));
m_agent->send(reinterpret_cast<char *>(buffer), sanSize);
// change content type
m_agent->setContentType(getWBXML() ? TransportAgent::m_contentTypeSyncWBXML :
TransportAgent::m_contentTypeSyncML);
TransportAgent::Status status;
do {
status = m_agent->wait();
} while (status == TransportAgent::ACTIVE);
if (status == TransportAgent::GOT_REPLY) {
const char *reply;
size_t replyLen;
string contentType;
m_agent->getReply(reply, replyLen, contentType);
// sanity check for the reply
if (contentType.empty() ||
contentType.find(TransportAgent::m_contentTypeSyncML) != contentType.npos ||
contentType.find(TransportAgent::m_contentTypeSyncWBXML) != contentType.npos) {
SharedBuffer request(reply, replyLen);
// TODO: should generate a more reasonable sessionId here
string sessionId = "";
initServer(sessionId, request, contentType);
return true;
2009-11-13 05:31:06 +01:00
}
}
return false;
}
2009-12-15 18:19:14 +01:00
static string Step2String(sysync::uInt16 stepcmd)
{
switch (stepcmd) {
case sysync::STEPCMD_CLIENTSTART: return "STEPCMD_CLIENTSTART";
case sysync::STEPCMD_CLIENTAUTOSTART: return "STEPCMD_CLIENTAUTOSTART";
case sysync::STEPCMD_STEP: return "STEPCMD_STEP";
case sysync::STEPCMD_GOTDATA: return "STEPCMD_GOTDATA";
case sysync::STEPCMD_SENTDATA: return "STEPCMD_SENTDATA";
case sysync::STEPCMD_SUSPEND: return "STEPCMD_SUSPEND";
case sysync::STEPCMD_ABORT: return "STEPCMD_ABORT";
case sysync::STEPCMD_TRANSPFAIL: return "STEPCMD_TRANSPFAIL";
case sysync::STEPCMD_TIMEOUT: return "STEPCMD_TIMEOUT";
case sysync::STEPCMD_SAN_CHECK: return "STEPCMD_SAN_CHECK";
case sysync::STEPCMD_AUTOSYNC_CHECK: return "STEPCMD_AUTOSYNC_CHECK";
case sysync::STEPCMD_OK: return "STEPCMD_OK";
case sysync::STEPCMD_PROGRESS: return "STEPCMD_PROGRESS";
case sysync::STEPCMD_ERROR: return "STEPCMD_ERROR";
case sysync::STEPCMD_SENDDATA: return "STEPCMD_SENDDATA";
case sysync::STEPCMD_NEEDDATA: return "STEPCMD_NEEDDATA";
case sysync::STEPCMD_RESENDDATA: return "STEPCMD_RESENDDATA";
case sysync::STEPCMD_DONE: return "STEPCMD_DONE";
case sysync::STEPCMD_RESTART: return "STEPCMD_RESTART";
case sysync::STEPCMD_NEEDSYNC: return "STEPCMD_NEEDSYNC";
default: return StringPrintf("STEPCMD %d", stepcmd);
}
}
2014-01-31 17:30:04 +01:00
const char *SyncContext::SyncFreezeName(SyncFreeze syncFreeze)
{
switch (syncFreeze) {
case SYNC_FREEZE_NONE: return "none";
case SYNC_FREEZE_RUNNING: return "running";
case SYNC_FREEZE_FROZEN: return "frozen";
}
return "???";
}
bool SyncContext::setFreeze(bool freeze)
{
SyncFreeze newSyncFreeze = freeze ? SYNC_FREEZE_FROZEN : SYNC_FREEZE_RUNNING;
if (m_syncFreeze == SYNC_FREEZE_NONE ||
newSyncFreeze == m_syncFreeze) {
SE_LOG_DEBUG(NULL, "SyncContext::setFreeze(%s): not changing freeze state: %s",
freeze ? "freeze" : "thaw",
SyncFreezeName(m_syncFreeze));
return false;
} else {
SE_LOG_DEBUG(NULL, "SyncContext::setFreeze(%s): changing freeze state: %s -> %s",
freeze ? "freeze" : "thaw",
SyncFreezeName(m_syncFreeze),
SyncFreezeName(newSyncFreeze));
2014-03-07 08:15:37 +01:00
if (m_agent) {
SE_LOG_DEBUG(NULL, "SyncContext::setFreeze(): transport agent");
m_agent->setFreeze(freeze);
}
if (m_sourceListPtr) {
2018-01-16 10:58:04 +01:00
for (SyncSource *source: *m_sourceListPtr) {
2014-07-28 15:29:41 +02:00
SE_LOG_DEBUG(NULL, "SyncContext::setFreeze(): datastore %s", source->getDisplayName().c_str());
2014-03-07 08:15:37 +01:00
source->setFreeze(freeze);
}
}
2014-01-31 17:30:04 +01:00
m_syncFreeze = newSyncFreeze;
return true;
}
}
2014-08-29 11:27:07 +02:00
SharedSession *keepSession;
2009-10-05 14:49:32 +02:00
SyncMLStatus SyncContext::doSync()
2009-02-01 16:16:16 +01:00
{
2018-01-16 17:17:34 +01:00
std::shared_ptr<SuspendFlags::Guard> signalGuard;
rewrote signal handling
Having the signal handling code in SyncContext created an unnecessary
dependency of some classes (in particular the transports) on
SyncContext.h. Now the code is in its own SuspendFlags.cpp/h files.
Cleaning up when the caller is done with signal handling is now part
of the utility class (removed automatically when guard instance is
freed).
The signal handlers now push one byte for each caught signal into a
pipe. That byte tells the rest of the code which message it needs to
print, which cannot be done in the signal handlers (because the
logging code is not reentrant and thus not safe to call from a signal
handler).
Compared to the previous solution, this solves several problems:
- no more race condition between setting and printing the message
- the pipe can be watched in a glib event loop, thus removing
the need to poll at regular intervals; polling is still possible
(and necessary) in those transports which do not integrate with
the event loop (CurlTransport) while it can be removed from
others (SoupTransport, OBEXTransport)
A boost::signal is emitted when the global SuspendFlags change.
Automatic connection management is used to disconnect instances which
are managed by boost::shared_ptr. For example, the current transport's
cancel() method is called when the state changes to "aborted".
The early connection phase of the OBEX transport now also can be
aborted (required cleaning up that transport!).
Currently watching for aborts via the event loop only works for real
Unix signals, but not for "abort" flags set in derived SyncContext
instances. The plan is to change that by allowing a "set abort" on
SuspendFlags and thus making
SyncContext::checkForSuspend/checkForAbort() redundant.
The new class is used as follows:
- syncevolution command line without daemon uses it to control
suspend/abort directly
- syncevolution command line as client of syncevo-dbus-server
connects to the state change signal and relays it to the
syncevo-dbus-server session via D-Bus; now all operations
are protected like that, not just syncing
- syncevo-dbus-server installs its own handlers for SIGINT
and SIGTERM and tries to shut down when either of them
is received. SuspendFlags then doesn't activate its own
handler. Instead that handler is invoked by the
syncevo-dbus-server niam() handler, to suspend or abort
a running sync. Once syncs run in a separate process, the
syncevo-dbus-server should request that these processes
suspend or abort before shutting down itself.
- The syncevo-local-sync helper ignores SIGINT after a sync
has started. It would receive that signal when forked by
syncevolution in non-daemon mode and the user presses
CTRL-C. Now the signal is only handled in the parent
process, which suspends as part of its own side of
the SyncML session and aborts by sending a SIGTERM+SIGINT
to syncevo-local-sync. SIGTERM in syncevo-local-sync is
handled by SuspendFlags and is meant to abort whatever
is going on there at the moment (see below).
Aborting long-running operations like import/export or communication
via CardDAV or ActiveSync still needs further work. The backends need
to check the abort state and return early instead of continuing.
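The byte-per-signal pipe technique described above can be sketched roughly as follows. This is a minimal illustration, not the actual SuspendFlags implementation; all names (`sigPipe`, `demoHandler`, `demoInstall`) are invented for the example:

```cpp
#include <csignal>
#include <unistd.h>

// One byte per caught signal is pushed into a pipe. The handler only
// calls write(), which is async-signal-safe; deciding which message to
// log happens later, when the main loop (or a poll) drains the pipe.
static int sigPipe[2]; // [0] = read end, [1] = write end

extern "C" void demoHandler(int signum)
{
    // encode which message needs to be printed later
    unsigned char msg = (signum == SIGINT) ? 1 : 2;
    // write() is on the POSIX list of async-signal-safe functions
    (void)write(sigPipe[1], &msg, 1);
}

void demoInstall()
{
    if (pipe(sigPipe) != 0) {
        return;
    }
    struct sigaction sa = {};
    sa.sa_handler = demoHandler;
    sigaction(SIGINT, &sa, nullptr);
    sigaction(SIGTERM, &sa, nullptr);
}
```

The read end of the pipe can then be registered as a watch in a glib event loop, which is exactly what removes the need for periodic polling.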
2012-01-19 16:11:22 +01:00
// install signal handlers unless this was explicitly disabled
2018-01-30 17:00:24 +01:00
bool catchSignals = getenv("SYNCEVOLUTION_NO_SYNC_SIGNALS") == nullptr;
2012-01-19 16:11:22 +01:00
if (catchSignals) {
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG(NULL, "sync is starting, catch signals");
2012-01-19 16:11:22 +01:00
signalGuard = SuspendFlags::getSuspendFlags().activate();
2009-10-30 07:25:53 +01:00
}
2014-01-31 17:30:04 +01:00
// From now on it is possible to freeze the sync.
m_syncFreeze = SYNC_FREEZE_RUNNING;
2012-03-30 09:55:55 +02:00
// delay the sync for debugging purposes
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG(NULL, "ready to sync");
2012-03-30 09:55:55 +02:00
const char *delay = getenv("SYNCEVOLUTION_SYNC_DELAY");
if (delay) {
2012-06-20 12:26:12 +02:00
Sleep(atoi(delay));
2012-03-30 09:55:55 +02:00
}
2013-04-26 11:13:50 +02:00
SuspendFlags &flags = SuspendFlags::getSuspendFlags();
if (!flags.isNormal()) {
2012-03-30 09:55:55 +02:00
return (SyncMLStatus)sysync::LOCERR_USERABORT;
}
2009-02-20 19:20:08 +01:00
SyncMLStatus status = STATUS_OK;
2009-07-03 12:27:07 +02:00
std::string s;
2009-02-20 19:20:08 +01:00
support local sync (BMC #712)
Local sync is configured with a new syncURL = local://<context> where
<context> identifies the set of databases to synchronize with. The
URI of each source in the config identifies the source in that context
to synchronize with.
The databases in that context run a SyncML session as client. The
config itself is for a server. Reversing these roles is possible by
putting the config into the other context.
A sync is started by the server side, via the new LocalTransportAgent.
That agent forks, sets up the client side, then passes messages
back and forth via stream sockets. Stream sockets are useful because
unexpected peer shutdown can be detected.
Running the server side requires a few changes:
- do not send a SAN message, the client will start the
message exchange based on the config
- wait for that message before doing anything
The client side is more difficult:
- Per-peer config nodes do not exist in the target context.
They are stored in a hidden .<context> directory inside
the server config tree. This depends on the new "registering nodes
in the tree" feature. All nodes are hidden, because users
are not meant to edit any of them. Their name is intentionally
chosen like traditional nodes so that removing the config
also removes the new files.
- All relevant per-peer properties must be copied from the server
config (log level, printing changes, ...); they cannot be set
differently.
Because two separate SyncML sessions are used, we end up with
two normal session directories and log files.
The implementation is not complete yet:
- no glib support, so cannot be used in syncevo-dbus-server
- no support for CTRL-C and abort
- no interactive password entry for target sources
- unexpected slow syncs are detected on the client side, but
not reported properly on the server side
2010-07-31 18:28:53 +02:00
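The fork-plus-stream-socket approach used by LocalTransportAgent can be sketched as below. This is a simplified stand-in under stated assumptions, not the real agent: `demoRoundTrip` and its echo protocol are invented for the illustration. The key design point survives, though: with a stream socket (unlike a datagram socket), `read()` returning 0 signals EOF, so unexpected peer shutdown is detectable.

```cpp
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <string>

// Parent ("server side") forks a child ("client side") and exchanges one
// message over a connected stream socket pair, as the local sync transport
// does with SyncML messages.
std::string demoRoundTrip(const std::string &request)
{
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0) {
        return "";
    }
    pid_t child = fork();
    if (child == 0) {
        // child: read the request, echo it back with a prefix
        close(fds[0]);
        char buf[128];
        ssize_t n = read(fds[1], buf, sizeof(buf));
        std::string reply = "reply:" + std::string(buf, n > 0 ? n : 0);
        (void)write(fds[1], reply.c_str(), reply.size());
        _exit(0);
    }
    // parent: send the request, then block until the reply arrives
    close(fds[1]);
    (void)write(fds[0], request.c_str(), request.size());
    char buf[128];
    ssize_t n = read(fds[0], buf, sizeof(buf));
    close(fds[0]);
    return std::string(buf, n > 0 ? n : 0);
}
```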
if (m_serverMode &&
!m_initialMessage.size() &&
!m_localSync) {
2009-11-27 05:28:20 +01:00
// This is a server alerted sync!
2010-02-26 11:20:52 +01:00
string sanFormat(getSyncMLVersion());
uint16_t version = 12;
2010-03-04 09:26:09 +01:00
if (boost::iequals(sanFormat, "1.2") ||
sanFormat == "") {
2010-02-26 11:20:52 +01:00
version = 12;
} else if (boost::iequals(sanFormat, "1.1")) {
version = 11;
} else {
version = 10;
}
bool status = true;
try {
status = sendSAN(version);
2020-03-02 13:24:24 +01:00
} catch (const TransportException &e) {
2010-02-26 11:20:52 +01:00
if (!sanFormat.empty()) {
throw;
}
status = false;
// bypass the exception if we will try again with the legacy SAN format
}
2013-04-26 11:13:50 +02:00
if (!flags.isNormal()) {
2012-03-30 09:55:55 +02:00
return (SyncMLStatus)sysync::LOCERR_USERABORT;
}
2010-02-26 11:20:52 +01:00
if (!status) {
if (sanFormat.empty()) {
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG(NULL, "Server Alerted Sync init with SANFormat %d failed, trying with legacy format", version);
2010-02-26 11:20:52 +01:00
version = 11;
if (!sendSAN(version)) {
// return a proper error code
2014-04-02 14:57:56 +02:00
Exception::throwError(SE_HERE, "Server Alerted Sync init failed");
2010-02-26 11:20:52 +01:00
}
} else {
// return a proper error code
2014-04-02 14:57:56 +02:00
Exception::throwError(SE_HERE, "Server Alerted Sync init failed");
2010-02-26 11:20:52 +01:00
}
2009-11-27 05:28:20 +01:00
}
}
2013-04-26 11:13:50 +02:00
if (!flags.isNormal()) {
2012-03-30 09:55:55 +02:00
return (SyncMLStatus)sysync::LOCERR_USERABORT;
}
2009-07-03 12:27:07 +02:00
// re-init engine with all sources configured
2009-02-06 17:52:18 +01:00
string xml, configname;
2009-11-06 11:43:55 +01:00
initEngine(true);
2009-07-03 12:27:07 +02:00
2009-09-27 22:48:04 +02:00
SharedKey targets;
SharedKey target;
if (m_serverMode) {
// Server engine has no profiles. All settings have to be done
// via the XML configuration or function parameters (session ID
// in OpenSession()).
} else {
// check the settings status (MUST BE DONE TO MAKE SETTINGS READY)
SharedKey profiles = m_engine.OpenKeyByPath(SharedKey(), "/profiles");
m_engine.GetStrValue(profiles, "settingsstatus");
// allow creating new settings when existing settings are not up/downgradeable
m_engine.SetStrValue(profiles, "overwrite", "1");
// check status again
m_engine.GetStrValue(profiles, "settingsstatus");
2009-02-01 16:16:16 +01:00
2009-09-27 22:48:04 +02:00
// open first profile
SharedKey profile;
2010-01-04 17:50:27 +01:00
profile = m_engine.OpenSubkey(profiles, sysync::KEYVAL_ID_FIRST, true);
if (!profile) {
2009-09-27 22:48:04 +02:00
// no profile exists yet, create default profile
profile = m_engine.OpenSubkey(profiles, sysync::KEYVAL_ID_NEW_DEFAULT);
}
2013-09-16 12:17:43 +02:00
if (!m_localSync) {
// Not needed for local sync and might even be
// impossible/wrong because username could refer to an
// identity provider which cannot return a plain string.
SE_LOG_DEBUG(NULL, "copying syncURL, username, password to Synthesis engine");
m_engine.SetStrValue(profile, "serverURI", getUsedSyncURL());
UserIdentity syncUser = getSyncUser();
InitStateString syncPassword = getSyncPassword();
2018-01-16 17:17:34 +01:00
std::shared_ptr<AuthProvider> provider = AuthProvider::create(syncUser, syncPassword);
2013-09-16 12:17:43 +02:00
Credentials cred = provider->getCredentials();
const std::string &user = cred.m_username;
const std::string &password = cred.m_password;
m_engine.SetStrValue(profile, "serverUser", user);
m_engine.SetStrValue(profile, "serverPassword", password);
}
2009-09-27 22:48:04 +02:00
m_engine.SetInt32Value(profile, "encoding",
getWBXML() ? 1 /* WBXML */ : 2 /* XML */);
// Iterate over all data stores in the XML config
// and match them with sync sources.
// TODO: let sync sources provide their own
// XML snippets (inside <client> and inside <datatypes>).
targets = m_engine.OpenKeyByPath(profile, "targets");
2009-03-08 14:41:20 +01:00
2010-01-04 17:50:27 +01:00
for (target = m_engine.OpenSubkey(targets, sysync::KEYVAL_ID_FIRST, true);
target;
target = m_engine.OpenSubkey(targets, sysync::KEYVAL_ID_NEXT, true)) {
s = m_engine.GetStrValue(target, "dbname");
2011-09-02 09:42:19 +02:00
SyncSource *source = findSource(s);
2010-01-04 17:50:27 +01:00
if (source) {
m_engine.SetInt32Value(target, "enabled", 1);
int slow = 0;
int direction = 0;
2011-10-24 19:52:01 +02:00
string sync = source->getSync();
// this code only runs when we are the client,
// take that into account for the "from-local/remote" modes
SimpleSyncMode mode = SimplifySyncMode(StringToSyncMode(sync), false);
if (mode == SIMPLE_SYNC_SLOW) {
2010-01-04 17:50:27 +01:00
slow = 1;
direction = 0;
2011-10-24 19:52:01 +02:00
} else if (mode == SIMPLE_SYNC_TWO_WAY) {
2010-01-04 17:50:27 +01:00
slow = 0;
direction = 0;
2011-10-24 19:52:01 +02:00
} else if (mode == SIMPLE_SYNC_REFRESH_FROM_REMOTE) {
2010-01-04 17:50:27 +01:00
slow = 1;
direction = 1;
2011-10-24 19:52:01 +02:00
} else if (mode == SIMPLE_SYNC_REFRESH_FROM_LOCAL) {
2010-01-04 17:50:27 +01:00
slow = 1;
direction = 2;
2011-10-24 19:52:01 +02:00
} else if (mode == SIMPLE_SYNC_ONE_WAY_FROM_REMOTE) {
2010-01-04 17:50:27 +01:00
slow = 0;
direction = 1;
2011-10-24 19:52:01 +02:00
} else if (mode == SIMPLE_SYNC_ONE_WAY_FROM_LOCAL) {
2010-01-04 17:50:27 +01:00
slow = 0;
direction = 2;
2009-03-08 14:41:20 +01:00
} else {
2014-04-02 14:57:56 +02:00
source->throwError(SE_HERE, string("invalid sync mode: ") + sync);
2009-03-08 14:41:20 +01:00
}
2010-01-04 17:50:27 +01:00
m_engine.SetInt32Value(target, "forceslow", slow);
m_engine.SetInt32Value(target, "syncmode", direction);
2011-04-21 12:20:00 +02:00
string uri = source->getURINonEmpty();
2010-03-03 14:55:45 +01:00
m_engine.SetStrValue(target, "remotepath", uri);
2010-01-04 17:50:27 +01:00
} else {
m_engine.SetInt32Value(target, "enabled", 0);
2009-02-01 16:16:16 +01:00
}
}
2009-09-27 22:48:04 +02:00
// Close all keys so that the engine can flush the modified config.
// Otherwise the session reads the unmodified values from the
// created files while the updated values are still in memory.
target.reset();
targets.reset();
profile.reset();
profiles.reset();
// reopen profile keys
profiles = m_engine.OpenKeyByPath(SharedKey(), "/profiles");
m_engine.GetStrValue(profiles, "settingsstatus");
profile = m_engine.OpenSubkey(profiles, sysync::KEYVAL_ID_FIRST);
targets = m_engine.OpenKeyByPath(profile, "targets");
2009-02-01 16:16:16 +01:00
}
2009-02-16 16:11:17 +01:00
2009-09-21 11:55:02 +02:00
m_retries = 0;
2009-11-13 05:31:06 +01:00
// Create the transport agent if not already created
if (!m_agent) {
m_agent = createTransportAgent();
}
2009-02-01 16:16:16 +01:00

2011-04-19 16:56:35 +02:00
// server in local sync initiates sync by passing data to forked process
if (m_serverMode && m_localSync) {
m_serverAlerted = true;
}
2009-02-01 16:16:16 +01:00
sysync::TEngineProgressInfo progressInfo;
2009-09-27 22:48:04 +02:00
sysync::uInt16 stepCmd =
2010-07-31 18:28:53 +02:00
(m_localSync && m_serverMode) ? sysync::STEPCMD_NEEDDATA :
2009-09-27 22:48:04 +02:00
m_serverMode ?
sysync::STEPCMD_GOTDATA :
sysync::STEPCMD_CLIENTSTART;
SharedSession session = m_engine.OpenSession(m_sessionID);
2009-03-08 14:41:20 +01:00
SharedBuffer sendBuffer;
2017-12-21 17:11:54 +01:00
std::unique_ptr<SessionSentinel> sessionSentinel(new SessionSentinel(*this, session));
2009-02-01 16:16:16 +01:00
2010-07-31 18:28:53 +02:00
if (m_serverMode && !m_localSync) {
2009-09-27 22:48:04 +02:00
m_engine.WriteSyncMLBuffer(session,
m_initialMessage.get(),
m_initialMessage.size());
SharedKey sessionKey = m_engine.OpenSessionKey(session);
m_engine.SetStrValue(sessionKey,
"contenttype",
m_initialMessageType);
m_initialMessage.reset();
2009-10-01 15:21:32 +02:00
// TODO: set "sendrespuri" session key to control
// whether the generated messages contain a respURI
// (not needed for OBEX)
2009-09-27 22:48:04 +02:00
}
2014-08-29 11:27:07 +02:00
// Special case local sync when nothing changed: we can be sure
// that we can do another sync from exactly the same state (nonce,
// source change tracking meta data, etc.) and be successful
// again. In such a case we can avoid unnecessary updates of the
// .ini and .bfi files.
//
// To detect this, the server side hooks into the SaveAdminData
// operation and replaces it with just returning an "aborted by
// user" error.
if (m_serverMode &&
m_localSync &&
m_sourceListPtr->size() == 1) {
SyncSource *source = *(*m_sourceListPtr).begin();
2018-01-16 17:17:34 +01:00
auto preSaveAdminData = [this] (SyncSource &source, const char *adminData) {
if (!source.getTotalNumItemsReceived() &&
!source.getTotalNumItemsSent() &&
source.getFinalSyncMode() == SYNC_TWO_WAY &&
!source.isFirstSync()) {
SE_LOG_DEBUG(NULL, "requesting end of two-way sync with one source early because nothing changed");
m_quitSync = true;
return STATUS_SYNC_END_SHORTCUT;
} else {
return STATUS_OK;
}
};
source->getOperations().m_saveAdminData.getPreSignal().connect(preSaveAdminData);
2014-08-29 11:27:07 +02:00
}
2009-02-01 16:16:16 +01:00
// Sync main loop: runs until SessionStep() signals end or error.
// Exceptions are caught and lead to a call of SessionStep() with
// parameter STEPCMD_ABORT -> abort session as soon as possible.
2009-02-19 16:08:17 +01:00
bool aborting = false;
2009-06-26 07:55:48 +02:00
int suspending = 0;
2013-04-24 10:01:54 +02:00
Timespec sendStart, resendStart;
2010-01-05 07:44:19 +01:00
int requestNum = 0;
2009-02-19 16:08:17 +01:00
sysync::uInt16 previousStepCmd = stepCmd;
SyncSource: optional support for asynchronous insert/update/delete
The wrapper around the actual operation checks if the operation
returned an error or result code (traditional behavior). If not, it
expects a ContinueOperation instance, remembers it and calls it when
the same operation gets called again for the same item.
For add/insert, "same item" is detected based on the KeyH address,
which must not change. For delete, the item local ID is used.
Pre- and post-signals are called exactly once, before the first call
and after the last call of the item.
ContinueOperation is a simple boost::function pointer for now. The
Synthesis engine itself is not able to force completion of the
operation, it just polls. This can lead to many empty messages with
just an Alert inside, thus triggering the "endless loop" protection,
which aborts the sync.
We overcome this limitation in the SyncEvolution layer above the
Synthesis engine: first, we flush pending operations before starting
network IO. This is a good place to batch together all pending
operations. Second, to overcome the "endless loop" problem, we force
a waiting for completion if the last message already was empty. If
that happened, we are done with items and should start sending our
responses.
Binding a function which returns the traditional TSyError still works
because it gets copied transparently into the boost::variant that the
wrapper expects, so no other code in SyncSource or backends needs to
be adapted. Enabling the use of LOCERR_AGAIN in the utility classes
and backends will follow in the next patches.
2013-06-05 17:22:00 +02:00
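The wrapper idea from the commit message above can be sketched as follows. This is an invented, simplified model (`OpWrapper`, `Status`, `Continue` are illustration-only names; the real code uses Synthesis `TSyError` codes, `boost::function`, and keys items by `KeyH` address or local ID): an operation either returns a status code immediately (traditional path) or hands back a continuation, which the wrapper stores and resumes when the engine retries the same item.

```cpp
#include <functional>
#include <map>
#include <string>
#include <variant>

using Status = int;                        // 0 = done, 1 = call again later
using Continue = std::function<Status()>;  // pending asynchronous operation
using Result = std::variant<Status, Continue>;

class OpWrapper {
    std::map<std::string, Continue> m_pending; // keyed by item local ID
public:
    // Invoked each time the engine (re)tries an operation on an item.
    Status step(const std::string &itemID, const std::function<Result()> &op) {
        auto it = m_pending.find(itemID);
        if (it != m_pending.end()) {
            // same item again: resume the stored continuation
            Status s = it->second();
            if (s == 0) m_pending.erase(it); // completed, forget it
            return s;
        }
        Result r = op();
        if (auto *s = std::get_if<Status>(&r)) {
            return *s;                        // traditional, synchronous path
        }
        m_pending[itemID] = std::get<Continue>(r); // asynchronous path
        return 1;                             // engine must poll again
    }
};
```

Because the engine can only poll, flushing all pending continuations before network IO (as the commit message describes) is what keeps the message exchange from degenerating into empty Alert-only messages.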
std::vector<int> numItemsReceived; // source->getTotalNumItemsReceived() for each source, see STEPCMD_SENDDATA
2014-08-29 11:27:07 +02:00
m_quitSync = false;
2009-02-01 16:16:16 +01:00
do {
try {
2014-08-29 11:27:07 +02:00
if (m_quitSync &&
!m_serverMode) {
SE_LOG_DEBUG(NULL, "ending sync early as requested");
// Intentionally prevent destruction of the Synthesis
// engine session by keeping a reference to it around
// forever, because destroying the session would cause
// undesired disk writes.
keepSession = new SharedSession(session);
break;
}
2009-06-26 07:55:48 +02:00
// check for suspend; if so, modify the step command for the next step.
// Since the suspend won't actually be committed until we are
// sending out a message, we can safely delay the suspend to
// the GOTDATA state.
// After an exception occurs, stepCmd will be set to abort to force
// aborting; we must avoid changing it back to the suspend cmd.
2013-04-26 11:13:50 +02:00
if (flags.isSuspended() && stepCmd == sysync::STEPCMD_GOTDATA) {
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG(NULL, "suspending before SessionStep() in STEPCMD_GOTDATA as requested by user");
2009-03-11 12:59:25 +01:00
stepCmd = sysync::STEPCMD_SUSPEND;
}
2009-06-26 07:55:48 +02:00
2009-12-15 18:19:14 +01:00
// Aborting is useful while waiting for a reply and before
// sending a message (which will just lead to us waiting
// for the reply, but possibly after doing some slow network
// IO for setting up the message send).
//
// While processing a message we let the engine run, because
// that is a) likely to be done soon and b) may reduce the
// breakage caused by aborting a running sync.
//
// This check here covers the "waiting for reply" case.
if ((stepCmd == sysync::STEPCMD_RESENDDATA ||
stepCmd == sysync::STEPCMD_SENTDATA ||
stepCmd == sysync::STEPCMD_NEEDDATA) &&
2013-04-26 11:13:50 +02:00
flags.isAborted()) {
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG(NULL, "aborting before SessionStep() in %s as requested by script",
2009-12-15 18:19:14 +01:00
Step2String(stepCmd).c_str());
2009-03-11 12:59:25 +01:00
stepCmd = sysync::STEPCMD_ABORT;
}
2009-02-19 16:08:17 +01:00
// take next step, but don't abort twice: instead
// let the engine continue with its shutdown
2009-02-23 16:36:17 +01:00
if (stepCmd == sysync::STEPCMD_ABORT) {
2009-02-19 16:08:17 +01:00
if (aborting) {
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG(NULL, "engine already notified of abort request, reverting to %s",
2009-12-15 18:19:14 +01:00
Step2String(previousStepCmd).c_str());
2009-02-19 16:08:17 +01:00
stepCmd = previousStepCmd;
} else {
aborting = true;
}
}
2009-03-11 12:59:25 +01:00
// same for suspending
if (stepCmd == sysync::STEPCMD_SUSPEND) {
if (suspending) {
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG(NULL, "engine already notified of suspend request, reverting to %s",
2009-12-15 18:19:14 +01:00
Step2String(previousStepCmd).c_str());
2009-03-11 12:59:25 +01:00
stepCmd = previousStepCmd;
2009-06-26 07:55:48 +02:00
suspending++;
2009-03-11 12:59:25 +01:00
} else {
2009-06-26 07:55:48 +02:00
suspending++;
2009-03-11 12:59:25 +01:00
}
}
2009-07-22 10:44:06 +02:00
2014-01-31 17:30:04 +01:00
// Need to wait for setFrozen(false) or suspend/abort request.
// Such a call can come in via a D-Bus interface that we keep
// active by servicing the event loop inside GRunWhile().
//
// We freeze without notifying our peer. It will freeze itself
// eventually because we stop exchanging SyncML messages.
if (!aborting && !suspending && m_syncFreeze == SYNC_FREEZE_FROZEN) {
SE_LOG_DEBUG(NULL, "freezing sync");
2018-01-05 16:19:44 +01:00
GRunWhile([this, &flags] () { return this->m_syncFreeze == SYNC_FREEZE_FROZEN && flags.isNormal(); });
2014-01-31 17:30:04 +01:00
}
2009-09-21 13:50:19 +02:00
if (stepCmd == sysync::STEPCMD_NEEDDATA) {
// Engine already notified. Don't call it twice
// with this state, because it doesn't know how
// to handle this. Skip the SessionStep() call
// and wait for response.
} else {
2010-02-26 17:21:59 +01:00
if (getLogLevel() > 4) {
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG(NULL, "before SessionStep: %s", Step2String(stepCmd).c_str());
2010-02-26 17:21:59 +01:00
}
2009-09-21 13:50:19 +02:00
m_engine.SessionStep(session, stepCmd, &progressInfo);
2010-02-26 17:21:59 +01:00
if (getLogLevel() > 4) {
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG(NULL, "after SessionStep: %s", Step2String(stepCmd).c_str());
2010-02-26 17:21:59 +01:00
}
2009-12-07 07:26:01 +01:00
reportStepCmd(stepCmd);
2009-09-21 13:50:19 +02:00
}
2009-12-15 18:19:14 +01:00
if ( stepCmd = = sysync : : STEPCMD_SENDDATA & &
checkForScriptAbort ( session ) ) {
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG ( NULL , " aborting after SessionStep() in STEPCMD_SENDDATA as requested by script " ) ;
2009-12-15 18:19:14 +01:00
// Catch outgoing message and abort if requested by script.
// Report which sources are affected, based on their status code.
2010-02-12 17:07:59 +01:00
set < string > sources ;
2018-01-16 10:58:04 +01:00
for ( SyncSource * source : * m_sourceListPtr ) {
2009-12-15 18:19:14 +01:00
if ( source - > getStatus ( ) = = STATUS_UNEXPECTED_SLOW_SYNC ) {
2010-02-12 17:07:59 +01:00
string name = source - > getVirtualSource ( ) ;
if ( name . empty ( ) ) {
name = source - > getName ( ) ;
}
sources . insert ( name ) ;
2009-12-15 18:19:14 +01:00
}
}
2010-01-21 11:58:57 +01:00
string explanation = SyncReport : : slowSyncExplanation ( m_server ,
sources ) ;
if ( ! explanation . empty ( ) ) {
2009-12-15 18:19:14 +01:00
string sourceparam = boost : : join ( sources , " " ) ;
2013-04-08 19:17:36 +02:00
SE_LOG_ERROR ( NULL ,
2014-07-28 15:29:41 +02:00
" Aborting because of unexpected slow sync for datastore(s): %s " ,
2009-12-15 18:19:14 +01:00
sourceparam . c_str ( ) ) ;
2013-04-08 19:17:36 +02:00
SE_LOG_INFO ( NULL , " %s " , explanation . c_str ( ) ) ;
2009-12-15 18:19:14 +01:00
} else {
// we should not get here, but if we do, at least log something
2013-04-08 19:17:36 +02:00
SE_LOG_ERROR ( NULL , " aborting as requested by script " ) ;
2009-12-15 18:19:14 +01:00
}
stepCmd = sysync : : STEPCMD_ABORT ;
continue ;
} else if ( stepCmd = = sysync : : STEPCMD_SENDDATA & &
2013-04-26 11:13:50 +02:00
flags . isAborted ( ) ) {
2009-12-15 18:19:14 +01:00
// Catch outgoing message and abort if requested by user.
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG ( NULL , " aborting after SessionStep() in STEPCMD_SENDDATA as requested by user " ) ;
2009-12-15 18:19:14 +01:00
stepCmd = sysync : : STEPCMD_ABORT ;
continue ;
} else if ( suspending = = 1 ) {
// During suspension we actually insert a STEPCMD_SUSPEND cmd.
// Restore the original step command here.
2009-06-26 07:55:48 +02:00
stepCmd = previousStepCmd ;
continue ;
}
2009-12-15 18:19:14 +01:00
2009-03-08 14:41:20 +01:00
switch ( stepCmd ) {
case sysync : : STEPCMD_OK :
// no progress info, call step again
stepCmd = sysync : : STEPCMD_STEP ;
break ;
case sysync : : STEPCMD_PROGRESS :
// new progress info to show
// Check special case of interactive display alert
if ( progressInfo . eventtype = = sysync : : PEV_DISPLAY100 ) {
// alert 100 received from remote, message text is in
// SessionKey's "displayalert" field
SharedKey sessionKey = m_engine . OpenSessionKey ( session ) ;
// get message from server to display
s = m_engine . GetStrValue ( sessionKey ,
" displayalert " ) ;
displayServerMessage ( s ) ;
} else {
switch ( progressInfo . targetID ) {
case sysync : : KEYVAL_ID_UNKNOWN :
case 0 /* used with PEV_SESSIONSTART?! */ :
displaySyncProgress ( sysync : : TProgressEventEnum ( progressInfo . eventtype ) ,
progressInfo . extra1 ,
progressInfo . extra2 ,
progressInfo . extra3 ) ;
2011-02-16 09:37:56 +01:00
if ( progressInfo . eventtype = = sysync : : PEV_SESSIONEND & &
! status ) {
// remember sync result
status = SyncMLStatus ( progressInfo . extra1 ) ;
}
2009-03-08 14:41:20 +01:00
break ;
2010-05-04 15:36:00 +02:00
default : {
// specific for a certain sync source:
// find it...
SyncSource * source = m_sourceListPtr - > lookupBySynthesisID ( progressInfo . targetID ) ;
if ( source ) {
sync: less verbose output, shorter runtime
For each incoming change, one INFO line with "received x[/out of y]"
was printed, immediately followed by another line with total counts
"added x, updated y, removed z". For each outgoing change, a "sent
x[/out of y]" was printed.
In addition, these changes were forwarded to the D-Bus server where a
"percent complete" was calculated and broadcasted to clients. All of
that caused a very high overhead for every single change, even if the
actual logging was off. The syncevo-dbus-server was constantly
consuming CPU time during a sync when it should have been mostly idle.
To avoid this overhead, the updated received/sent numbers that come
from the Synthesis engine are now cached and only processed when done
with a SyncML message or some other event happens (whatever happens
first).
To keep the implementation simple, the "added x, updated y, removed z"
information is ignored completely and no longer appears in the output.
As a result, syncevo-dbus-server is now almost completely idle during
a running sync with no log output. Such a sync involving 10000 contacts
was sped up from 37s to 26s total runtime.
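The caching described in the commit message above can be sketched independently of the Synthesis engine. This is a minimal sketch, not the actual implementation: `FakeSource` is a hypothetical stand-in for `SyncSource::getTotalNumItemsReceived()`, and the helper mirrors the `needResults`/`numItemsReceived` logic used later in this function.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a SyncSource whose received-item counter
// is bumped by the engine for every incoming change.
struct FakeSource {
    int received = 0;
};

// Returns true when no source made progress since the cached snapshot,
// i.e. it is safe to emit the (expensive) progress output now.
// Otherwise it only updates the cache and stays quiet.
bool updateCache(std::vector<int> &cache, const std::vector<FakeSource> &sources)
{
    if (cache.size() < sources.size()) {
        cache.resize(sources.size(), 0);
    }
    bool needResults = true;
    for (size_t i = 0; i < sources.size(); i++) {
        if (cache[i] != sources[i].received) {
            cache[i] = sources[i].received; // remember new count, defer output
            needResults = false;
        }
    }
    return needResults;
}
```

Calling this once per engine step instead of logging per item is what keeps the process idle between SyncML messages.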
2013-07-11 11:46:07 +02:00
displaySourceProgress ( * source ,
SyncSourceEvent ( sysync : : TProgressEventEnum ( progressInfo . eventtype ) ,
progressInfo . extra1 ,
progressInfo . extra2 ,
progressInfo . extra3 ) ,
false ) ;
2010-05-04 15:36:00 +02:00
} else {
2014-04-02 14:57:56 +02:00
Exception : : throwError ( SE_HERE , std : : string ( " unknown target " ) + s ) ;
2009-02-01 16:16:16 +01:00
}
2010-05-04 15:36:00 +02:00
target . reset ( ) ;
2009-03-08 14:41:20 +01:00
break ;
2009-02-01 16:16:16 +01:00
}
2010-05-04 15:36:00 +02:00
}
2009-03-08 14:41:20 +01:00
}
stepCmd = sysync : : STEPCMD_STEP ;
break ;
case sysync : : STEPCMD_ERROR :
// error, terminate (should not happen, as status is
// already checked above)
break ;
case sysync : : STEPCMD_RESTART :
// make sure connection is closed and will be re-opened for next request
// tbd: close communication channel if still open to make sure it is
// re-opened for the next request
stepCmd = sysync : : STEPCMD_STEP ;
2009-07-22 10:44:06 +02:00
m_retries = 0 ;
2009-03-08 14:41:20 +01:00
break ;
case sysync : : STEPCMD_SENDDATA : {
SyncSource: optional support for asynchronous insert/update/delete
The wrapper around the actual operation checks if the operation
returned an error or result code (traditional behavior). If not, it
expects a ContinueOperation instance, remembers it and calls it when
the same operation gets called again for the same item.
For add/insert, "same item" is detected based on the KeyH address,
which must not change. For delete, the item local ID is used.
Pre- and post-signals are called exactly once, before the first call
and after the last call of the item.
ContinueOperation is a simple boost::function pointer for now. The
Synthesis engine itself is not able to force completion of the
operation, it just polls. This can lead to many empty messages with
just an Alert inside, thus triggering the "endless loop" protection,
which aborts the sync.
We overcome this limitation in the SyncEvolution layer above the
Synthesis engine: first, we flush pending operations before starting
network IO. This is a good place to batch together all pending
operations. Second, to overcome the "endless loop" problem, we force
a waiting for completion if the last message already was empty. If
that happened, we are done with items and should start sending our
responses.
Binding a function which returns the traditional TSyError still works
because it gets copied transparently into the boost::variant that the
wrapper expects, so no other code in SyncSource or backends needs to
be adapted. Enabling the use of LOCERR_AGAIN in the utility classes
and backends will follow in the next patches.
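The wrapper behavior described above can be sketched as follows. This is a hedged sketch, not the real SyncSource code: `OperationWrapper`, the `-1` stand-in for `LOCERR_AGAIN`, and the use of `std::variant` (instead of the original `boost::variant`) are illustrative assumptions; only the general shape (status code or continuation, keyed by item address) comes from the commit message.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <variant>

using TSyError = int;                        // Synthesis-style status code
using ContinueOperation = std::function<TSyError()>;
// An operation either finishes with a status (traditional behavior)
// or hands back a continuation to be polled on the next call.
using OperationResult = std::variant<TSyError, ContinueOperation>;

// Hypothetical wrapper: remembers the continuation per item key and
// re-invokes it when the engine calls again for the same item.
class OperationWrapper {
    std::map<const void *, ContinueOperation> m_pending;
public:
    TSyError step(const void *itemKey, const std::function<OperationResult()> &op) {
        auto it = m_pending.find(itemKey);
        // Retry of a pending item polls its continuation; a new item
        // runs the operation itself. A plain TSyError return converts
        // transparently into the variant, as described above.
        OperationResult res = (it != m_pending.end()) ? it->second() : op();
        if (auto *status = std::get_if<TSyError>(&res)) {
            m_pending.erase(itemKey);        // done, report final status
            return *status;
        }
        m_pending[itemKey] = std::get<ContinueOperation>(res);
        return -1;                           // stand-in for LOCERR_AGAIN
    }
};
```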
2013-06-05 17:22:00 +02:00
// We'll be busy for a while with network IO, so give
2013-07-11 11:46:07 +02:00
// sources a chance to do some work in parallel and
// flush pending progress notifications.
2013-06-05 17:22:00 +02:00
if ( m_sourceListPtr ) {
bool needResults = true ;
if ( numItemsReceived . size ( ) < m_sourceListPtr - > size ( ) ) {
numItemsReceived . insert ( numItemsReceived . end ( ) ,
m_sourceListPtr - > size ( ) - numItemsReceived . size ( ) ,
0 ) ;
}
for ( size_t i = 0 ; i < numItemsReceived . size ( ) ; i + + ) {
SyncSource * source = ( * m_sourceListPtr - > getSourceSet ( ) ) [ i ] ;
int received = source - > getTotalNumItemsReceived ( ) ;
SE_LOG_DEBUG ( source - > getDisplayName ( ) , " total number of items received %d " ,
received ) ;
if ( numItemsReceived [ i ] ! = received ) {
numItemsReceived [ i ] = received ;
needResults = false ;
}
}
2018-01-16 10:58:04 +01:00
for ( SyncSource * source : * m_sourceListPtr ) {
2013-06-07 11:48:45 +02:00
source - > flushItemChanges ( ) ;
2013-06-05 17:22:00 +02:00
if ( needResults ) {
2013-06-07 11:48:45 +02:00
source - > finishItemChanges ( ) ;
2013-06-05 17:22:00 +02:00
}
2013-07-11 11:46:07 +02:00
displaySourceProgress ( * source , SyncSourceEvent ( ) , false ) ;
2013-06-05 17:22:00 +02:00
}
}
2009-03-08 14:41:20 +01:00
// send data to remote
SharedKey sessionKey = m_engine . OpenSessionKey ( session ) ;
2009-09-27 22:48:04 +02:00
if ( m_serverMode ) {
2009-11-13 05:31:06 +01:00
m_agent - > setURL ( " " ) ;
2009-09-27 22:48:04 +02:00
} else {
// use OpenSessionKey() and GetValue() to retrieve "connectURI"
// and "contenttype" to be used to send data to the server
s = m_engine . GetStrValue ( sessionKey ,
" connectURI " ) ;
2009-11-13 05:31:06 +01:00
m_agent - > setURL ( s ) ;
2009-09-27 22:48:04 +02:00
}
2009-03-08 14:41:20 +01:00
s = m_engine . GetStrValue ( sessionKey ,
" contenttype " ) ;
2009-11-13 05:31:06 +01:00
m_agent - > setContentType ( s ) ;
2009-03-08 14:41:20 +01:00
sessionKey . reset ( ) ;
2013-04-24 10:01:54 +02:00
sendStart = resendStart = Timespec : : monotonic ( ) ;
2010-01-05 07:44:19 +01:00
requestNum + + ;
2009-03-08 14:41:20 +01:00
// use GetSyncMLBuffer()/RetSyncMLBuffer() to access the data to be
// sent or have it copied into caller's buffer using
// ReadSyncMLBuffer(), then send it to the server
2009-09-16 05:24:55 +02:00
sendBuffer = m_engine . GetSyncMLBuffer ( session , true ) ;
2014-08-29 11:27:07 +02:00
if ( m_serverMode & & m_quitSync ) {
// When aborting prematurely, skip the server's
// last reply message and instead tell the client
// to quit.
m_agent - > setContentType ( " quitsync " ) ;
2018-01-30 17:00:24 +01:00
m_agent - > send ( nullptr , 0 ) ;
2014-08-29 11:27:07 +02:00
} else {
m_agent - > send ( sendBuffer . get ( ) , sendBuffer . size ( ) ) ;
}
2009-09-16 05:24:55 +02:00
stepCmd = sysync : : STEPCMD_SENTDATA ; // we have sent the data
break ;
}
case sysync : : STEPCMD_RESENDDATA : {
2013-04-08 19:17:36 +02:00
SE_LOG_INFO ( NULL , " resend previous message, retry #%d " , m_retries ) ;
2013-04-24 10:01:54 +02:00
resendStart = Timespec : : monotonic ( ) ;
2009-09-16 05:24:55 +02:00
/* We are resending previous message, just read from the
* previous buffer */
2009-11-13 05:31:06 +01:00
m_agent - > send ( sendBuffer . get ( ) , sendBuffer . size ( ) ) ;
2009-03-08 14:41:20 +01:00
stepCmd = sysync : : STEPCMD_SENTDATA ; // we have sent the data
break ;
}
case sysync : : STEPCMD_NEEDDATA :
2011-02-11 11:36:47 +01:00
if ( ! sendStart ) {
// no message sent yet, record start of wait for data
2013-04-24 10:01:54 +02:00
sendStart = Timespec : : monotonic ( ) ;
2011-02-11 11:36:47 +01:00
}
2009-11-13 05:31:06 +01:00
switch ( m_agent - > wait ( ) ) {
2009-03-08 14:41:20 +01:00
case TransportAgent : : ACTIVE :
2009-09-21 13:50:19 +02:00
// Still sending the data?! Don't change anything,
// skip SessionStep() above.
2009-02-01 16:16:16 +01:00
break ;
2009-09-21 06:20:03 +02:00
2009-09-16 05:24:55 +02:00
case TransportAgent : : TIME_OUT : {
2013-04-24 10:01:54 +02:00
double duration = ( Timespec : : monotonic ( ) - sendStart ) . duration ( ) ;
2010-01-14 12:09:33 +01:00
// HTTP SyncML servers cannot resend a HTTP POST
// reply. Other server transports could in theory
// resend, but don't have the necessary D-Bus APIs
// (MB #6370).
2010-03-15 13:22:34 +01:00
// Same if() as below for FAILED.
2010-01-14 12:09:33 +01:00
if ( m_serverMode | |
2014-01-17 14:40:27 +01:00
! m_retryInterval | | duration + 0.9 > = m_retryDuration | | requestNum = = 1 ) {
2013-04-08 19:17:36 +02:00
SE_LOG_INFO ( NULL ,
2009-09-21 11:55:02 +02:00
" Transport giving up after %d retries and %ld:%02ldmin " ,
m_retries ,
2013-04-24 10:01:54 +02:00
( long ) duration / 60 ,
( long ) duration % 60 ) ;
2009-09-27 22:41:57 +02:00
SE_THROW_EXCEPTION ( TransportException , " timeout, retry period exceeded " ) ;
2009-07-28 07:21:27 +02:00
} else {
2013-04-24 10:01:54 +02:00
// Timeout must have been due to retryInterval having passed, resend
// immediately.
2009-09-21 06:20:03 +02:00
m_retries + + ;
2009-07-28 07:21:27 +02:00
stepCmd = sysync : : STEPCMD_RESENDDATA ;
}
2009-07-22 10:44:06 +02:00
break ;
2009-09-16 05:24:55 +02:00
}
2009-03-31 15:25:47 +02:00
case TransportAgent : : GOT_REPLY : {
2009-03-08 14:41:20 +01:00
const char * reply ;
size_t replylen ;
2009-03-31 15:25:47 +02:00
string contentType ;
2009-11-13 05:31:06 +01:00
m_agent - > getReply ( reply , replylen , contentType ) ;
2009-03-31 15:25:47 +02:00
// sanity check for reply: if known at all, it must be either XML or WBXML
if ( contentType . empty ( ) | |
contentType . find ( " application/vnd.syncml+wbxml " ) ! = contentType . npos | |
contentType . find ( " application/vnd.syncml+xml " ) ! = contentType . npos ) {
// put answer received earlier into SyncML engine's buffer
2009-07-28 07:21:27 +02:00
m_retries = 0 ;
sendBuffer . reset ( ) ;
2009-03-31 15:25:47 +02:00
m_engine . WriteSyncMLBuffer ( session ,
reply ,
replylen ) ;
2009-09-27 22:48:04 +02:00
if ( m_serverMode ) {
SharedKey sessionKey = m_engine . OpenSessionKey ( session ) ;
m_engine . SetStrValue ( sessionKey ,
" contenttype " ,
contentType ) ;
}
2009-03-31 15:25:47 +02:00
stepCmd = sysync : : STEPCMD_GOTDATA ; // we have received response data
2009-09-21 06:20:03 +02:00
break ;
2014-08-29 11:27:07 +02:00
} else if ( contentType = = " quitsync " ) {
SE_LOG_DEBUG ( NULL , " server is asking us to quit the sync session " ) ;
// Fake "done" events for each active source.
2018-01-16 10:58:04 +01:00
for ( SyncSource * source : * m_sourceListPtr ) {
2014-08-29 11:27:07 +02:00
if ( source - > getFinalSyncMode ( ) ! = SYNC_NONE ) {
displaySourceProgress ( * source ,
SyncSourceEvent ( sysync : : PEV_SYNCEND , 0 , 0 , 0 ) ,
true ) ;
}
}
m_quitSync = true ;
break ;
2009-03-31 15:25:47 +02:00
} else {
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG ( NULL , " unexpected content type '%s' in reply, %d bytes: \n %.*s " ,
2009-03-31 15:25:47 +02:00
contentType . c_str ( ) , ( int ) replylen , ( int ) replylen , reply ) ;
2013-06-10 22:25:20 +02:00
SE_LOG_ERROR ( NULL , " unexpected reply from peer; might be a temporary problem, try again later " ) ;
2009-09-21 06:20:03 +02:00
} //fall through to network failure case
}
/* A network error usually fails quickly, so an immediate retry is
 * unlikely to help. Sleep here manually before retrying; the sleep
 * time is calculated so that the message sending interval equals
 * m_retryInterval.
 */
case TransportAgent : : FAILED : {
2011-02-16 09:50:43 +01:00
// Send might have failed because of abort or
// suspend request.
2013-04-26 11:13:50 +02:00
if ( flags . isSuspended ( ) ) {
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG ( NULL , " suspending after TransportAgent::FAILED as requested by user " ) ;
2011-02-16 09:50:43 +01:00
stepCmd = sysync : : STEPCMD_SUSPEND ;
break ;
2013-04-26 11:13:50 +02:00
} else if ( flags . isAborted ( ) ) {
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG ( NULL , " aborting after TransportAgent::FAILED as requested by user " ) ;
2011-02-16 09:50:43 +01:00
stepCmd = sysync : : STEPCMD_ABORT ;
break ;
}
2013-04-24 10:01:54 +02:00
Timespec curTime = Timespec : : monotonic ( ) ;
double duration = ( curTime - sendStart ) . duration ( ) ;
double resendDelay = m_retryInterval - ( curTime - resendStart ) . duration ( ) ;
if ( resendDelay < 0 ) {
resendDelay = 0 ;
}
// Similar if() as above for TIME_OUT. In addition, we must check that
// the next resend won't happen after the retryDuration, because then
2013-05-16 11:09:11 +02:00
// we might as well give up immediately. Include some fuzz factor
// in case we woke up slightly too early.
2010-01-14 12:09:33 +01:00
if ( m_serverMode | |
2014-01-17 14:40:27 +01:00
! m_retryInterval | | duration + resendDelay + 0.9 > = m_retryDuration | | requestNum = = 1 ) {
2013-04-08 19:17:36 +02:00
SE_LOG_INFO ( NULL ,
2009-09-21 11:55:02 +02:00
" Transport giving up after %d retries and %ld:%02ldmin " ,
m_retries ,
2013-04-24 10:01:54 +02:00
( long ) duration / 60 ,
( long ) duration % 60 ) ;
2009-09-27 22:41:57 +02:00
SE_THROW_EXCEPTION ( TransportException , " transport failed, retry period exceeded " ) ;
2009-09-21 06:20:03 +02:00
} else {
2013-04-24 10:01:54 +02:00
// Resend after having ensured that the retryInterval is over.
if ( resendDelay > 0 ) {
if ( Sleep ( resendDelay ) > 0 ) {
2013-04-26 11:13:50 +02:00
if ( flags . isSuspended ( ) ) {
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG ( NULL , " suspending after premature exit from sleep() caused by user suspend " ) ;
2009-09-16 05:24:55 +02:00
stepCmd = sysync : : STEPCMD_SUSPEND ;
} else {
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG ( NULL , " aborting after premature exit from sleep() caused by user abort " ) ;
2009-09-16 05:24:55 +02:00
stepCmd = sysync : : STEPCMD_ABORT ;
}
2009-09-21 06:20:03 +02:00
break ;
}
2009-09-25 17:56:54 +02:00
}
2009-09-21 06:20:03 +02:00
m_retries + + ;
stepCmd = sysync : : STEPCMD_RESENDDATA ;
2009-03-31 15:25:47 +02:00
}
2009-02-01 16:16:16 +01:00
break ;
2009-03-31 15:25:47 +02:00
}
local sync: kill syncevo-local-sync with SIGTERM
Shutting down syncevo-local-sync in a timely manner when
aborting is hard: the process might be stuck in a blocking
call which cannot be made to check the abort request (blocking
libneon, activesyncd client library, ...).
The best that can be done is to let the process be killed by the
SIGTERM. To have some trace of that, catch the signal and log the
signal; there's a slight risk that the logging system is in an
inconsistent state, but overall that risk is minor.
Because syncevo-local-sync catches SIGINT, ForkExec::stop() must send
SIGTERM in addition to SIGINT. To suppress redundant and misleading
ERROR messages when the bad child status is handled, the
ForkExecParent remembers that itself asked the child to stop and only
treats unexpected "killed by signal" results as error.
The local transport must call that stop() in its cancel(). It enters
the "canceled" state which prevents all further communication with the
child, in particular waiting for the child sync report; doing that
would produce another redundant error message about "child exited
without sending report".
Calling stop() in the local transport's shutdown() is no longer
possible, because it would kill the child right away. Before it simply
had no effect, because SIGINT was ignored. This points towards an
unsolved problem: how long should the parent wait for the child after
the sync is done? If the child gets stuck hard after sending its last
message, the parent currently waits forever until the user aborts.
In the sync event loop the caller of the transport must recognize
CANCELED as something which might be desired and thus should not be
logged as ERROR. That way the Synthesis engine is called one more time
with STEPCMD_ABORT also in those cases where the transport itself
detected the abort request first.
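The stop-and-remember behavior described above can be sketched roughly as follows. `ForkExecParentSketch` is a hypothetical name and the injectable kill function is a testing convenience, not part of the real ForkExecParent API; only the behavior (SIGTERM in addition to SIGINT, and treating an expected "killed by signal" status as non-error) comes from the commit message.

```cpp
#include <cassert>
#include <signal.h>
#include <sys/types.h>

// Hypothetical sketch: send SIGINT for a clean shutdown plus SIGTERM
// for children (like syncevo-local-sync) which catch SIGINT, and
// remember that the kill was requested so that a later "killed by
// signal" exit status is not reported as ERROR.
class ForkExecParentSketch {
    pid_t m_child;
    bool m_stopRequested = false;
public:
    explicit ForkExecParentSketch(pid_t child) : m_child(child) {}

    // killFn is injectable only so the logic can be exercised without
    // signaling a real process; the default is the POSIX kill().
    void stop(int (*killFn)(pid_t, int) = ::kill) {
        m_stopRequested = true;
        killFn(m_child, SIGINT);
        killFn(m_child, SIGTERM);
    }

    // Called when waitpid() reports that the child died from a signal.
    bool isExpectedSignalExit(int sig) const {
        return m_stopRequested && (sig == SIGINT || sig == SIGTERM);
    }
};
```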
2012-01-20 15:28:54 +01:00
case TransportAgent : : CANCELED :
// Send might have failed because of abort or
// suspend request.
2013-04-26 11:13:50 +02:00
if ( flags . isSuspended ( ) ) {
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG ( NULL , " suspending after TransportAgent::CANCELED as requested by user " ) ;
2012-01-20 15:28:54 +01:00
stepCmd = sysync : : STEPCMD_SUSPEND ;
break ;
2013-04-26 11:13:50 +02:00
} else if ( flags . isAborted ( ) ) {
2013-04-08 19:17:36 +02:00
SE_LOG_DEBUG ( NULL , " aborting after TransportAgent::CANCELED as requested by user " ) ;
2012-01-20 15:28:54 +01:00
stepCmd = sysync : : STEPCMD_ABORT ;
break ;
}
// not sure exactly why it is canceled
SE_THROW_EXCEPTION_STATUS ( BadSynthesisResult ,
" transport canceled " ,
sysync : : LOCERR_USERABORT ) ;
break ;
2009-03-08 14:41:20 +01:00
default :
stepCmd = sysync : : STEPCMD_TRANSPFAIL ; // communication with server failed
2009-02-01 16:16:16 +01:00
break ;
}
}
2012-05-08 13:54:24 +02:00
// Don't tell engine to abort when it already did.
if ( aborting & & stepCmd = = sysync : : STEPCMD_ABORT ) {
stepCmd = sysync : : STEPCMD_DONE ;
}
2009-03-11 12:59:25 +01:00
previousStepCmd = stepCmd ;
2009-02-01 16:16:16 +01:00
// loop until session done or aborted with error
2009-03-23 16:13:45 +01:00
} catch (const BadSynthesisResult &result) {
    if (result.result() == sysync::LOCERR_USERABORT && aborting) {
2013-04-08 19:17:36 +02:00
        SE_LOG_INFO(NULL, "Aborted as requested.");
2009-03-23 16:13:45 +01:00
        stepCmd = sysync::STEPCMD_DONE;
    } else if (result.result() == sysync::LOCERR_USERSUSPEND && suspending) {
2013-04-08 19:17:36 +02:00
        SE_LOG_INFO(NULL, "Suspended as requested.");
2009-03-23 16:13:45 +01:00
        stepCmd = sysync::STEPCMD_DONE;
    } else if (aborting) {
        // aborting very early can lead to results different from LOCERR_USERABORT
        // => don't treat this as error
2013-04-08 19:17:36 +02:00
        SE_LOG_INFO(NULL, "Aborted with unexpected result (%d)",
2009-03-23 16:13:45 +01:00
                    static_cast<int>(result.result()));
        stepCmd = sysync::STEPCMD_DONE;
    } else {
2009-10-06 17:22:47 +02:00
        Exception::handle(&status);
2013-04-08 19:17:36 +02:00
        SE_LOG_DEBUG(NULL, "aborting after catching fatal error");
2012-05-08 13:54:24 +02:00
        // Don't tell engine to abort when it already did.
        stepCmd = aborting ? sysync::STEPCMD_DONE : sysync::STEPCMD_ABORT;
2009-03-23 16:13:45 +01:00
    }
2009-02-01 16:16:16 +01:00
} catch (...) {
2009-10-06 17:22:47 +02:00
    Exception::handle(&status);
2013-04-08 19:17:36 +02:00
    SE_LOG_DEBUG(NULL, "aborting after catching fatal error");
2012-05-08 13:54:24 +02:00
    // Don't tell engine to abort when it already did.
    stepCmd = aborting ? sysync::STEPCMD_DONE : sysync::STEPCMD_ABORT;
2009-02-01 16:16:16 +01:00
}
2009-02-23 16:36:17 +01:00
} while (stepCmd != sysync::STEPCMD_DONE && stepCmd != sysync::STEPCMD_ERROR);
2009-02-20 19:20:08 +01:00
2009-10-07 17:57:38 +02:00
// If we get here without error, then close down connection normally.
// Otherwise destruct the agent without further communication.
2013-04-26 11:13:50 +02:00
if (!status && !flags.isAborted()) {
2009-10-07 17:57:38 +02:00
    try {
2009-11-13 05:31:06 +01:00
        m_agent->shutdown();
2009-09-27 22:48:04 +02:00
        // TODO: implement timeout for peers which fail to respond
2013-04-26 11:13:50 +02:00
        while (!flags.isAborted() &&
2009-11-13 05:31:06 +01:00
               m_agent->wait(true) == TransportAgent::ACTIVE) {
            // TODO: allow aborting the sync here
2009-10-07 17:57:38 +02:00
        }
    } catch (...) {
        status = handleException();
    }
}
2013-05-16 11:10:59 +02:00
// Let session shut down before auto-destructing anything else
// (like our signal blocker). This may take a while, because it
// may involve shutting down the helper background thread which
// opened our local datastore.
SE_LOG_DEBUG(NULL, "closing session");
2014-01-31 17:30:04 +01:00
// setFreeze() no longer has an effect and returns false from now on.
m_syncFreeze = SYNC_FREEZE_NONE;
2014-09-03 14:44:59 +02:00
m_initialMessage.reset();
2013-05-16 11:10:59 +02:00
sessionSentinel.reset();
2014-09-03 14:44:59 +02:00
sendBuffer.reset();
2013-05-16 11:10:59 +02:00
session.reset();
SE_LOG_DEBUG(NULL, "session closed");
2009-02-20 19:20:08 +01:00
return status;
2009-02-01 16:16:16 +01:00
}
support local sync (BMC #712)
Local sync is configured with a new syncURL = local://<context> where
<context> identifies the set of databases to synchronize with. The
URI of each source in the config identifies the source in that context
to synchronize with.
The databases in that context run a SyncML session as client. The
config itself is for a server. Reversing these roles is possible by
putting the config into the other context.
A sync is started by the server side, via the new LocalTransportAgent.
That agent forks, sets up the client side, then passes messages
back and forth via stream sockets. Stream sockets are useful because
unexpected peer shutdown can be detected.
Running the server side requires a few changes:
- do not send a SAN message, the client will start the
message exchange based on the config
- wait for that message before doing anything
The client side is more difficult:
- Per-peer config nodes do not exist in the target context.
They are stored in a hidden .<context> directory inside
the server config tree. This depends on the new "registering nodes
in the tree" feature. All nodes are hidden, because users
are not meant to edit any of them. Their name is intentionally
chosen like traditional nodes so that removing the config
also removes the new files.
- All relevant per-peer properties must be copied from the server
config (log level, printing changes, ...); they cannot be set
differently.
Because two separate SyncML sessions are used, we end up with
two normal session directories and log files.
The implementation is not complete yet:
- no glib support, so cannot be used in syncevo-dbus-server
- no support for CTRL-C and abort
- no interactive password entry for target sources
- unexpected slow syncs are detected on the client side, but
not reported properly on the server side
2010-07-31 18:28:53 +02:00
string SyncContext::getSynthesisDatadir()
{
2012-12-04 15:38:03 +01:00
    if (isEphemeral() && m_sourceListPtr) {
2014-07-22 16:04:03 +02:00
        // Suppress writing in libsynthesis binfile client.
        return "/dev/null";
2012-12-04 15:38:03 +01:00
    } else if (m_localSync && !m_serverMode) {
2010-07-31 18:28:53 +02:00
        return m_localClientRootPath + "/.synthesis";
    } else {
        return getRootPath() + "/.synthesis";
    }
}
2009-09-14 12:44:27 +02:00
SyncMLStatus SyncContext::handleException()
{
    SyncMLStatus res = Exception::handle();
    return res;
}
2009-02-01 16:16:16 +01:00
2009-10-05 14:49:32 +02:00
void SyncContext::status()
2007-11-08 22:22:52 +01:00
{
2012-06-05 14:57:32 +02:00
    checkConfig("status check");
2007-11-08 22:22:52 +01:00
2009-07-03 12:27:07 +02:00
    SourceList sourceList(*this, false);
2008-03-06 23:23:13 +01:00
    initSources(sourceList);
2013-07-29 13:57:46 +02:00
    PasswordConfigProperty::checkPasswords(getUserInterfaceNonNull(), *this,
                                           // Don't need sync passwords.
                                           PasswordConfigProperty::CHECK_PASSWORD_ALL & ~PasswordConfigProperty::CHECK_PASSWORD_SYNC,
                                           sourceList.getSourceNames());
2018-01-16 10:58:04 +01:00
    for (SyncSource *source: sourceList) {
2008-04-07 20:47:05 +02:00
        source->open();
    }
2007-11-08 22:22:52 +01:00
2009-04-29 16:55:31 +02:00
    SyncReport changes;
    checkSourceChanges(sourceList, changes);
    stringstream out;
    changes.prettyPrint(out,
                        SyncReport::WITHOUT_SERVER |
                        SyncReport::WITHOUT_CONFLICTS |
                        SyncReport::WITHOUT_REJECTS |
                        SyncReport::WITH_TOTAL);
2013-04-08 19:17:36 +02:00
    SE_LOG_INFO(NULL, "Local item changes:\n%s",
2009-04-29 16:55:31 +02:00
                out.str().c_str());
2010-03-01 15:34:26 +01:00
    sourceList.accessSession(getLogDir());
2013-04-08 22:43:07 +02:00
    Logger::instance().setLevel(Logger::INFO);
2007-11-08 22:22:52 +01:00
    string prevLogdir = sourceList.getPrevLogdir();
    bool found = access(prevLogdir.c_str(), R_OK|X_OK) == 0;
    if (found) {
2012-01-09 18:33:39 +01:00
        if (!m_quiet && getPrintChanges()) {
            try {
                sourceList.setPath(prevLogdir);
2018-01-30 17:00:24 +01:00
                sourceList.dumpDatabases("current", nullptr);
2010-10-29 16:00:50 +02:00
                sourceList.dumpLocalChanges("", "after", "current", "");
2012-01-09 18:33:39 +01:00
            } catch (...) {
                Exception::handle();
2010-10-29 16:00:50 +02:00
            }
2007-11-08 22:22:52 +01:00
        }
    } else {
2013-04-08 19:17:36 +02:00
        SE_LOG_SHOW(NULL, "Previous log directory not found.");
2011-01-18 15:07:46 +01:00
        if (getLogDir().empty()) {
2013-04-08 19:17:36 +02:00
            SE_LOG_SHOW(NULL, "Enable the 'logdir' option and synchronize to use this feature.");
2007-11-08 22:22:52 +01:00
        }
    }
}
2009-04-15 15:58:05 +02:00
2009-10-05 14:49:32 +02:00
void SyncContext::checkStatus(SyncReport &report)
2009-04-29 16:55:31 +02:00
{
2012-06-05 14:57:32 +02:00
    checkConfig("status check");
2009-04-29 16:55:31 +02:00
2009-07-03 12:27:07 +02:00
    SourceList sourceList(*this, false);
2009-04-29 16:55:31 +02:00
    initSources(sourceList);
2013-07-29 13:57:46 +02:00
    PasswordConfigProperty::checkPasswords(getUserInterfaceNonNull(), *this,
                                           // Don't need sync passwords.
                                           PasswordConfigProperty::CHECK_PASSWORD_ALL & ~PasswordConfigProperty::CHECK_PASSWORD_SYNC,
                                           sourceList.getSourceNames());
2018-01-16 10:58:04 +01:00
    for (SyncSource *source: sourceList) {
2009-04-29 16:55:31 +02:00
        source->open();
    }
    checkSourceChanges(sourceList, report);
}
2009-04-23 16:47:07 +02:00
static void logRestoreReport(const SyncReport &report, bool dryrun)
{
    if (!report.empty()) {
        stringstream out;
2009-04-29 16:55:31 +02:00
        report.prettyPrint(out, SyncReport::WITHOUT_SERVER|SyncReport::WITHOUT_CONFLICTS|SyncReport::WITH_TOTAL);
2013-04-08 19:17:36 +02:00
        SE_LOG_INFO(NULL, "Item changes %s applied locally during restore:\n%s",
2009-04-23 16:47:07 +02:00
                    dryrun ? "to be" : "that were",
                    out.str().c_str());
2013-04-08 19:17:36 +02:00
        SE_LOG_INFO(NULL, "The same incremental changes will be applied to the server during the next sync.");
        SE_LOG_INFO(NULL, "Use --sync refresh-from-client to replace the complete data on the server.");
2009-04-23 16:47:07 +02:00
    }
}
2009-10-05 14:49:32 +02:00
void SyncContext::checkSourceChanges(SourceList &sourceList, SyncReport &changes)
2009-04-29 16:55:31 +02:00
{
2018-01-30 17:00:24 +01:00
    changes.setStart(time(nullptr));
2018-01-16 10:58:04 +01:00
    for (SyncSource *source: sourceList) {
2011-01-17 20:37:27 +01:00
        SyncSourceReport local;
redesigned SyncSource base class + API
The main motivation for this change is that it allows the implementor
of a backend to choose the implementations for the different aspects
of a datasource (change tracking, item import/export, logging, ...)
independently of each other. For example, change tracking via revision
strings can now be combined with exchanging data with the Synthesis
engine via a single string (the traditional method in SyncEvolution)
and with direct access to the Synthesis field list (now possible for
the first time).
The new backend API is based on the concept of providing
implementations for certain functionality via function objects instead
of implementing certain virtual methods. The advantage is that
implementors can define their own, custom interfaces and mix and match
implementations of the different groups of functionality.
Logging (see SyncSourceLogging in a later commit) can be done by
wrapping some arbitrary other item import/export function objects
(decorator design pattern).
The class hierarchy is now this:
- SyncSourceBase: interface for common utility code, all other
classes are derived from it and thus can use that code
- SyncSource: base class which implements SyncSourceBase and
hooks a datasource into the SyncEvolution core;
its "struct Operations" holds the function objects which
can be implemented in different ways
- TestingSyncSource: combines some of the following classes
into an interface that is expected by the client-test
program; backends only have to derive from (and implement this)
if they want to use the automated testing
- TrackingSyncSource: provides the same functionality as
before (change tracking via revision strings, item import/export
as string) in a single interface; the description of the pure
virtual methods are duplicated so that developers can go through
this class and find everything they need to know to implement
it
The following classes contain the code that was previously
found in the EvolutionSyncSource base class. Implementors
can derive from them and call the init() methods to inherit
and activate the functionality:
- SyncSourceSession: binds Synthesis session callbacks to
virtual methods beginSync(), endSync()
- SyncSourceChanges: implements Synthesis item tracking callbacks
with set of LUIDs that the user of the class has to fill
- SyncSourceDelete: binds Synthesis delete callback to
virtual method
- SyncSourceRaw: read and write items in the backends format,
used for testing and backup/restore
- SyncSourceSerialize: exchanges items with Synthesis engine
using a string representation of the data; this is how
EvolutionSyncSource has traditionally worked, so much of the
same virtual methods are now in this class
- SyncSourceRevisions: utility class which does change tracking
via some kind of "revision" string which changes each time
an item is modified; this code was previously in the
TrackingSyncSource
2009-08-25 09:27:46 +02:00
        if (source->getOperations().m_checkStatus) {
            source->getOperations().m_checkStatus(local);
2011-01-17 20:37:27 +01:00
        } else {
            // no information available
            local.setItemStat(SyncSourceReport::ITEM_LOCAL,
                              SyncSourceReport::ITEM_ADDED,
                              SyncSourceReport::ITEM_TOTAL,
                              -1);
            local.setItemStat(SyncSourceReport::ITEM_LOCAL,
                              SyncSourceReport::ITEM_UPDATED,
                              SyncSourceReport::ITEM_TOTAL,
                              -1);
            local.setItemStat(SyncSourceReport::ITEM_LOCAL,
                              SyncSourceReport::ITEM_REMOVED,
                              SyncSourceReport::ITEM_TOTAL,
                              -1);
            local.setItemStat(SyncSourceReport::ITEM_LOCAL,
                              SyncSourceReport::ITEM_ANY,
                              SyncSourceReport::ITEM_TOTAL,
                              -1);
        }
        changes.addSyncSourceReport(source->getName(), local);
2009-04-29 16:55:31 +02:00
    }
2018-01-30 17:00:24 +01:00
    changes.setEnd(time(nullptr));
2009-04-29 16:55:31 +02:00
}
2009-12-15 18:19:14 +01:00
bool SyncContext::checkForScriptAbort(SharedSession session)
{
    try {
        SharedKey sessionKey = m_engine.OpenSessionKey(session);
        SharedKey contextKey = m_engine.OpenKeyByPath(sessionKey, "/sessionvars");
        bool abort = m_engine.GetInt32Value(contextKey, "delayedabort");
        return abort;
2020-03-02 13:24:24 +01:00
    } catch (const NoSuchKey &) {
2009-12-15 18:19:14 +01:00
        // this is necessary because the session might already have
        // been closed, which removes the variable
        return false;
2020-03-02 13:24:24 +01:00
    } catch (const BadSynthesisResult &) {
2009-12-15 18:19:14 +01:00
        return false;
    }
}
2009-10-05 14:49:32 +02:00
void SyncContext::restore(const string &dirname, RestoreDatabase database)
2009-04-23 16:47:07 +02:00
{
2012-06-05 14:57:32 +02:00
    checkConfig("restore");
2009-04-23 16:47:07 +02:00
2009-07-03 12:27:07 +02:00
    SourceList sourceList(*this, false);
2010-03-01 15:34:26 +01:00
    sourceList.accessSession(dirname.c_str());
2013-04-08 22:43:07 +02:00
    Logger::instance().setLevel(Logger::INFO);
2009-04-23 16:47:07 +02:00
    initSources(sourceList);
2013-07-29 13:57:46 +02:00
    PasswordConfigProperty::checkPasswords(getUserInterfaceNonNull(), *this,
                                           // Don't need sync passwords.
                                           PasswordConfigProperty::CHECK_PASSWORD_ALL & ~PasswordConfigProperty::CHECK_PASSWORD_SYNC,
                                           sourceList.getSourceNames());
2009-04-23 16:47:07 +02:00
    string datadump = database == DATABASE_BEFORE_SYNC ? "before" : "after";
2018-01-16 10:58:04 +01:00
    for (SyncSource *source: sourceList) {
2009-12-22 09:47:31 +01:00
        // fake a source alert event
sync: less verbose output, shorter runtime
For each incoming change, one INFO line with "received x[/out of y]"
was printed, immediately followed by another line with total counts
"added x, updated y, removed z". For each outgoing change, a "sent
x[/out of y]" was printed.
In addition, these changes were forwarded to the D-Bus server where a
"percent complete" was calculated and broadcasted to clients. All of
that caused a very high overhead for every single change, even if the
actual logging was off. The syncevo-dbus-server was constantly
consuming CPU time during a sync when it should have been mostly idle.
To avoid this overhead, the updated received/sent numbers that come
from the Synthesis engine are now cached and only processed when done
with a SyncML message or some other event happens (whatever happens
first).
To keep the implementation simple, the "added x, updated y, removed z"
information is ignored completely and no longer appears in the output.
As a result, syncevo-dbus-server is now almost completely idle during
a running sync with no log output. Such a sync involving 10000 contacts
was sped up from 37s to 26s total runtime.
2013-07-11 11:46:07 +02:00
        displaySourceProgress(*source, SyncSourceEvent(sysync::PEV_ALERTED, -1, 0, 0), true);
2009-04-23 16:47:07 +02:00
        source->open();
    }
2012-01-09 18:33:39 +01:00
    if (!m_quiet && getPrintChanges()) {
2018-01-30 17:00:24 +01:00
        sourceList.dumpDatabases("current", nullptr);
SyncML server: delayed checking of sources (MB #7710)
With this patch, SyncML server sources are only opened() and their
data dumped when a client really uses them. As before, sources are
only enabled in the server if their sync mode is not "disabled". This
tolerates sources which cannot be instantiated because their "type" is
not supported.
The patch changes the SourceList and its methods so that they can do
the database dumps and comparisons for a single source at a
time. SourceList tracks which of its sources were dumped before the
sync and uses that information at the end to produce the "after sync"
comparison.
That "after sync" comparison was a reduced copy of the
dumpLocalChanges() source code. The copy was replaced with a suitably
parameterized call to dumpLocalChanges(), which became easy after
adding the "oldSession" parameter in a recent patch. That output now
is as follows:
-------------------------> snip <-----------------------------------
Changes applied during synchronization:
+---------------|-----------------------|-----------------------|-CON-+
| | LOCAL | REMOTE | FLI |
| Source | NEW | MOD | DEL | ERR | NEW | MOD | DEL | ERR | CTS |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| addressbook | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| two-way, 0 KB sent by client, 0 KB received |
| item(s) in database backup: 20 before sync, 20 after it |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| calendar | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| two-way, 0 KB sent by client, 0 KB received |
| item(s) in database backup: 20 before sync, 20 after it |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| start Wed Feb 10 16:38:15 2010, duration 0:02min |
| synchronization completed successfully |
+---------------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
Data modified locally during sync:
*** addressbook ***
no changes
*** calendar ***
no changes
-------------------------> snip <-----------------------------------
Previously the last heading was "Changes applied to client during
synchronization", which is wrong for the server (it is not a
client) and did not properly distinguish between item and data
changes (items may be changed without affecting the set of data,
as in removing one item and adding it with the same content).
In a server, the "*** <source> ***" part is only printed for active
sources, whereas the table always contains all sources with sync mode
!= "disabled". If we had progress events for the server, it should be
more obvious that some sources were not really used during the
sync. Alternatively we could also remove them from the report.
Also fixed several other such "to server/client" messages. They were
written from the perspective of a client and were wrong when running
as server. Using "remotely" and "locally" instead works on both client
and server.
2010-02-10 17:47:24 +01:00
        sourceList.dumpLocalChanges(dirname, "current", datadump, "",
                                    "Data changes to be applied locally during restore:\n",
2009-04-23 16:47:07 +02:00
                                    "CLIENT_TEST_LEFT_NAME='current data' "
                                    "CLIENT_TEST_RIGHT_NAME='after restore' "
                                    "CLIENT_TEST_REMOVED='to be removed' "
                                    "CLIENT_TEST_ADDED='to be added' ");
    }
    SyncReport report;
    try {
2018-01-16 10:58:04 +01:00
        for (SyncSource *source: sourceList) {
2009-04-23 16:47:07 +02:00
            SyncSourceReport sourcereport;
            try {
2013-07-11 11:46:07 +02:00
                displaySourceProgress(*source, SyncSourceEvent(sysync::PEV_SYNCSTART, 0, 0, 0), true);
2009-04-23 16:47:07 +02:00
                sourceList.restoreDatabase(*source,
                                           datadump,
                                           m_dryrun,
                                           sourcereport);
2013-07-11 11:46:07 +02:00
                displaySourceProgress(*source, SyncSourceEvent(sysync::PEV_SYNCEND, 0, 0, 0), true);
2009-04-23 16:47:07 +02:00
                report.addSyncSourceReport(source->getName(), sourcereport);
            } catch (...) {
                sourcereport.recordStatus(STATUS_FATAL);
                report.addSyncSourceReport(source->getName(), sourcereport);
                throw;
            }
        }
    } catch (...) {
        logRestoreReport(report, m_dryrun);
        throw;
    }
    logRestoreReport(report, m_dryrun);
}
2009-10-05 14:49:32 +02:00
void SyncContext::getSessions(vector<string> &dirs)
2009-04-15 15:58:05 +02:00
{
2018-01-16 17:17:34 +01:00
    make_weak_shared::make<LogDir>(*this)->previousLogdirs(dirs);
2009-04-15 15:58:05 +02:00
}
2009-04-15 21:03:26 +02:00
2009-12-03 10:37:00 +01:00
string SyncContext::readSessionInfo(const string &dir, SyncReport &report)
2009-04-15 21:03:26 +02:00
{
2018-01-16 17:17:34 +01:00
    auto logging = make_weak_shared::make<LogDir>(*this);
Logging: thread-safe
Logging must be thread-safe, because the glib log callback may be
called from arbitrary threads. This becomes more important with EDS
3.8, because it shifts the execution of synchronous calls into
threads.
Thread-safe logging will also be required for running the Synthesis
engine multithreaded, to overlap SyncML client communication with
preparing the sources.
To achieve this, the core Logging module protects its global data with
a recursive mutex. A recursive mutex is used because logging calls
themselves may be recursive, so ensuring single-lock semantic would be
hard.
Ref-counted boost pointers are used to track usage of Logger
instances. This allows removal of an instance from the logging stack
while it may still be in use. Destruction then will be delayed until
the last user of the instance drops it. The instance itself must be
prepared to handle this.
The Logging mutex is available to users of the Logging module. Code
which holds the logging mutex should not lock any other mutex, to
avoid deadlocks. The new code is a bit fuzzy on that, because it calls
other modules (glib, Synthesis engine) while holding the mutex. If
that becomes a problem, then the mutex can be unlocked, at the risk of
leading to reordered log messages in different channels (see
ServerLogger).
Making all loggers follow the new rules uses different
approaches.
Loggers like the one in the local transport child which use a parent
logger and an additional ref-counted class like the D-Bus helper keep
a weak reference to the helper and lock it before use. If it is gone
already, the second logging part is skipped. This is the recommended
approach.
In cases where introducing ref-counting for the second class would
have been too intrusive (Server and SessionHelper), a fake
boost::shared_ptr without a destructor is used as an intermediate step
towards the recommended approach. To avoid race conditions while the
instance these fake pointers refer to destructs, an explicit
"remove()" method is necessary which must hold the Logging
mutex. Using the potentially removed pointer must do the same. Such
fake ref-counted Loggers cannot be used as parent logger of other
loggers, because then remove() would not be able to drop the last
reference to the fake boost::shared_ptr.
Loggers with fake boost::shared_ptr must keep a strong reference,
because no-one else does. The goal is to turn this into weak
references eventually.
LogDir must protect concurrent access to m_report and the Synthesis
engine.
The LogRedirectLogger assumes that it is still the active logger while
disabling itself. The remove() callback method will always be invoked
before removing a logger from the stack.
2013-04-09 21:32:35 +02:00
    logging->openLogdir(dir);
    logging->readReport(report);
    return logging->getPeerNameFromLogdir(dir);
2009-04-15 21:03:26 +02:00
}
2009-07-09 18:58:21 +02:00
2010-02-18 10:24:05 +01:00
#ifdef ENABLE_UNIT_TESTS
/**
 * This class uses LogDirTest as scratch directory.
2011-05-05 14:15:55 +02:00
 * LogDirTest/[file_event|file_contact]_[one|two|empty] contain different
2010-02-18 10:24:05 +01:00
 * sets of items for use in a FileSyncSource.
 *
 * With that setup and a fake SyncContext it is possible to simulate
 * sessions and test the resulting logdirs.
 */
2013-09-23 20:10:42 +02:00
class LogDirTest : public CppUnit::TestFixture
2010-02-18 10:24:05 +01:00
{
2013-09-23 20:10:42 +02:00
    class LogContext : public SyncContext, public Logger
    {
    public:
        LogContext() :
            SyncContext("nosuchconfig@nosuchcontext")
        {}
        ostringstream m_out;
        /** capture output produced while test ran */
        void messagev(const MessageOptions &options,
                      const char *format,
                      va_list args)
        {
            std::string str = StringPrintfV(format, args);
            m_out << '[' << levelToStr(options.m_level) << ']' << str;
            if (!boost::ends_with(str, "\n")) {
                m_out << std::endl;
            }
        }
    };
2018-01-16 17:17:34 +01:00
    std::shared_ptr<LogContext> m_logContext;
2013-09-23 20:10:42 +02:00
2010-02-18 10:24:05 +01:00
public:
    LogDirTest() :
        m_maxLogDirs(10)
2010-03-26 10:13:35 +01:00
    {
command line: cleaned up output
The user-visible part of this change is that command line output now
uses the same [ERROR/INFO] prefixes as the rest of SyncEvolution,
instead of "Error:". Several messages were split into [ERROR] and
[INFO] parts on separate lines. Multi-line messages with such a prefix
now have the prefix at the start of each line. Full sentences start
with capital letters.
All usage errors related to the synopsis of the command line now
include the synopsis, without the detailed documentation of all
options. Some of those errors dumped the full documentation, which was
way too much information and pushed the actual synopsis off the
screen. Some other errors did not include usage information at all.
All output still goes to stdout, stderr is not used at all. Should be
changed in a separate patch, because currently error messages during
operations like "--export -" get mixed with the result of the
operation.
Technically the output handling was simplified. All output is printed
via the logging system, instead of using a mixture of logging and
streaming into std::cout. The advantage is that it will be easier to
redirect all regular output inside the syncevo-dbus-helper to the
parent. In particular, the following code could be removed:
- the somewhat hacky std::streambuf->logging bridge code (CmdlineStreamBuf)
- SyncContext set/getOutput()
- ostream constructor parameters for Cmdline and derived classes
The new code uses SE_LOG_SHOW() to produce output without prefix. Each
call ends at the next line, regardless whether the string ends in a
newline or not. The LoggerStdout was adapted to behave according to
that expectation, and it inserts the line prefix at the start of each
line - probably didn't matter before, because hardly any (no?!)
message had line breaks.
Because of this implicit newline in the logging code, some newlines
become redundant; SE_LOG_SHOW("") is used to insert an empty line
where needed. Calls to the logging system are minimized if possible by
assembling output in buffers first, to reduce overhead and to adhere
to the "one call per message" guideline.
Testing was adapted accordingly. It's a bit stricter now, too, because
it checks the entire error output instead of just the last line. The
previous use of Cmdline ostreams to capture output from the class was
replaced with loggers which hook into the logging system while the
test runs and store the output. Same with SyncContext testing.
Conflicts:
src/dbus/server/cmdline-wrapper.h
2012-04-11 10:22:57 +02:00
}
    ~LogDirTest() {
2010-03-26 10:13:35 +01:00
}
2010-02-18 10:24:05 +01:00
void setUp ( ) {
static const char * vcard_1 =
" BEGIN:VCARD \n "
" VERSION:2.1 \n "
" TITLE:tester \n "
" FN:John Doe \n "
" N:Doe;John;;; \n "
" X-MOZILLA-HTML:FALSE \n "
" TEL;TYPE=WORK;TYPE=VOICE:business 1 \n "
" EMAIL:john.doe@work.com \n "
" X-AIM:AIM JOHN \n "
" END:VCARD \n " ;
static const char * vcard_2 =
" BEGIN:VCARD \n "
" VERSION:2.1 \n "
" TITLE:developer \n "
" FN:John Doe \n "
" N:Doe;John;;; \n "
" X-MOZILLA-HTML:TRUE \n "
" BDAY:2006-01-08 \n "
" END:VCARD \n " ;
        static const char *ical_1 =
            "BEGIN:VCALENDAR\n"
            "PRODID:-//Ximian//NONSGML Evolution Calendar//EN\n"
            "VERSION:2.0\n"
            "METHOD:PUBLISH\n"
            "BEGIN:VEVENT\n"
            "SUMMARY:phone meeting\n"
            "DTEND:20060406T163000Z\n"
            "DTSTART:20060406T160000Z\n"
            "UID:1234567890!@#$%^&*()<>@dummy\n"
            "DTSTAMP:20060406T211449Z\n"
            "LAST-MODIFIED:20060409T213201\n"
            "CREATED:20060409T213201\n"
            "LOCATION:calling from home\n"
            "DESCRIPTION:let's talk\n"
            "CLASS:PUBLIC\n"
            "TRANSP:OPAQUE\n"
            "SEQUENCE:1\n"
            "BEGIN:VALARM\n"
            "DESCRIPTION:alarm\n"
            "ACTION:DISPLAY\n"
            "TRIGGER;VALUE=DURATION;RELATED=START:-PT15M\n"
            "END:VALARM\n"
            "END:VEVENT\n"
            "END:VCALENDAR\n";
        static const char *ical_2 =
            "BEGIN:VCALENDAR\n"
            "PRODID:-//Ximian//NONSGML Evolution Calendar//EN\n"
            "VERSION:2.0\n"
            "METHOD:PUBLISH\n"
            "BEGIN:VEVENT\n"
            "SUMMARY:phone meeting\n"
            "DTEND:20060406T163000Z\n"
            "DTSTART:20060406T160000Z\n"
            "UID:1234567890!@#$%^&*()<>@dummy\n"
            "DTSTAMP:20060406T211449Z\n"
            "LAST-MODIFIED:20060409T213201\n"
            "CREATED:20060409T213201\n"
            "LOCATION:my office\n"
            "CATEGORIES:WORK\n"
            "DESCRIPTION:what the heck\\, let's even shout a bit\n"
            "CLASS:PUBLIC\n"
            "TRANSP:OPAQUE\n"
            "SEQUENCE:1\n"
            "END:VEVENT\n"
            "END:VCALENDAR\n";
        rm_r("LogDirTest");
        dump("file_event.one", "1", ical_1);
        dump("file_event.two", "1", ical_1);
        dump("file_event.two", "2", ical_2);
        mkdir_p(getLogData() + "/file_event.empty");
        dump("file_contact.one", "1", vcard_1);
        dump("file_contact.two", "1", vcard_1);
        dump("file_contact.two", "2", vcard_2);
        mkdir_p(getLogData() + "/file_contact.empty");

        mkdir_p(getLogDir());
        m_maxLogDirs = 0;

        // Suppress output by redirecting into LogContext::m_out.
        // It's not tested at the moment.
        m_logContext.reset(new LogContext);
        Logger::addLogger(m_logContext);
    }
    void tearDown() {
        Logger::removeLogger(m_logContext.get());
        m_logContext.reset();
    }
private:
    string getLogData() { return "LogDirTest/data"; }
    virtual InitStateString getLogDir() const { return "LogDirTest/cache/syncevolution"; }
    int m_maxLogDirs;

    void dump(const char *dir, const char *file, const char *data) {
        string name = getLogData();
        name += "/";
        name += dir;
        mkdir_p(name);
        name += "/";
        name += file;
        ofstream out(name.c_str());
        out << data;
    }
    CPPUNIT_TEST_SUITE(LogDirTest);
    CPPUNIT_TEST(testQuickCompare);
    CPPUNIT_TEST(testSessionNoChanges);
    CPPUNIT_TEST(testSessionChanges);
    CPPUNIT_TEST(testMultipleSessions);
    CPPUNIT_TEST(testExpire);
    CPPUNIT_TEST_SUITE_END();
    /**
     * Simulate a session involving one or more sources.
     *
     * @param changeServer  pretend that peer got changed
     * @param status        result of session
     * @param varargs       sourcename ("file_event"),
     *                      statebefore (nullptr for no dump, or suffix like "_one"),
     *                      stateafter (nullptr for same as before), ..., nullptr
     * @return logdir created for the session
     */
    string session(bool changeServer, SyncMLStatus status, ...) {
        Logger::Level level = Logger::instance().getLevel();
        SourceList list(*m_logContext, true);
        list.setLogLevel(SourceList::LOGGING_QUIET);
        SyncReport report;
        list.startSession("", m_maxLogDirs, 0, &report);
        va_list ap;
        va_start(ap, status);
        while (true) {
            const char *sourcename = va_arg(ap, const char *);
            if (!sourcename) {
                break;
            }
            const char *type = nullptr;
            if (!strcmp(sourcename, "file_event")) {
                type = "file:text/calendar:2.0";
            } else if (!strcmp(sourcename, "file_contact")) {
                type = "file:text/vcard:3.0";
            }
            CPPUNIT_ASSERT(type);
            string datadir = getLogData() + "/";
            auto source = SyncSource::createTestingSource(sourcename, type, true,
                                                          (string("file://") + datadir).c_str());
            datadir += sourcename;
            datadir += "_1";
            source->open();
            if (changeServer) {
                // fake one added item on server
                source->setItemStat(SyncSourceReport::ITEM_REMOTE,
                                    SyncSourceReport::ITEM_ADDED,
                                    SyncSourceReport::ITEM_TOTAL,
                                    1);
            }
            list.addSource(std::move(source));
            const char *before = va_arg(ap, const char *);
            const char *after = va_arg(ap, const char *);
            if (before) {
                // do a "before" dump after directing the source towards the desired data
                rm_r(datadir);
                CPPUNIT_ASSERT_EQUAL(0, symlink((string(sourcename) + before).c_str(),
                                                datadir.c_str()));
                list.syncPrepare(sourcename);
                if (after) {
                    rm_r(datadir);
                    CPPUNIT_ASSERT_EQUAL(0, symlink((string(sourcename) + after).c_str(),
                                                    datadir.c_str()));
                }
            }
        }
        list.syncDone(status, &report);
        va_end(ap);

        Logger::instance().setLevel(level);
        return list.getLogdir();
    }
    typedef vector<string> Sessions_t;

    // full paths to all sessions, sorted
    Sessions_t listSessions() {
        Sessions_t sessions;
        string logdir = getLogDir();
        ReadDir dirs(logdir);
        for (const string &dir: dirs) {
            sessions.push_back(RealPath(logdir + "/" + dir));
        }
        sort(sessions.begin(), sessions.end());
        return sessions;
    }
    void testQuickCompare() {
        // identical dirs => identical files
        CPPUNIT_ASSERT(!LogDir::haveDifferentContent("file_event",
                                                     getLogData(), "empty",
                                                     getLogData(), "empty"));
        CPPUNIT_ASSERT(!LogDir::haveDifferentContent("file_event",
                                                     getLogData(), "one",
                                                     getLogData(), "one"));
        CPPUNIT_ASSERT(!LogDir::haveDifferentContent("file_event",
                                                     getLogData(), "two",
                                                     getLogData(), "two"));
        // some files shared
        CPPUNIT_ASSERT(!system("cp -l -r LogDirTest/data/file_event.two LogDirTest/data/file_event.copy && rm LogDirTest/data/file_event.copy/2"));
        CPPUNIT_ASSERT(LogDir::haveDifferentContent("file_event",
                                                    getLogData(), "two",
                                                    getLogData(), "copy"));
        CPPUNIT_ASSERT(LogDir::haveDifferentContent("file_event",
                                                    getLogData(), "copy",
                                                    getLogData(), "one"));
    }
    void testSessionNoChanges() {
        ScopedEnvChange config("XDG_CONFIG_HOME", "LogDirTest/config");
        ScopedEnvChange cache("XDG_CACHE_HOME", "LogDirTest/cache");

        // simple session with no changes
        string dir = session(false, STATUS_OK, "file_event", ".one", ".one", (char *)0);
        Sessions_t sessions = listSessions();
        CPPUNIT_ASSERT_EQUAL((size_t)1, sessions.size());
        CPPUNIT_ASSERT_EQUAL(dir, sessions[0]);
        IniFileConfigNode status(dir, "status.ini", true);
        CPPUNIT_ASSERT(status.exists());
        CPPUNIT_ASSERT_EQUAL(string("1"), status.readProperty("source-file__event-backup-before").get());
        CPPUNIT_ASSERT_EQUAL(string("1"), status.readProperty("source-file__event-backup-after").get());
        CPPUNIT_ASSERT_EQUAL(string("200"), status.readProperty("status").get());
        CPPUNIT_ASSERT(!LogDir::haveDifferentContent("file_event",
                                                     dir, "before",
                                                     dir, "after"));
    }
    void testSessionChanges() {
        ScopedEnvChange config("XDG_CONFIG_HOME", "LogDirTest/config");
        ScopedEnvChange cache("XDG_CACHE_HOME", "LogDirTest/cache");

        // session with local changes
        string dir = session(false, STATUS_OK, "file_event", ".one", ".two", (char *)0);
        Sessions_t sessions = listSessions();
        CPPUNIT_ASSERT_EQUAL((size_t)1, sessions.size());
        CPPUNIT_ASSERT_EQUAL(dir, sessions[0]);
        IniFileConfigNode status(dir, "status.ini", true);
        CPPUNIT_ASSERT(status.exists());
        CPPUNIT_ASSERT_EQUAL(string("1"), status.readProperty("source-file__event-backup-before").get());
        CPPUNIT_ASSERT_EQUAL(string("2"), status.readProperty("source-file__event-backup-after").get());
        CPPUNIT_ASSERT_EQUAL(string("200"), status.readProperty("status").get());
        CPPUNIT_ASSERT(LogDir::haveDifferentContent("file_event",
                                                    dir, "before",
                                                    dir, "after"));
    }
    void testMultipleSessions() {
        ScopedEnvChange config("XDG_CONFIG_HOME", "LogDirTest/config");
        ScopedEnvChange cache("XDG_CACHE_HOME", "LogDirTest/cache");

        // two sessions, starting with 1 item, adding 1 during the sync, then
        // removing it again during the second
        string dir = session(false, STATUS_OK,
                             "file_event", ".one", ".two",
                             "file_contact", ".one", ".two",
                             (char *)0);
        {
            Sessions_t sessions = listSessions();
            CPPUNIT_ASSERT_EQUAL((size_t)1, sessions.size());
            CPPUNIT_ASSERT_EQUAL(dir, sessions[0]);
            IniFileConfigNode status(dir, "status.ini", true);
            CPPUNIT_ASSERT(status.exists());
            CPPUNIT_ASSERT_EQUAL(string("1"), status.readProperty("source-file__event-backup-before").get());
            CPPUNIT_ASSERT_EQUAL(string("2"), status.readProperty("source-file__event-backup-after").get());
            CPPUNIT_ASSERT_EQUAL(string("1"), status.readProperty("source-file__contact-backup-before").get());
            CPPUNIT_ASSERT_EQUAL(string("2"), status.readProperty("source-file__contact-backup-after").get());
            CPPUNIT_ASSERT_EQUAL(string("200"), status.readProperty("status").get());
            CPPUNIT_ASSERT(LogDir::haveDifferentContent("file_event",
                                                        dir, "before",
                                                        dir, "after"));
            CPPUNIT_ASSERT(LogDir::haveDifferentContent("file_contact",
                                                        dir, "before",
                                                        dir, "after"));
        }

        string seconddir = session(false, STATUS_OK,
                                   "file_event", ".two", ".one",
                                   "file_contact", ".two", ".one",
                                   (char *)0);
        {
            Sessions_t sessions = listSessions();
            CPPUNIT_ASSERT_EQUAL((size_t)2, sessions.size());
            CPPUNIT_ASSERT_EQUAL(dir, sessions[0]);
            CPPUNIT_ASSERT_EQUAL(seconddir, sessions[1]);
            IniFileConfigNode status(seconddir, "status.ini", true);
            CPPUNIT_ASSERT(status.exists());
            CPPUNIT_ASSERT_EQUAL(string("2"), status.readProperty("source-file__event-backup-before").get());
            CPPUNIT_ASSERT_EQUAL(string("1"), status.readProperty("source-file__event-backup-after").get());
            CPPUNIT_ASSERT_EQUAL(string("2"), status.readProperty("source-file__contact-backup-before").get());
            CPPUNIT_ASSERT_EQUAL(string("1"), status.readProperty("source-file__contact-backup-after").get());
            CPPUNIT_ASSERT_EQUAL(string("200"), status.readProperty("status").get());
            CPPUNIT_ASSERT(LogDir::haveDifferentContent("file_event",
                                                        seconddir, "before",
                                                        seconddir, "after"));
            CPPUNIT_ASSERT(LogDir::haveDifferentContent("file_contact",
                                                        seconddir, "before",
                                                        seconddir, "after"));
        }

        CPPUNIT_ASSERT(!LogDir::haveDifferentContent("file_event",
                                                     dir, "after",
                                                     seconddir, "before"));
        CPPUNIT_ASSERT(!LogDir::haveDifferentContent("file_contact",
                                                     dir, "after",
                                                     seconddir, "before"));
    }
    void testExpire() {
        ScopedEnvChange config("XDG_CONFIG_HOME", "LogDirTest/config");
        ScopedEnvChange cache("XDG_CACHE_HOME", "LogDirTest/cache");

        string dirs[5];
        Sessions_t sessions;

        m_maxLogDirs = 1;
        // The latest session always must be preserved, even if it
        // is normally considered less important (no error in this case).
        dirs[0] = session(false, STATUS_FATAL, (char *)0);
        dirs[0] = session(false, STATUS_OK, (char *)0);
        sessions = listSessions();
        CPPUNIT_ASSERT_EQUAL((size_t)1, sessions.size());
        CPPUNIT_ASSERT_EQUAL(dirs[0], sessions[0]);

        // all things being equal, then expire the oldest session,
        // leaving us with two here
        m_maxLogDirs = 2;
        dirs[0] = session(false, STATUS_OK, (char *)0);
        dirs[1] = session(false, STATUS_OK, (char *)0);
        sessions = listSessions();
        CPPUNIT_ASSERT_EQUAL((size_t)2, sessions.size());
        CPPUNIT_ASSERT_EQUAL(dirs[0], sessions[0]);
        CPPUNIT_ASSERT_EQUAL(dirs[1], sessions[1]);

        // When syncing first file_event, then file_contact, both sessions
        // must be preserved despite m_maxLogDirs = 1, otherwise
        // we would lose the only existing backup of file_event.
        dirs[0] = session(false, STATUS_OK, "file_event", ".two", ".one", (char *)0);
        dirs[1] = session(false, STATUS_OK, "file_contact", ".two", ".one", (char *)0);
        sessions = listSessions();
        CPPUNIT_ASSERT_EQUAL((size_t)2, sessions.size());
        CPPUNIT_ASSERT_EQUAL(dirs[0], sessions[0]);
        CPPUNIT_ASSERT_EQUAL(dirs[1], sessions[1]);

        // after synchronizing both, we can expire both the old sessions
        m_maxLogDirs = 1;
        dirs[0] = session(false, STATUS_OK,
                          "file_event", ".two", ".one",
                          "file_contact", ".two", ".one",
                          (char *)0);
        sessions = listSessions();
        CPPUNIT_ASSERT_EQUAL((size_t)1, sessions.size());
        CPPUNIT_ASSERT_EQUAL(dirs[0], sessions[0]);

        // when doing multiple failed syncs without dumps, keep the sessions
        // which have database dumps
        m_maxLogDirs = 2;
        dirs[1] = session(false, STATUS_FATAL, (char *)0);
        dirs[1] = session(false, STATUS_FATAL, (char *)0);
        sessions = listSessions();
        CPPUNIT_ASSERT_EQUAL((size_t)2, sessions.size());
        CPPUNIT_ASSERT_EQUAL(dirs[0], sessions[0]);
        CPPUNIT_ASSERT_EQUAL(dirs[1], sessions[1]);

        // when doing syncs which don't change data, keep the sessions which
        // did change something: keep oldest backup because it created the
        // backups for the first time
        dirs[1] = session(false, STATUS_OK,
                          "file_event", ".one", ".one",
                          "file_contact", ".one", ".one",
                          (char *)0);
        dirs[1] = session(false, STATUS_OK,
                          "file_event", ".one", ".one",
                          "file_contact", ".one", ".one",
                          (char *)0);
        sessions = listSessions();
        CPPUNIT_ASSERT_EQUAL((size_t)2, sessions.size());
        CPPUNIT_ASSERT_EQUAL(dirs[0], sessions[0]);
        CPPUNIT_ASSERT_EQUAL(dirs[1], sessions[1]);

        // when making a change in each sync, we end up with the two
        // most recent sessions eventually: first change server,
        // then local
        dirs[1] = session(true, STATUS_OK,
                          "file_event", ".one", ".one",
                          "file_contact", ".one", ".one",
                          (char *)0);
        sessions = listSessions();
        CPPUNIT_ASSERT_EQUAL((size_t)2, sessions.size());
        CPPUNIT_ASSERT_EQUAL(dirs[0], sessions[0]);
        CPPUNIT_ASSERT_EQUAL(dirs[1], sessions[1]);
        dirs[0] = dirs[1];
        dirs[1] = session(false, STATUS_OK,
                          "file_event", ".one", ".two",
                          "file_contact", ".one", ".two",
                          (char *)0);
        sessions = listSessions();
        CPPUNIT_ASSERT_EQUAL((size_t)2, sessions.size());
        CPPUNIT_ASSERT_EQUAL(dirs[0], sessions[0]);
        CPPUNIT_ASSERT_EQUAL(dirs[1], sessions[1]);
    }
};

SYNCEVOLUTION_TEST_SUITE_REGISTRATION(LogDirTest);

#endif // ENABLE_UNIT_TESTS

SE_END_CXX