syncevolution/src/dbus/server/connection.cpp

/*
* Copyright (C) 2011 Intel Corporation
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) version 3.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
* 02110-1301 USA
*/
#include "server.h"
#include "connection.h"
#include "client.h"
#include <synthesis/san.h>
#include <syncevo/TransportAgent.h>
#include <syncevo/SyncContext.h>
using namespace GDBusCXX;
SE_BEGIN_CXX
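// Mark the connection as failed: record the first failure reason, pass it
// on to the session and the helper, notify the D-Bus client, and finally
// hand this Connection over to the server for delayed deletion.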
void Connection::failed(const std::string &reason)
{
SE_LOG_DEBUG(NULL, "Connection %s: failed: %s (old state %s)",
m_sessionID.c_str(),
reason.c_str(),
SessionCommon::ConnectionStateToString(m_state).c_str());
if (m_failure.empty()) {
m_failure = reason;
if (m_session) {
m_session->setStubConnectionError(reason);
}
}
// notify client
abort();
// ensure that state is failed
m_state = SessionCommon::FAILED;
// tell helper (again)
m_statusSignal(reason);
// A "failed" connection is dead, no further operations on it
// are allowed, in particular not the normal Connection.Close().
// Therefore remove ourselves.
//
// But don't delete ourselves while some code of the Connection is
// still running. Instead let the server do that as part of its event loop.
boost::shared_ptr<Connection> c = m_me.lock();
if (c) {
m_server.delayDeletion(c);
m_server.detach(this);
}
}
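// Build a human-readable description of the peer from its properties, in
// the form "description (id via transport transport_description)"; parts
// which are missing from the peer properties are simply left out.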
std::string Connection::buildDescription(const StringMap &peer)
{
StringMap::const_iterator
desc = peer.find("description"),
id = peer.find("id"),
trans = peer.find("transport"),
trans_desc = peer.find("transport_description");
std::string buffer;
buffer.reserve(256);
if (desc != peer.end()) {
buffer += desc->second;
}
if (id != peer.end() || trans != peer.end()) {
if (!buffer.empty()) {
buffer += " ";
}
buffer += "(";
if (id != peer.end()) {
buffer += id->second;
if (trans != peer.end()) {
buffer += " via ";
}
}
if (trans != peer.end()) {
buffer += trans->second;
if (trans_desc != peer.end()) {
buffer += " ";
buffer += trans_desc->second;
}
}
buffer += ")";
}
return buffer;
}
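// True for both plain SyncML content types (XML and WBXML).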
static bool IsSyncML(const std::string &messageType)
{
return messageType == TransportAgent::m_contentTypeSyncML ||
messageType == TransportAgent::m_contentTypeSyncWBXML;
}
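// Handle one message sent by the D-Bus peer for this connection. The
// caller must be a known client which owns the connection; how the
// message is processed depends on the current connection state.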
void Connection::process(const Caller_t &caller,
const GDBusCXX::DBusArray<uint8_t> &message,
const std::string &message_type)
{
SE_LOG_DEBUG(NULL, "Connection %s: D-Bus client %s sends %lu bytes, %s (old state %s)",
m_sessionID.c_str(),
caller.c_str(),
(unsigned long)message.first,
message_type.c_str(),
SessionCommon::ConnectionStateToString(m_state).c_str());
boost::shared_ptr<Client> client(m_server.findClient(caller));
if (!client) {
SE_THROW("unknown client");
}
boost::shared_ptr<Connection> myself =
boost::static_pointer_cast<Connection, Resource>(client->findResource(this));
if (!myself) {
SE_THROW("client does not own connection");
}
// any kind of error from now on terminates the connection
try {
switch (m_state) {
case SessionCommon::SETUP: {
std::string config;
std::string peerDeviceID;
bool serverMode = false;
bool serverAlerted = false;
// check message type, determine whether we act
// as client or server, choose config
if (message_type == "HTTP Config") {
// type used for testing, payload is config name
config.assign(reinterpret_cast<const char *>(message.second),
message.first);
} else if (message_type == TransportAgent::m_contentTypeServerAlertedNotificationDS) {
serverAlerted = true;
sysync::SanPackage san;
if (san.PassSan(const_cast<uint8_t *>(message.second), message.first, 2) || san.GetHeader()) {
// We are very tolerant regarding the content of the message.
// If it doesn't parse, try to do something useful anyway.
// Only for SAN 1.2; for SAN 1.0/1.1 we cannot be sure
// whether it is a SAN package or a normal sync package.
if (message_type == TransportAgent::m_contentTypeServerAlertedNotificationDS) {
config = "default";
SE_LOG_DEBUG(NULL, "SAN parsing failed, falling back to 'default' config");
}
} else { // Server alerted notification case
// Extract server ID and match it against a server
// configuration. Multiple different peers might use the
// same serverID ("PC Suite"), so check properties of our
// configs first before going back to the name itself.
std::string serverID = san.fServerID;
SyncConfig::ConfigList servers = SyncConfig::getConfigs();
BOOST_FOREACH(const SyncConfig::ConfigList::value_type &server,
servers) {
SyncConfig conf(server.first);
vector<string> urls = conf.getSyncURL();
BOOST_FOREACH (const string &url, urls) {
if (url == serverID) {
config = server.first;
break;
}
}
if (!config.empty()) {
break;
}
}
// for Bluetooth transports match against mac address.
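// Both the peer's "id" and the configured obex-bt:// URL may carry a
// trailing "+channel" suffix; it gets stripped so that only the
// Bluetooth addresses are compared.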
StringMap::const_iterator id = m_peer.find("id"),
trans = m_peer.find("transport");
if (trans != m_peer.end() && id != m_peer.end()) {
if (trans->second == "org.openobex.obexd") {
m_peerBtAddr = id->second.substr(0, id->second.find("+"));
BOOST_FOREACH(const SyncConfig::ConfigList::value_type &server,
servers) {
SyncConfig conf(server.first);
vector<string> urls = conf.getSyncURL();
BOOST_FOREACH (string &url, urls){
url = url.substr (0, url.find("+"));
SE_LOG_DEBUG(NULL, "matching against %s",url.c_str());
if (url.find ("obex-bt://") ==0 && url.substr(strlen("obex-bt://"), url.npos) == m_peerBtAddr) {
config = server.first;
break;
}
}
if (!config.empty()){
break;
}
}
}
}
if (config.empty()) {
BOOST_FOREACH(const SyncConfig::ConfigList::value_type &server,
servers) {
if (server.first == serverID) {
config = serverID;
break;
}
}
}
// create a default configuration name if none matched
if (config.empty()) {
config = serverID+"_"+getCurrentTime();
SE_LOG_DEBUG(NULL, "SAN Server ID '%s' unknown, falling back to automatically created '%s' config",
serverID.c_str(), config.c_str());
}
SE_LOG_DEBUG(NULL, "SAN sync with config %s", config.c_str());
m_SANContent.reset (new SANContent ());
// extract number of sources
int numSources = san.fNSync;
int syncType;
uint32_t contentType;
std::string serverURI;
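// A SAN message may enumerate the datastores which are to be synchronized.
// If it lists none, all configured datastores are synced with the mode
// taken from the SAN header; invalid entries are logged and skipped.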
if (!numSources) {
SE_LOG_DEBUG(NULL, "SAN message with no datastores, using selected modes");
// Synchronize all known sources with the default mode.
if (san.GetNthSync(0, syncType, contentType, serverURI)) {
SE_LOG_DEBUG(NULL, "SAN invalid header, using default modes");
} else if (syncType < SYNC_FIRST || syncType > SYNC_LAST) {
SE_LOG_DEBUG(NULL, "SAN invalid sync type %d, using default modes", syncType);
} else {
m_syncMode = PrettyPrintSyncMode(SyncMode(syncType), true);
SE_LOG_DEBUG(NULL, "SAN sync mode for all configured datastores: %s", m_syncMode.c_str());
}
} else {
for (int sync = 1; sync <= numSources; sync++) {
if (san.GetNthSync(sync, syncType, contentType, serverURI)) {
SE_LOG_DEBUG(NULL, "SAN invalid sync entry #%d", sync);
} else if (syncType < SYNC_FIRST || syncType > SYNC_LAST) {
SE_LOG_DEBUG(NULL, "SAN invalid sync type %d for entry #%d, ignoring entry", syncType, sync);
} else {
std::string syncMode = PrettyPrintSyncMode(SyncMode(syncType), true);
m_SANContent->m_syncType.push_back (syncMode);
m_SANContent->m_serverURI.push_back (serverURI);
m_SANContent->m_contentType.push_back (contentType);
}
}
}
}
// TODO: use the session ID set by the server if non-null
} else if (// relaxed checking for XML: ignore stuff like "; CHARSET=UTF-8"
IsSyncML(message_type.substr(0, message_type.find(';')))) {
// run a new SyncML session as server
serverMode = true;
if (m_peer.find("config") != m_peer.end() &&
!m_peer["config"].empty()) {
SE_LOG_DEBUG(NULL, "ignoring pre-chosen config '%s'",
m_peer["config"].c_str());
}
// peek into the data to extract the locURI = device ID,
// then use it to find the configuration
SyncContext::SyncMLMessageInfo info;
info = SyncContext::analyzeSyncMLMessage(reinterpret_cast<const char *>(message.second),
message.first,
message_type);
if (info.m_deviceID.empty()) {
// TODO: proper exception
SE_THROW("could not extract LocURI=deviceID from initial message");
}
BOOST_FOREACH(const SyncConfig::ConfigList::value_type &entry,
SyncConfig::getConfigs()) {
SyncConfig peer(entry.first);
if (info.m_deviceID == peer.getRemoteDevID()) {
config = entry.first;
SE_LOG_INFO(NULL, "matched %s against config %s (%s)",
info.toString().c_str(),
entry.first.c_str(),
entry.second.c_str());
// Stop searching. Other peer configs might have the same remoteDevID.
// We go with the first one found, which because of the sort order
// of getConfigs() ensures that "foo" is found before "foo.old".
break;
}
}
if (config.empty()) {
// TODO: proper exception
SE_THROW(string("no configuration found for ") + info.toString());
}
// identified peer, still need to abort previous sessions below
peerDeviceID = info.m_deviceID;
} else {
SE_THROW(StringPrintf("message type '%s' not supported for starting a sync", message_type.c_str()));
}
// run session as client or server
m_state = SessionCommon::PROCESSING;
m_session = Session::createSession(m_server,
peerDeviceID,
config,
m_sessionID);
m_session->activate();
if (serverMode) {
m_session->initServer(SharedBuffer(reinterpret_cast<const char *>(message.second),
message.first),
message_type);
}
m_session->setServerAlerted(serverAlerted);
m_session->setPriority(Session::PRI_CONNECTION);
m_session->setStubConnection(myself);
// this will be reset only when the connection shuts down okay
// or overwritten with the error given to us in
// Connection::close()
m_session->setStubConnectionError("closed prematurely");
// Now abort all earlier sessions, if necessary. The new
// session will be enqueued below and thus won't get
// killed. It also won't run unless all other sessions
// before it terminate, therefore we don't need to check
// for success.
if (!peerDeviceID.empty()) {
// TODO: On failure we should kill the connection (beware,
// it might go away before killing completes and/or
// fails - need to use shared pointer tracking).
//
// boost::shared_ptr<Connection> c = m_me.lock();
// if (!c) {
// SE_THROW("internal error: Connection::process() cannot lock its own instance");
// }
m_server.killSessionsAsync(peerDeviceID,
SimpleResult(SuccessCb_t(),
ErrorCb_t()));
}
m_server.enqueue(m_session);
break;
}
case SessionCommon::PROCESSING:
SE_THROW("protocol error: already processing a message");
break;
D-Bus server: fork/exec for sync, command line and restore operations This commit moves the blocking syncing, database restore and command line execution into a separate, short-lived process executing the syncevo-dbus-helper. The advantage is that the main syncevo-dbus-server remains responsive under all circumstances (fully asynchronous now) and suffers less from memory leaks and/or crashes during a sync. The core idea behind the new architecture is that Session remains the D-Bus facing side of a session. It continues to run inside syncevo-dbus-server and uses the syncevo-dbus-helper transparently via a custom D-Bus interface between the two processes. State changes of the helper are mirrored in the server. Later the helper might also be used multiple times in a Session. For example, anything related to loading backends should be moved into the helper (currently the "is config usable" check still runs in the syncevo-dbus-server and needs to load/initialize backends). The startup code of the helper already handles that (see boolean result of operation callback), but it is not used yet in practice. At the moment, only the helper provides a D-Bus API. It sends out signals when it needs information from the server. The server watches those and replies when ready. The helper monitors the connection to the parent and detects that it won't get an answer if that connection goes down. The problem of "helper died unexpectedly" is also handled, by not returning a D-Bus method reply until the requested operation is completed (different from the way how the public D-Bus API is defined!). The Connection class continues to use such a Session, as before. It's now fully asynchronous and exchanges messages with the helper via the Session class. Inside syncevo-dbus-server, boost::signals2 and the dbus-callbacks infrastructure for asynchronous methods execution are used heavily now. The glib event loop is entered exactly once and only left to shut down. Inside syncevo-dbus-helper, the event loop is entered only as needed. Password requests sent from syncevo-local-sync to syncevo-dbus-helper are handled asynchronously inside the event loop driven by the local transport. syncevo-dbus-helper and syncevo-local-sync are conceptually very similar. Should investigate whether a single executable can serve both functions. The AutoSyncManager was completely rewritten. The data structure is a lot simpler now (basically just a cache of transient information about a sync config and the relevant config properties that define auto syncing). The main work happens inside the schedule() call, which verifies whether a session can run and, if not possible for some reasons, ensures that it gets invoked again when that blocker is gone (timeout over, server idle, etc.). The new code also uses signals/slots instead of explicit coupling between the different classes. All code still lives inside the src/dbus/server directory. This simplifies checking differences in partly modified files like dbus-sync.cpp. A future commit will move the helper files. The syslog logger code is referenced by the server, but never used. This functionality needs further thought: - Make usage depend on command line option? Beware that test-dbus.py looks for the "ready to run" output and thus startup breaks when all output goes to syslog instead of stdout. - Redirect glib messages into syslog (done by LogRedirect, disabled when using LoggerSyslog)? The syncevo-dbus-server now sends the final "Session.StatusChanged done" signal immediately. 
The old implementation accidentally delayed sending that for 100 seconds. The revised test-dbus.py checks for more "session done" quit events to cover this fix. Only user-visible messages should have the INFO level in any of the helpers. Messages about starting and stopping processes are related to implementation details and thus should only have DEBUG level. The user also doesn't care about where the operation eventually runs. All messages related to it should be in INFO/DEBUG/ERROR messages without a process name. Therefore now syncevo-dbus-server logs with a process name (also makes some explicit argv[0] logging redundant; requires changes in test-dbus.py) and syncevo-dbus-helper doesn't. syncevo-local-sync is different from syncevo-dbus-helper: it produces user-relevant output (the other half of the local sync). It's output is carefully chosen so that the process name is something the user understands (target context) and output can be clearly related to one side or the other (for example, context names are included in the sync table). Output handling is based on the same idea as output handling in the syncevo-dbus-server: - Session registers itself as the top-most logger and sends SyncEvolution logging via D-Bus to the parent, which re-sends it with the right D-Bus object path as output of the session. - Output redirection catches all other output and feeds it back to the Session log handler, from where it goes via D-Bus to the parent. The advantage of this approach is that level information is made available directly to the parent and that message boundaries are preserved properly. stderr and stdout are redirected into the parent and logged there as error. Normally the child should not print anything. While it runs, LogRedirect inside it will capture output and log it internally. Anything reaching the parent thus must be from early process startup or shutdown. Almost all communication from syncevo-dbus-helper to syncevo-dbus-server is purely information for the syncevo-dbus-server; syncevo-dbus-helper doesn't care whether the signal can be delivered. The only exception is the information request, which must succeed. Instead of catching exceptions everywhere, the optional signals are declared as such in the EmitSignal template parameterization and no longer throw exceptions when something goes wrong. They also don't log anything, because that could lead to quite a lof of output.
2012-03-26 17:19:25 +02:00
case SessionCommon::WAITING:
m_incomingMsg = SharedBuffer(reinterpret_cast<const char *>(message.second),
message.first);
m_incomingMsgType = message_type;
D-Bus server: fork/exec for sync, command line and restore operations This commit moves the blocking syncing, database restore and command line execution into a separate, short-lived process executing the syncevo-dbus-helper. The advantage is that the main syncevo-dbus-server remains responsive under all circumstances (fully asynchronous now) and suffers less from memory leaks and/or crashes during a sync. The core idea behind the new architecture is that Session remains the D-Bus facing side of a session. It continues to run inside syncevo-dbus-server and uses the syncevo-dbus-helper transparently via a custom D-Bus interface between the two processes. State changes of the helper are mirrored in the server. Later the helper might also be used multiple times in a Session. For example, anything related to loading backends should be moved into the helper (currently the "is config usable" check still runs in the syncevo-dbus-server and needs to load/initialize backends). The startup code of the helper already handles that (see boolean result of operation callback), but it is not used yet in practice. At the moment, only the helper provides a D-Bus API. It sends out signals when it needs information from the server. The server watches those and replies when ready. The helper monitors the connection to the parent and detects that it won't get an answer if that connection goes down. The problem of "helper died unexpectedly" is also handled, by not returning a D-Bus method reply until the requested operation is completed (different from the way how the public D-Bus API is defined!). The Connection class continues to use such a Session, as before. It's now fully asynchronous and exchanges messages with the helper via the Session class. Inside syncevo-dbus-server, boost::signals2 and the dbus-callbacks infrastructure for asynchronous methods execution are used heavily now. The glib event loop is entered exactly once and only left to shut down. Inside syncevo-dbus-helper, the event loop is entered only as needed. Password requests sent from syncevo-local-sync to syncevo-dbus-helper are handled asynchronously inside the event loop driven by the local transport. syncevo-dbus-helper and syncevo-local-sync are conceptually very similar. Should investigate whether a single executable can serve both functions. The AutoSyncManager was completely rewritten. The data structure is a lot simpler now (basically just a cache of transient information about a sync config and the relevant config properties that define auto syncing). The main work happens inside the schedule() call, which verifies whether a session can run and, if not possible for some reasons, ensures that it gets invoked again when that blocker is gone (timeout over, server idle, etc.). The new code also uses signals/slots instead of explicit coupling between the different classes. All code still lives inside the src/dbus/server directory. This simplifies checking differences in partly modified files like dbus-sync.cpp. A future commit will move the helper files. The syslog logger code is referenced by the server, but never used. This functionality needs further thought: - Make usage depend on command line option? Beware that test-dbus.py looks for the "ready to run" output and thus startup breaks when all output goes to syslog instead of stdout. - Redirect glib messages into syslog (done by LogRedirect, disabled when using LoggerSyslog)? The syncevo-dbus-server now sends the final "Session.StatusChanged done" signal immediately. 
The old implementation accidentally delayed sending that for 100 seconds. The revised test-dbus.py checks for more "session done" quit events to cover this fix. Only user-visible messages should have the INFO level in any of the helpers. Messages about starting and stopping processes are related to implementation details and thus should only have DEBUG level. The user also doesn't care about where the operation eventually runs. All messages related to it should be in INFO/DEBUG/ERROR messages without a process name. Therefore now syncevo-dbus-server logs with a process name (also makes some explicit argv[0] logging redundant; requires changes in test-dbus.py) and syncevo-dbus-helper doesn't. syncevo-local-sync is different from syncevo-dbus-helper: it produces user-relevant output (the other half of the local sync). It's output is carefully chosen so that the process name is something the user understands (target context) and output can be clearly related to one side or the other (for example, context names are included in the sync table). Output handling is based on the same idea as output handling in the syncevo-dbus-server: - Session registers itself as the top-most logger and sends SyncEvolution logging via D-Bus to the parent, which re-sends it with the right D-Bus object path as output of the session. - Output redirection catches all other output and feeds it back to the Session log handler, from where it goes via D-Bus to the parent. The advantage of this approach is that level information is made available directly to the parent and that message boundaries are preserved properly. stderr and stdout are redirected into the parent and logged there as error. Normally the child should not print anything. While it runs, LogRedirect inside it will capture output and log it internally. Anything reaching the parent thus must be from early process startup or shutdown. Almost all communication from syncevo-dbus-helper to syncevo-dbus-server is purely information for the syncevo-dbus-server; syncevo-dbus-helper doesn't care whether the signal can be delivered. The only exception is the information request, which must succeed. Instead of catching exceptions everywhere, the optional signals are declared as such in the EmitSignal template parameterization and no longer throw exceptions when something goes wrong. They also don't log anything, because that could lead to quite a lof of output.
2012-03-26 17:19:25 +02:00
m_messageSignal(DBusArray<uint8_t>(m_incomingMsg.size(),
reinterpret_cast<uint8_t *>(m_incomingMsg.get())),
m_incomingMsgType);
m_state = SessionCommon::PROCESSING;
m_timeout.deactivate();
break;
D-Bus server: fork/exec for sync, command line and restore operations This commit moves the blocking syncing, database restore and command line execution into a separate, short-lived process executing the syncevo-dbus-helper. The advantage is that the main syncevo-dbus-server remains responsive under all circumstances (fully asynchronous now) and suffers less from memory leaks and/or crashes during a sync. The core idea behind the new architecture is that Session remains the D-Bus facing side of a session. It continues to run inside syncevo-dbus-server and uses the syncevo-dbus-helper transparently via a custom D-Bus interface between the two processes. State changes of the helper are mirrored in the server. Later the helper might also be used multiple times in a Session. For example, anything related to loading backends should be moved into the helper (currently the "is config usable" check still runs in the syncevo-dbus-server and needs to load/initialize backends). The startup code of the helper already handles that (see boolean result of operation callback), but it is not used yet in practice. At the moment, only the helper provides a D-Bus API. It sends out signals when it needs information from the server. The server watches those and replies when ready. The helper monitors the connection to the parent and detects that it won't get an answer if that connection goes down. The problem of "helper died unexpectedly" is also handled, by not returning a D-Bus method reply until the requested operation is completed (different from the way how the public D-Bus API is defined!). The Connection class continues to use such a Session, as before. It's now fully asynchronous and exchanges messages with the helper via the Session class. Inside syncevo-dbus-server, boost::signals2 and the dbus-callbacks infrastructure for asynchronous methods execution are used heavily now. The glib event loop is entered exactly once and only left to shut down. Inside syncevo-dbus-helper, the event loop is entered only as needed. Password requests sent from syncevo-local-sync to syncevo-dbus-helper are handled asynchronously inside the event loop driven by the local transport. syncevo-dbus-helper and syncevo-local-sync are conceptually very similar. Should investigate whether a single executable can serve both functions. The AutoSyncManager was completely rewritten. The data structure is a lot simpler now (basically just a cache of transient information about a sync config and the relevant config properties that define auto syncing). The main work happens inside the schedule() call, which verifies whether a session can run and, if not possible for some reasons, ensures that it gets invoked again when that blocker is gone (timeout over, server idle, etc.). The new code also uses signals/slots instead of explicit coupling between the different classes. All code still lives inside the src/dbus/server directory. This simplifies checking differences in partly modified files like dbus-sync.cpp. A future commit will move the helper files. The syslog logger code is referenced by the server, but never used. This functionality needs further thought: - Make usage depend on command line option? Beware that test-dbus.py looks for the "ready to run" output and thus startup breaks when all output goes to syslog instead of stdout. - Redirect glib messages into syslog (done by LogRedirect, disabled when using LoggerSyslog)? The syncevo-dbus-server now sends the final "Session.StatusChanged done" signal immediately. 
The old implementation accidentally delayed sending that for 100 seconds. The revised test-dbus.py checks for more "session done" quit events to cover this fix. Only user-visible messages should have the INFO level in any of the helpers. Messages about starting and stopping processes are related to implementation details and thus should only have DEBUG level. The user also doesn't care about where the operation eventually runs. All messages related to it should be in INFO/DEBUG/ERROR messages without a process name. Therefore now syncevo-dbus-server logs with a process name (also makes some explicit argv[0] logging redundant; requires changes in test-dbus.py) and syncevo-dbus-helper doesn't. syncevo-local-sync is different from syncevo-dbus-helper: it produces user-relevant output (the other half of the local sync). It's output is carefully chosen so that the process name is something the user understands (target context) and output can be clearly related to one side or the other (for example, context names are included in the sync table). Output handling is based on the same idea as output handling in the syncevo-dbus-server: - Session registers itself as the top-most logger and sends SyncEvolution logging via D-Bus to the parent, which re-sends it with the right D-Bus object path as output of the session. - Output redirection catches all other output and feeds it back to the Session log handler, from where it goes via D-Bus to the parent. The advantage of this approach is that level information is made available directly to the parent and that message boundaries are preserved properly. stderr and stdout are redirected into the parent and logged there as error. Normally the child should not print anything. While it runs, LogRedirect inside it will capture output and log it internally. Anything reaching the parent thus must be from early process startup or shutdown. Almost all communication from syncevo-dbus-helper to syncevo-dbus-server is purely information for the syncevo-dbus-server; syncevo-dbus-helper doesn't care whether the signal can be delivered. The only exception is the information request, which must succeed. Instead of catching exceptions everywhere, the optional signals are declared as such in the EmitSignal template parameterization and no longer throw exceptions when something goes wrong. They also don't log anything, because that could lead to quite a lof of output.
2012-03-26 17:19:25 +02:00
case SessionCommon::FINAL:
SE_THROW("protocol error: final reply sent, no further message processing possible");
D-Bus server: fork/exec for sync, command line and restore operations This commit moves the blocking syncing, database restore and command line execution into a separate, short-lived process executing the syncevo-dbus-helper. The advantage is that the main syncevo-dbus-server remains responsive under all circumstances (fully asynchronous now) and suffers less from memory leaks and/or crashes during a sync. The core idea behind the new architecture is that Session remains the D-Bus facing side of a session. It continues to run inside syncevo-dbus-server and uses the syncevo-dbus-helper transparently via a custom D-Bus interface between the two processes. State changes of the helper are mirrored in the server. Later the helper might also be used multiple times in a Session. For example, anything related to loading backends should be moved into the helper (currently the "is config usable" check still runs in the syncevo-dbus-server and needs to load/initialize backends). The startup code of the helper already handles that (see boolean result of operation callback), but it is not used yet in practice. At the moment, only the helper provides a D-Bus API. It sends out signals when it needs information from the server. The server watches those and replies when ready. The helper monitors the connection to the parent and detects that it won't get an answer if that connection goes down. The problem of "helper died unexpectedly" is also handled, by not returning a D-Bus method reply until the requested operation is completed (different from the way how the public D-Bus API is defined!). The Connection class continues to use such a Session, as before. It's now fully asynchronous and exchanges messages with the helper via the Session class. Inside syncevo-dbus-server, boost::signals2 and the dbus-callbacks infrastructure for asynchronous methods execution are used heavily now. The glib event loop is entered exactly once and only left to shut down. Inside syncevo-dbus-helper, the event loop is entered only as needed. Password requests sent from syncevo-local-sync to syncevo-dbus-helper are handled asynchronously inside the event loop driven by the local transport. syncevo-dbus-helper and syncevo-local-sync are conceptually very similar. Should investigate whether a single executable can serve both functions. The AutoSyncManager was completely rewritten. The data structure is a lot simpler now (basically just a cache of transient information about a sync config and the relevant config properties that define auto syncing). The main work happens inside the schedule() call, which verifies whether a session can run and, if not possible for some reasons, ensures that it gets invoked again when that blocker is gone (timeout over, server idle, etc.). The new code also uses signals/slots instead of explicit coupling between the different classes. All code still lives inside the src/dbus/server directory. This simplifies checking differences in partly modified files like dbus-sync.cpp. A future commit will move the helper files. The syslog logger code is referenced by the server, but never used. This functionality needs further thought: - Make usage depend on command line option? Beware that test-dbus.py looks for the "ready to run" output and thus startup breaks when all output goes to syslog instead of stdout. - Redirect glib messages into syslog (done by LogRedirect, disabled when using LoggerSyslog)? The syncevo-dbus-server now sends the final "Session.StatusChanged done" signal immediately. 
The old implementation accidentally delayed sending that for 100 seconds. The revised test-dbus.py checks for more "session done" quit events to cover this fix. Only user-visible messages should have the INFO level in any of the helpers. Messages about starting and stopping processes are related to implementation details and thus should only have DEBUG level. The user also doesn't care about where the operation eventually runs. All messages related to it should be in INFO/DEBUG/ERROR messages without a process name. Therefore now syncevo-dbus-server logs with a process name (also makes some explicit argv[0] logging redundant; requires changes in test-dbus.py) and syncevo-dbus-helper doesn't. syncevo-local-sync is different from syncevo-dbus-helper: it produces user-relevant output (the other half of the local sync). It's output is carefully chosen so that the process name is something the user understands (target context) and output can be clearly related to one side or the other (for example, context names are included in the sync table). Output handling is based on the same idea as output handling in the syncevo-dbus-server: - Session registers itself as the top-most logger and sends SyncEvolution logging via D-Bus to the parent, which re-sends it with the right D-Bus object path as output of the session. - Output redirection catches all other output and feeds it back to the Session log handler, from where it goes via D-Bus to the parent. The advantage of this approach is that level information is made available directly to the parent and that message boundaries are preserved properly. stderr and stdout are redirected into the parent and logged there as error. Normally the child should not print anything. While it runs, LogRedirect inside it will capture output and log it internally. Anything reaching the parent thus must be from early process startup or shutdown. Almost all communication from syncevo-dbus-helper to syncevo-dbus-server is purely information for the syncevo-dbus-server; syncevo-dbus-helper doesn't care whether the signal can be delivered. The only exception is the information request, which must succeed. Instead of catching exceptions everywhere, the optional signals are declared as such in the EmitSignal template parameterization and no longer throw exceptions when something goes wrong. They also don't log anything, because that could lead to quite a lof of output.
2012-03-26 17:19:25 +02:00
case SessionCommon::DONE:
SE_THROW("protocol error: connection closed, no further message processing possible");
break;
D-Bus server: fork/exec for sync, command line and restore operations This commit moves the blocking syncing, database restore and command line execution into a separate, short-lived process executing the syncevo-dbus-helper. The advantage is that the main syncevo-dbus-server remains responsive under all circumstances (fully asynchronous now) and suffers less from memory leaks and/or crashes during a sync. The core idea behind the new architecture is that Session remains the D-Bus facing side of a session. It continues to run inside syncevo-dbus-server and uses the syncevo-dbus-helper transparently via a custom D-Bus interface between the two processes. State changes of the helper are mirrored in the server. Later the helper might also be used multiple times in a Session. For example, anything related to loading backends should be moved into the helper (currently the "is config usable" check still runs in the syncevo-dbus-server and needs to load/initialize backends). The startup code of the helper already handles that (see boolean result of operation callback), but it is not used yet in practice. At the moment, only the helper provides a D-Bus API. It sends out signals when it needs information from the server. The server watches those and replies when ready. The helper monitors the connection to the parent and detects that it won't get an answer if that connection goes down. The problem of "helper died unexpectedly" is also handled, by not returning a D-Bus method reply until the requested operation is completed (different from the way how the public D-Bus API is defined!). The Connection class continues to use such a Session, as before. It's now fully asynchronous and exchanges messages with the helper via the Session class. Inside syncevo-dbus-server, boost::signals2 and the dbus-callbacks infrastructure for asynchronous methods execution are used heavily now. The glib event loop is entered exactly once and only left to shut down. Inside syncevo-dbus-helper, the event loop is entered only as needed. Password requests sent from syncevo-local-sync to syncevo-dbus-helper are handled asynchronously inside the event loop driven by the local transport. syncevo-dbus-helper and syncevo-local-sync are conceptually very similar. Should investigate whether a single executable can serve both functions. The AutoSyncManager was completely rewritten. The data structure is a lot simpler now (basically just a cache of transient information about a sync config and the relevant config properties that define auto syncing). The main work happens inside the schedule() call, which verifies whether a session can run and, if not possible for some reasons, ensures that it gets invoked again when that blocker is gone (timeout over, server idle, etc.). The new code also uses signals/slots instead of explicit coupling between the different classes. All code still lives inside the src/dbus/server directory. This simplifies checking differences in partly modified files like dbus-sync.cpp. A future commit will move the helper files. The syslog logger code is referenced by the server, but never used. This functionality needs further thought: - Make usage depend on command line option? Beware that test-dbus.py looks for the "ready to run" output and thus startup breaks when all output goes to syslog instead of stdout. - Redirect glib messages into syslog (done by LogRedirect, disabled when using LoggerSyslog)? The syncevo-dbus-server now sends the final "Session.StatusChanged done" signal immediately. 
The old implementation accidentally delayed sending that for 100 seconds. The revised test-dbus.py checks for more "session done" quit events to cover this fix. Only user-visible messages should have the INFO level in any of the helpers. Messages about starting and stopping processes are related to implementation details and thus should only have DEBUG level. The user also doesn't care about where the operation eventually runs. All messages related to it should be in INFO/DEBUG/ERROR messages without a process name. Therefore now syncevo-dbus-server logs with a process name (also makes some explicit argv[0] logging redundant; requires changes in test-dbus.py) and syncevo-dbus-helper doesn't. syncevo-local-sync is different from syncevo-dbus-helper: it produces user-relevant output (the other half of the local sync). It's output is carefully chosen so that the process name is something the user understands (target context) and output can be clearly related to one side or the other (for example, context names are included in the sync table). Output handling is based on the same idea as output handling in the syncevo-dbus-server: - Session registers itself as the top-most logger and sends SyncEvolution logging via D-Bus to the parent, which re-sends it with the right D-Bus object path as output of the session. - Output redirection catches all other output and feeds it back to the Session log handler, from where it goes via D-Bus to the parent. The advantage of this approach is that level information is made available directly to the parent and that message boundaries are preserved properly. stderr and stdout are redirected into the parent and logged there as error. Normally the child should not print anything. While it runs, LogRedirect inside it will capture output and log it internally. Anything reaching the parent thus must be from early process startup or shutdown. Almost all communication from syncevo-dbus-helper to syncevo-dbus-server is purely information for the syncevo-dbus-server; syncevo-dbus-helper doesn't care whether the signal can be delivered. The only exception is the information request, which must succeed. Instead of catching exceptions everywhere, the optional signals are declared as such in the EmitSignal template parameterization and no longer throw exceptions when something goes wrong. They also don't log anything, because that could lead to quite a lof of output.
2012-03-26 17:19:25 +02:00
case SessionCommon::FAILED:
SE_THROW(m_failure);
break;
default:
SE_THROW("protocol error: unknown internal state");
break;
}
} catch (const std::exception &error) {
failed(error.what());
throw;
} catch (...) {
failed("unknown exception in Connection::process");
throw;
}
}
D-Bus server: fork/exec for sync, command line and restore operations This commit moves the blocking syncing, database restore and command line execution into a separate, short-lived process executing the syncevo-dbus-helper. The advantage is that the main syncevo-dbus-server remains responsive under all circumstances (fully asynchronous now) and suffers less from memory leaks and/or crashes during a sync. The core idea behind the new architecture is that Session remains the D-Bus facing side of a session. It continues to run inside syncevo-dbus-server and uses the syncevo-dbus-helper transparently via a custom D-Bus interface between the two processes. State changes of the helper are mirrored in the server. Later the helper might also be used multiple times in a Session. For example, anything related to loading backends should be moved into the helper (currently the "is config usable" check still runs in the syncevo-dbus-server and needs to load/initialize backends). The startup code of the helper already handles that (see boolean result of operation callback), but it is not used yet in practice. At the moment, only the helper provides a D-Bus API. It sends out signals when it needs information from the server. The server watches those and replies when ready. The helper monitors the connection to the parent and detects that it won't get an answer if that connection goes down. The problem of "helper died unexpectedly" is also handled, by not returning a D-Bus method reply until the requested operation is completed (different from the way how the public D-Bus API is defined!). The Connection class continues to use such a Session, as before. It's now fully asynchronous and exchanges messages with the helper via the Session class. Inside syncevo-dbus-server, boost::signals2 and the dbus-callbacks infrastructure for asynchronous methods execution are used heavily now. The glib event loop is entered exactly once and only left to shut down. Inside syncevo-dbus-helper, the event loop is entered only as needed. Password requests sent from syncevo-local-sync to syncevo-dbus-helper are handled asynchronously inside the event loop driven by the local transport. syncevo-dbus-helper and syncevo-local-sync are conceptually very similar. Should investigate whether a single executable can serve both functions. The AutoSyncManager was completely rewritten. The data structure is a lot simpler now (basically just a cache of transient information about a sync config and the relevant config properties that define auto syncing). The main work happens inside the schedule() call, which verifies whether a session can run and, if not possible for some reasons, ensures that it gets invoked again when that blocker is gone (timeout over, server idle, etc.). The new code also uses signals/slots instead of explicit coupling between the different classes. All code still lives inside the src/dbus/server directory. This simplifies checking differences in partly modified files like dbus-sync.cpp. A future commit will move the helper files. The syslog logger code is referenced by the server, but never used. This functionality needs further thought: - Make usage depend on command line option? Beware that test-dbus.py looks for the "ready to run" output and thus startup breaks when all output goes to syslog instead of stdout. - Redirect glib messages into syslog (done by LogRedirect, disabled when using LoggerSyslog)? The syncevo-dbus-server now sends the final "Session.StatusChanged done" signal immediately. 
The old implementation accidentally delayed sending that for 100 seconds. The revised test-dbus.py checks for more "session done" quit events to cover this fix. Only user-visible messages should have the INFO level in any of the helpers. Messages about starting and stopping processes are related to implementation details and thus should only have DEBUG level. The user also doesn't care about where the operation eventually runs. All messages related to it should be in INFO/DEBUG/ERROR messages without a process name. Therefore now syncevo-dbus-server logs with a process name (also makes some explicit argv[0] logging redundant; requires changes in test-dbus.py) and syncevo-dbus-helper doesn't. syncevo-local-sync is different from syncevo-dbus-helper: it produces user-relevant output (the other half of the local sync). It's output is carefully chosen so that the process name is something the user understands (target context) and output can be clearly related to one side or the other (for example, context names are included in the sync table). Output handling is based on the same idea as output handling in the syncevo-dbus-server: - Session registers itself as the top-most logger and sends SyncEvolution logging via D-Bus to the parent, which re-sends it with the right D-Bus object path as output of the session. - Output redirection catches all other output and feeds it back to the Session log handler, from where it goes via D-Bus to the parent. The advantage of this approach is that level information is made available directly to the parent and that message boundaries are preserved properly. stderr and stdout are redirected into the parent and logged there as error. Normally the child should not print anything. While it runs, LogRedirect inside it will capture output and log it internally. Anything reaching the parent thus must be from early process startup or shutdown. Almost all communication from syncevo-dbus-helper to syncevo-dbus-server is purely information for the syncevo-dbus-server; syncevo-dbus-helper doesn't care whether the signal can be delivered. The only exception is the information request, which must succeed. Instead of catching exceptions everywhere, the optional signals are declared as such in the EmitSignal template parameterization and no longer throw exceptions when something goes wrong. They also don't log anything, because that could lead to quite a lof of output.
2012-03-26 17:19:25 +02:00
void Connection::send(const DBusArray<uint8_t> buffer,
const std::string &type,
const std::string &url)
{
SE_LOG_DEBUG(NULL, "Connection %s: send %lu bytes, %s, %s (old state %s)",
D-Bus Connection: more strict error handling, logging Running the fork/exec implementation under valgrind caused some tests to fail because a) some tests ran longer (fixed by increasing timeouts) and b) some tests resulted in different D-Bus communication depending on the timing. Added more debug logging in the syncevo-dbus-server Connection class and the syncevo-dbus-helper DBusTransport to track this down. There were multiple reasons, usually related to handling aborted connections. The D-Bus API explicitly says about the "Abort" signal sent by the server: "This signal is sent at most once for each connection. No reply will be sent on an aborted connection." The old code did send an empty, final reply after aborting and the test-dbus.py actually checked for it. Now that final message is really only send when the connection is still waiting for a reply (state == PROCESSING) and hasn't been aborted. The test was fixed accordingly. The "Abort" documentation also says that "all further operations on it [= Connection] will fail". Despite that comment one D-Bus test did a Connection.Close() after receiving the Abort signal. The server now destroys the Connection instance once it has failed and thus the Close() call failed. It was removed. The Connection class now consistently uses delayed deletion, instead of destructing itself while some of its methods are still active. A bit safer. While thinking about the server<->helper communication I noticed that a Connection.Close() succeeds even if the helper hasn't shut down yet. Not sure whether there are relevant error scenarios where we need to tell the client that shutdown of the helper failed.
2012-03-30 10:04:31 +02:00
m_sessionID.c_str(),
(unsigned long)buffer.first,
type.c_str(),
url.c_str(),
SessionCommon::ConnectionStateToString(m_state).c_str());
D-Bus server: fork/exec for sync, command line and restore operations This commit moves the blocking syncing, database restore and command line execution into a separate, short-lived process executing the syncevo-dbus-helper. The advantage is that the main syncevo-dbus-server remains responsive under all circumstances (fully asynchronous now) and suffers less from memory leaks and/or crashes during a sync. The core idea behind the new architecture is that Session remains the D-Bus facing side of a session. It continues to run inside syncevo-dbus-server and uses the syncevo-dbus-helper transparently via a custom D-Bus interface between the two processes. State changes of the helper are mirrored in the server. Later the helper might also be used multiple times in a Session. For example, anything related to loading backends should be moved into the helper (currently the "is config usable" check still runs in the syncevo-dbus-server and needs to load/initialize backends). The startup code of the helper already handles that (see boolean result of operation callback), but it is not used yet in practice. At the moment, only the helper provides a D-Bus API. It sends out signals when it needs information from the server. The server watches those and replies when ready. The helper monitors the connection to the parent and detects that it won't get an answer if that connection goes down. The problem of "helper died unexpectedly" is also handled, by not returning a D-Bus method reply until the requested operation is completed (different from the way how the public D-Bus API is defined!). The Connection class continues to use such a Session, as before. It's now fully asynchronous and exchanges messages with the helper via the Session class. Inside syncevo-dbus-server, boost::signals2 and the dbus-callbacks infrastructure for asynchronous methods execution are used heavily now. The glib event loop is entered exactly once and only left to shut down. Inside syncevo-dbus-helper, the event loop is entered only as needed. Password requests sent from syncevo-local-sync to syncevo-dbus-helper are handled asynchronously inside the event loop driven by the local transport. syncevo-dbus-helper and syncevo-local-sync are conceptually very similar. Should investigate whether a single executable can serve both functions. The AutoSyncManager was completely rewritten. The data structure is a lot simpler now (basically just a cache of transient information about a sync config and the relevant config properties that define auto syncing). The main work happens inside the schedule() call, which verifies whether a session can run and, if not possible for some reasons, ensures that it gets invoked again when that blocker is gone (timeout over, server idle, etc.). The new code also uses signals/slots instead of explicit coupling between the different classes. All code still lives inside the src/dbus/server directory. This simplifies checking differences in partly modified files like dbus-sync.cpp. A future commit will move the helper files. The syslog logger code is referenced by the server, but never used. This functionality needs further thought: - Make usage depend on command line option? Beware that test-dbus.py looks for the "ready to run" output and thus startup breaks when all output goes to syslog instead of stdout. - Redirect glib messages into syslog (done by LogRedirect, disabled when using LoggerSyslog)? The syncevo-dbus-server now sends the final "Session.StatusChanged done" signal immediately. 
The old implementation accidentally delayed sending that for 100 seconds. The revised test-dbus.py checks for more "session done" quit events to cover this fix. Only user-visible messages should have the INFO level in any of the helpers. Messages about starting and stopping processes are related to implementation details and thus should only have DEBUG level. The user also doesn't care about where the operation eventually runs. All messages related to it should be in INFO/DEBUG/ERROR messages without a process name. Therefore now syncevo-dbus-server logs with a process name (also makes some explicit argv[0] logging redundant; requires changes in test-dbus.py) and syncevo-dbus-helper doesn't. syncevo-local-sync is different from syncevo-dbus-helper: it produces user-relevant output (the other half of the local sync). It's output is carefully chosen so that the process name is something the user understands (target context) and output can be clearly related to one side or the other (for example, context names are included in the sync table). Output handling is based on the same idea as output handling in the syncevo-dbus-server: - Session registers itself as the top-most logger and sends SyncEvolution logging via D-Bus to the parent, which re-sends it with the right D-Bus object path as output of the session. - Output redirection catches all other output and feeds it back to the Session log handler, from where it goes via D-Bus to the parent. The advantage of this approach is that level information is made available directly to the parent and that message boundaries are preserved properly. stderr and stdout are redirected into the parent and logged there as error. Normally the child should not print anything. While it runs, LogRedirect inside it will capture output and log it internally. Anything reaching the parent thus must be from early process startup or shutdown. Almost all communication from syncevo-dbus-helper to syncevo-dbus-server is purely information for the syncevo-dbus-server; syncevo-dbus-helper doesn't care whether the signal can be delivered. The only exception is the information request, which must succeed. Instead of catching exceptions everywhere, the optional signals are declared as such in the EmitSignal template parameterization and no longer throw exceptions when something goes wrong. They also don't log anything, because that could lead to quite a lof of output.
2012-03-26 17:19:25 +02:00
if (m_state != SessionCommon::PROCESSING) {
SE_THROW_EXCEPTION(TransportException,
"cannot send to our D-Bus peer");
}
// Change state in advance. If we fail while replying, then all
// further resends will fail with the error above.
m_state = SessionCommon::WAITING;
activateTimeout();
m_incomingMsg = SharedBuffer();
// TODO: turn D-Bus exceptions into transport exceptions
StringMap meta;
meta["URL"] = url;
reply(buffer, type, meta, false, m_sessionID);
}
void Connection::sendFinalMsg()
{
SE_LOG_DEBUG(NULL, "Connection %s: shut down (old state %s)",
D-Bus Connection: more strict error handling, logging Running the fork/exec implementation under valgrind caused some tests to fail because a) some tests ran longer (fixed by increasing timeouts) and b) some tests resulted in different D-Bus communication depending on the timing. Added more debug logging in the syncevo-dbus-server Connection class and the syncevo-dbus-helper DBusTransport to track this down. There were multiple reasons, usually related to handling aborted connections. The D-Bus API explicitly says about the "Abort" signal sent by the server: "This signal is sent at most once for each connection. No reply will be sent on an aborted connection." The old code did send an empty, final reply after aborting and the test-dbus.py actually checked for it. Now that final message is really only send when the connection is still waiting for a reply (state == PROCESSING) and hasn't been aborted. The test was fixed accordingly. The "Abort" documentation also says that "all further operations on it [= Connection] will fail". Despite that comment one D-Bus test did a Connection.Close() after receiving the Abort signal. The server now destroys the Connection instance once it has failed and thus the Close() call failed. It was removed. The Connection class now consistently uses delayed deletion, instead of destructing itself while some of its methods are still active. A bit safer. While thinking about the server<->helper communication I noticed that a Connection.Close() succeeds even if the helper hasn't shut down yet. Not sure whether there are relevant error scenarios where we need to tell the client that shutdown of the helper failed.
2012-03-30 10:04:31 +02:00
m_sessionID.c_str(),
SessionCommon::ConnectionStateToString(m_state).c_str());
if (m_state == SessionCommon::PROCESSING) {
D-Bus server: fork/exec for sync, command line and restore operations This commit moves the blocking syncing, database restore and command line execution into a separate, short-lived process executing the syncevo-dbus-helper. The advantage is that the main syncevo-dbus-server remains responsive under all circumstances (fully asynchronous now) and suffers less from memory leaks and/or crashes during a sync. The core idea behind the new architecture is that Session remains the D-Bus facing side of a session. It continues to run inside syncevo-dbus-server and uses the syncevo-dbus-helper transparently via a custom D-Bus interface between the two processes. State changes of the helper are mirrored in the server. Later the helper might also be used multiple times in a Session. For example, anything related to loading backends should be moved into the helper (currently the "is config usable" check still runs in the syncevo-dbus-server and needs to load/initialize backends). The startup code of the helper already handles that (see boolean result of operation callback), but it is not used yet in practice. At the moment, only the helper provides a D-Bus API. It sends out signals when it needs information from the server. The server watches those and replies when ready. The helper monitors the connection to the parent and detects that it won't get an answer if that connection goes down. The problem of "helper died unexpectedly" is also handled, by not returning a D-Bus method reply until the requested operation is completed (different from the way how the public D-Bus API is defined!). The Connection class continues to use such a Session, as before. It's now fully asynchronous and exchanges messages with the helper via the Session class. Inside syncevo-dbus-server, boost::signals2 and the dbus-callbacks infrastructure for asynchronous methods execution are used heavily now. The glib event loop is entered exactly once and only left to shut down. Inside syncevo-dbus-helper, the event loop is entered only as needed. Password requests sent from syncevo-local-sync to syncevo-dbus-helper are handled asynchronously inside the event loop driven by the local transport. syncevo-dbus-helper and syncevo-local-sync are conceptually very similar. Should investigate whether a single executable can serve both functions. The AutoSyncManager was completely rewritten. The data structure is a lot simpler now (basically just a cache of transient information about a sync config and the relevant config properties that define auto syncing). The main work happens inside the schedule() call, which verifies whether a session can run and, if not possible for some reasons, ensures that it gets invoked again when that blocker is gone (timeout over, server idle, etc.). The new code also uses signals/slots instead of explicit coupling between the different classes. All code still lives inside the src/dbus/server directory. This simplifies checking differences in partly modified files like dbus-sync.cpp. A future commit will move the helper files. The syslog logger code is referenced by the server, but never used. This functionality needs further thought: - Make usage depend on command line option? Beware that test-dbus.py looks for the "ready to run" output and thus startup breaks when all output goes to syslog instead of stdout. - Redirect glib messages into syslog (done by LogRedirect, disabled when using LoggerSyslog)? The syncevo-dbus-server now sends the final "Session.StatusChanged done" signal immediately. 
The old implementation accidentally delayed sending that for 100 seconds. The revised test-dbus.py checks for more "session done" quit events to cover this fix. Only user-visible messages should have the INFO level in any of the helpers. Messages about starting and stopping processes are related to implementation details and thus should only have DEBUG level. The user also doesn't care about where the operation eventually runs. All messages related to it should be in INFO/DEBUG/ERROR messages without a process name. Therefore now syncevo-dbus-server logs with a process name (also makes some explicit argv[0] logging redundant; requires changes in test-dbus.py) and syncevo-dbus-helper doesn't. syncevo-local-sync is different from syncevo-dbus-helper: it produces user-relevant output (the other half of the local sync). It's output is carefully chosen so that the process name is something the user understands (target context) and output can be clearly related to one side or the other (for example, context names are included in the sync table). Output handling is based on the same idea as output handling in the syncevo-dbus-server: - Session registers itself as the top-most logger and sends SyncEvolution logging via D-Bus to the parent, which re-sends it with the right D-Bus object path as output of the session. - Output redirection catches all other output and feeds it back to the Session log handler, from where it goes via D-Bus to the parent. The advantage of this approach is that level information is made available directly to the parent and that message boundaries are preserved properly. stderr and stdout are redirected into the parent and logged there as error. Normally the child should not print anything. While it runs, LogRedirect inside it will capture output and log it internally. Anything reaching the parent thus must be from early process startup or shutdown. Almost all communication from syncevo-dbus-helper to syncevo-dbus-server is purely information for the syncevo-dbus-server; syncevo-dbus-helper doesn't care whether the signal can be delivered. The only exception is the information request, which must succeed. Instead of catching exceptions everywhere, the optional signals are declared as such in the EmitSignal template parameterization and no longer throw exceptions when something goes wrong. They also don't log anything, because that could lead to quite a lof of output.
2012-03-26 17:19:25 +02:00
// send final, empty message and wait for close
m_state = SessionCommon::FINAL;
reply(GDBusCXX::DBusArray<uint8_t>(0, 0),
"", StringMap(),
true, m_sessionID);
}
}
void Connection::close(const Caller_t &caller,
bool normal,
const std::string &error)
{
SE_LOG_DEBUG(NULL, "Connection %s: client %s closes connection %s %s%s%s (old state %s)",
D-Bus Connection: more strict error handling, logging Running the fork/exec implementation under valgrind caused some tests to fail because a) some tests ran longer (fixed by increasing timeouts) and b) some tests resulted in different D-Bus communication depending on the timing. Added more debug logging in the syncevo-dbus-server Connection class and the syncevo-dbus-helper DBusTransport to track this down. There were multiple reasons, usually related to handling aborted connections. The D-Bus API explicitly says about the "Abort" signal sent by the server: "This signal is sent at most once for each connection. No reply will be sent on an aborted connection." The old code did send an empty, final reply after aborting and the test-dbus.py actually checked for it. Now that final message is really only send when the connection is still waiting for a reply (state == PROCESSING) and hasn't been aborted. The test was fixed accordingly. The "Abort" documentation also says that "all further operations on it [= Connection] will fail". Despite that comment one D-Bus test did a Connection.Close() after receiving the Abort signal. The server now destroys the Connection instance once it has failed and thus the Close() call failed. It was removed. The Connection class now consistently uses delayed deletion, instead of destructing itself while some of its methods are still active. A bit safer. While thinking about the server<->helper communication I noticed that a Connection.Close() succeeds even if the helper hasn't shut down yet. Not sure whether there are relevant error scenarios where we need to tell the client that shutdown of the helper failed.
2012-03-30 10:04:31 +02:00
m_sessionID.c_str(),
caller.c_str(),
getPath(),
normal ? "normally" : "with error",
error.empty() ? "" : ": ",
error.c_str(),
SessionCommon::ConnectionStateToString(m_state).c_str());
boost::shared_ptr<Client> client(m_server.findClient(caller));
if (!client) {
SE_THROW("unknown client");
}
    // Removing the client's reference to us would normally destruct
    // *this* instance. To let us finish our work safely, keep a
    // reference that the server will unref once everything is idle
    // again.
boost::shared_ptr<Connection> c = m_me.lock();
if (!c) {
SE_THROW("connection already destructing");
}
m_server.delayDeletion(c);
client->detach(this);
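
    // A Close() only counts as a successful shutdown when the client
    // requested a normal close and the message exchange already
    // reached the FINAL state. Anything else is treated as an
    // unexpected loss of the connection and reported as an error to
    // the status signal and the associated session.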
if (!normal ||
m_state != SessionCommon::FINAL) {
std::string err = error.empty() ?
"connection closed unexpectedly" :
error;
m_statusSignal(err);
if (m_session) {
m_session->setStubConnectionError(err);
}
failed(err);
} else {
m_state = SessionCommon::DONE;
m_statusSignal("");
if (m_session) {
m_session->setStubConnectionError("");
}
}
// TODO (?): errors during shutdown of the helper will not get
// reported back to the client, which sees the operation as
// completed successfully once this call returns.
}
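
// Sends the "Abort" signal to the client. The signal is emitted at
// most once per connection (tracked via m_abortSent); later calls
// only log that it has already been sent.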
void Connection::abort()
{
if (!m_abortSent) {
SE_LOG_DEBUG(NULL, "Connection %s: send abort to client (state %s)",
m_sessionID.c_str(),
SessionCommon::ConnectionStateToString(m_state).c_str());
sendAbort();
m_abortSent = true;
} else {
SE_LOG_DEBUG(NULL, "Connection %s: not sending abort to client, already done (state %s)",
m_sessionID.c_str(),
SessionCommon::ConnectionStateToString(m_state).c_str());
}
}
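
// Called by the server to get rid of this connection: dropping the
// server's references is what eventually triggers destruction (see
// close() for how deletion can be delayed while a call is still
// active).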
void Connection::shutdown()
{
SE_LOG_DEBUG(NULL, "Connection %s: self-destructing (state %s)",
m_sessionID.c_str(),
SessionCommon::ConnectionStateToString(m_state).c_str());
// trigger removal of this connection by removing all
// references to it
m_server.detach(this);
}
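
// Each connection is exported as a separate D-Bus object at
// /org/syncevolution/Connection/<sessionID> implementing
// org.syncevolution.Connection, with the Process and Close methods
// and the Abort and Reply signals registered below.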
Connection::Connection(Server &server,
const DBusConnectionPtr &conn,
const std::string &sessionID,
const StringMap &peer,
bool must_authenticate) :
DBusObjectHelper(conn,
std::string("/org/syncevolution/Connection/") + sessionID,
"org.syncevolution.Connection",
boost::bind(&Server::autoTermCallback, &server)),
m_server(server),
m_peer(peer),
m_mustAuthenticate(must_authenticate),
m_state(SessionCommon::SETUP),
m_sessionID(sessionID),
m_timeoutSeconds(-1),
sendAbort(*this, "Abort"),
m_abortSent(false),
reply(*this, "Reply"),
m_description(buildDescription(peer))
{
add(this, &Connection::process, "Process");
add(this, &Connection::close, "Close");
add(sendAbort);
add(reply);
m_server.autoTermRef();
SE_LOG_DEBUG(NULL, "Connection %s: created",
m_sessionID.c_str());
}
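
// Factory method: the weak self-reference m_me can only be set once a
// shared_ptr owns the instance, hence the two-step construction. m_me
// is what close() later locks to keep the connection alive via
// Server::delayDeletion().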
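// Factory for Connection instances. The new object stores a reference to
// itself in m_me (presumably a weak self-pointer) so that shared ownership
// can be handed out again later. A rough usage sketch, assuming the caller
// already has the D-Bus connection and the peer properties of the client:
//
//   boost::shared_ptr<Connection> c =
//       Connection::createConnection(server, conn, sessionID, peer, true);
//   // the server keeps c alive until the connection is closed or fails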
boost::shared_ptr<Connection> Connection::createConnection(Server &server,
const DBusConnectionPtr &conn,
const std::string &sessionID,
const StringMap &peer,
bool must_authenticate)
{
boost::shared_ptr<Connection> c(new Connection(server, conn, sessionID, peer, must_authenticate));
c->m_me = c;
return c;
}
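// Destructor: logs how the connection ended, aborts it if it did not reach
// the DONE state, wakes up a possibly waiting DBusTransportAgent via
// m_statusSignal and drops the session. Exceptions are swallowed because
// throwing from a destructor must be avoided.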
Connection::~Connection()
{
SE_LOG_DEBUG(NULL, "Connection %s: done with '%s'%s%s%s (old state %s)",
m_sessionID.c_str(),
m_description.c_str(),
m_state == SessionCommon::DONE ? ", normal shutdown" : " unexpectedly",
m_failure.empty() ? "" : ": ",
m_failure.c_str(),
SessionCommon::ConnectionStateToString(m_state).c_str());
try {
if (m_state != SessionCommon::DONE) {
abort();
m_state = SessionCommon::FAILED;
}
// DBusTransportAgent waiting? Wake it up.
m_statusSignal(m_failure);
m_session.reset();
} catch (...) {
// log errors, but do not propagate them because we are
// destructing
Exception::handle();
}
m_server.autoTermUnref();
}
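// Called once the session requested for this connection has become active.
// For a SAN-initiated session without an existing configuration a config is
// created on the fly from the "SyncEvolution" template, the sync modes
// requested per datastore in the SAN message are applied, and the sync is
// started.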
void Connection::ready()
{
SE_LOG_DEBUG(NULL, "Connection %s: ready to run (old state %s)",
m_sessionID.c_str(),
SessionCommon::ConnectionStateToString(m_state).c_str());
// check whether the configuration exists yet; if not, it may be created below
std::string configName = m_session->getConfigName();
SyncConfig config(configName);
if (!config.exists() && m_SANContent) {
SE_LOG_DEBUG(NULL, "Configuration %s not exists for a runnable session in a SAN context, create it automatically", configName.c_str());
ReadOperations::Config_t from;
const std::string templateName = "SyncEvolution";
// TODO: support SAN from other well known servers
ReadOperations ops(templateName, m_server);
ops.getConfig(true, from);
if (!m_peerBtAddr.empty()) {
from[""]["SyncURL"] = string("obex-bt://") + m_peerBtAddr;
}
m_session->setConfig(false, false, from);
}
// As we cannot resend messages via D-Bus even if running as
// client (API not designed for it), let's use the hard server
// timeout from RetryDuration here.
m_timeoutSeconds = config.getRetryDuration();
const SyncContext context(configName);
std::list<std::string> sources = context.getSyncSources();
if (m_SANContent && !m_SANContent->m_syncType.empty()) {
// check what the server wants us to synchronize
// and only synchronize that
m_syncMode = "disabled";
for (size_t sync=0; sync<m_SANContent->m_syncType.size(); sync++) {
std::string syncMode = m_SANContent->m_syncType[sync];
std::string serverURI = m_SANContent->m_serverURI[sync];
//uint32_t contentType = m_SANContent->m_contentType[sync];
bool found = false;
BOOST_FOREACH(const std::string &source, sources) {
boost::shared_ptr<const PersistentSyncSourceConfig> sourceConfig(context.getSyncSourceConfig(source));
// prefix match because the local
// configuration might contain
// additional parameters (like date
// range selection for events)
if (boost::starts_with(sourceConfig->getURINonEmpty(), serverURI)) {
SE_LOG_DEBUG(NULL,
"SAN entry #%d = datastore %s with mode %s",
(int)sync, source.c_str(), syncMode.c_str());
m_sourceModes[source] = syncMode;
found = true;
break;
}
}
if (!found) {
SE_LOG_DEBUG(NULL,
"SAN entry #%d with mode %s ignored because Server URI %s is unknown",
(int)sync, syncMode.c_str(), serverURI.c_str());
}
}
if (m_sourceModes.empty()) {
SE_LOG_DEBUG(NULL,
"SAN message with no known entries, falling back to default");
m_syncMode = "";
}
}
if (m_SANContent) {
m_session->setRemoteInitiated(true);
}
// proceed with sync now that our session is ready
m_session->sync(m_syncMode, m_sourceModes);
}
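// (Re)arms the inactivity timer: a negative m_timeoutSeconds disables the
// timeout, otherwise timeoutCb() fires after m_timeoutSeconds. Presumably
// called by the message handling code whenever data is exchanged, so that
// only genuinely idle connections time out.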
void Connection::activateTimeout()
{
if (m_timeoutSeconds >= 0) {
m_timeout.runOnce(m_timeoutSeconds,
boost::bind(&Connection::timeoutCb,
this));
} else {
m_timeout.deactivate();
}
}
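// Fired by m_timeout once the inactivity period has elapsed; reports the
// timeout as a failure of the connection.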
void Connection::timeoutCb()
{
SE_LOG_DEBUG(NULL, "Connection %s: timed out after %ds (state %s)",
m_sessionID.c_str(), m_timeoutSeconds,
SessionCommon::ConnectionStateToString(m_state).c_str());
failed(StringPrintf("timed out after %ds", m_timeoutSeconds));
}
SE_END_CXX