Migrating from Owncloud to Nextcloud on Fedora 24

Installing the Nextcloud desktop client on Fedora is not trivial, as you first need to download the code, install some dependencies and compile it.

git clone https://github.com/nextcloud/client_theming.git
cd client_theming
git submodule update --init --recursive
sudo dnf install cmake gcc-c++ openssl-devel sqlite-devel qt5-qtwebkit-devel libqt5keychain-devel
mkdir build-linux
cd build-linux
cmake -D OEM_THEME_DIR=`pwd`/../nextcloudtheme ../client
make
sudo make install

Without installing the dependencies you would get the following error messages:

  • No CMAKE_CXX_COMPILER could be found.
  • Could NOT find OpenSSL, try to set the path to OpenSSL root folder in the
    system variable OPENSSL_ROOT_DIR (missing: OPENSSL_LIBRARIES
    OPENSSL_INCLUDE_DIR) (Required is at least version “1.0.0”)
  • No package ‘sqlite3’ found. Could NOT find SQLite3 (missing: SQLITE3_LIBRARIES SQLITE3_INCLUDE_DIRS)
    (Required is at least version “3.8.0”)
  • Could NOT find Qt4 (missing: QT_QTWEBKIT_INCLUDE_DIR QT_QTWEBKIT_LIBRARY) (found suitable version “4.8.7”, minimum required is “4.7.0”)
    Qt QTWEBKIT library not found.
  • Could NOT find QtKeychain (missing: QTKEYCHAIN_LIBRARY
    QTKEYCHAIN_INCLUDE_DIR)

Before you can start the client, you need to add the following line to ~/.bashrc and restart the system:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib64

otherwise you get the error nextcloud: error while loading shared libraries: libnextcloudsync.so.0: cannot open shared object file: No such file or directory

Also don’t forget to uninstall the old client:

sudo dnf remove owncloud-client

On the first start the Nextcloud Connection Wizard opens. You can now enter the server address, username and password. Then you need to select the folders that you want to sync; as local folder you choose the same folder that you had configured with ownCloud. If you do this, the option “Keep local data” appears, and if you check it, the existing data will be taken over.

Using a .NET FileSystemWatcher with a DFS properly

There is not a lot of practical information about the Microsoft .NET FileSystemWatcher (called FileWatcher below) available on the internet, so here is some advice.

How it works

The FileWatcher is basically a wrapper around the Windows API function ReadDirectoryChangesW.
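
A minimal setup looks roughly like this (the UNC path and the chosen filters are just placeholders for this sketch):

using System;
using System.IO;

class WatcherExample
{
    static void Main()
    {
        // Placeholder path; point this at the share or folder you want to monitor.
        var watcher = new FileSystemWatcher(@"\\server\share\folder")
        {
            NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite,
            IncludeSubdirectories = true
        };

        watcher.Created += (s, e) => Console.WriteLine($"Created: {e.FullPath}");
        watcher.Changed += (s, e) => Console.WriteLine($"Changed: {e.FullPath}");
        watcher.Deleted += (s, e) => Console.WriteLine($"Deleted: {e.FullPath}");
        watcher.Renamed += (s, e) => Console.WriteLine($"Renamed: {e.OldFullPath} -> {e.FullPath}");

        // Nothing is reported until EnableRaisingEvents is set to true.
        watcher.EnableRaisingEvents = true;

        Console.WriteLine("Watching, press Enter to stop.");
        Console.ReadLine();
    }
}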

Possible restrictions with remote locations

There might be several limitations in place that you usually cannot see:

  • Latency raises the chance for a buffer overflow.
  • The buffer size might be artificially restricted to a size smaller than 64 kB.
  • The ReadDirectoryChangesW function might be missing or implemented incorrectly (for example if you are dealing with a Linux/Unix server).
  • The server might shut down incorrectly. In such a case the FileWatcher might be unaware of the fact that its handle is invalid and just stop reporting changes from that point on. Unfortunately it will usually not raise any error either, and there is no timeout. The only solution I know about is a frequent re-registration (setting EnableRaisingEvents to false and then back to true, see the error-handling sketch further down). But if you do this, you need to pay attention not to lose any events.

My advice would be to do some testing whenever you are using the FileWatcher with a remote location. You could create a small program that generates some files and verifies that the events are properly raised.
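
Such a smoke test could look like this (a sketch; the share path and the file count are placeholders):

using System;
using System.IO;
using System.Threading;

class WatcherSmokeTest
{
    static void Main()
    {
        // Placeholder path; point this at the remote share you want to verify.
        const string path = @"\\server\share\watchertest";
        Directory.CreateDirectory(path);

        int created = 0;
        var watcher = new FileSystemWatcher(path) { NotifyFilter = NotifyFilters.FileName };
        watcher.Created += (s, e) => Interlocked.Increment(ref created);
        watcher.EnableRaisingEvents = true;

        const int fileCount = 100;
        for (int i = 0; i < fileCount; i++)
            File.WriteAllText(Path.Combine(path, $"test_{i}.txt"), "test");

        // Give the events some time to arrive before comparing the counts.
        Thread.Sleep(5000);
        Console.WriteLine($"Files created: {fileCount}, Created events received: {created}");
    }
}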

Buffer

The FileWatcher uses an internal buffer of 8 kB by default. It is possible to raise it to 64 kB, which should be done if you are monitoring an entire disk where there can be many events. However it is costly, because the buffer is allocated from non-paged memory. Every event you consume frees some space in the buffer, so it is really important to process the events as fast as possible. Don’t do any heavy work in the event handler: add the event to a queue of your own and process it there asynchronously. If the buffer overflows you get an error and you have most likely missed events (see Error Handling below).
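
A sketch of that queue-based pattern (the path is a placeholder, and BlockingCollection is just one possible queue):

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

class QueuedWatcher
{
    // BlockingCollection wraps a ConcurrentQueue and lets the consumer block until work arrives.
    private static readonly BlockingCollection<FileSystemEventArgs> Queue =
        new BlockingCollection<FileSystemEventArgs>();

    static void Main()
    {
        // Placeholder path; adjust to your environment.
        var watcher = new FileSystemWatcher(@"\\server\share\folder")
        {
            InternalBufferSize = 64 * 1024, // raised buffer, costs non-paged memory
            IncludeSubdirectories = true
        };

        // The handlers only enqueue; all real work happens on the consumer task.
        watcher.Created += (s, e) => Queue.Add(e);
        watcher.Changed += (s, e) => Queue.Add(e);

        Task.Run(() =>
        {
            foreach (var e in Queue.GetConsumingEnumerable())
            {
                // Slow processing (checks, copies, database writes, ...) belongs here.
                Console.WriteLine($"{e.ChangeType}: {e.FullPath}");
            }
        });

        watcher.EnableRaisingEvents = true;
        Console.ReadLine();
    }
}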

Each entry in the buffer takes 12 bytes plus two bytes per character of the path. In the worst case (all paths 260 characters long) the default 8 kB buffer can therefore hold only about 15 events.

Error Handling

You should always register a handler for the Error event.

One reason why an error could be raised is that the buffer has been filled up. In that case you will get an InternalBufferOverflowException. It is likely that you missed events.

You might also get an error when the remote location shuts down or some network problem occurs. In such a case it is usually advisable to re-register the watcher, otherwise it might not raise any more events (invalid handle).
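
A sketch of such an error handler, combining the overflow case with a simple re-registration (the path is again a placeholder):

using System;
using System.IO;
using System.Threading;

class WatcherWithErrorHandling
{
    private static FileSystemWatcher _watcher;

    static void Main()
    {
        _watcher = new FileSystemWatcher(@"\\server\share\folder"); // placeholder path
        _watcher.Created += (s, e) => Console.WriteLine($"Created: {e.FullPath}");
        _watcher.Error += OnError;
        _watcher.EnableRaisingEvents = true;
        Console.ReadLine();
    }

    private static void OnError(object sender, ErrorEventArgs e)
    {
        if (e.GetException() is InternalBufferOverflowException)
        {
            // The buffer overflowed: events were lost, a rescan of the directory may be needed.
            Console.WriteLine("Buffer overflow, events were lost.");
        }
        else
        {
            // Network problem or invalid handle: log it and re-register the watcher.
            Console.WriteLine("Watcher error: " + e.GetException());
        }

        // Toggling EnableRaisingEvents re-registers the watcher; changes occurring in
        // between are missed, and if the share is still unreachable this throws and
        // has to be retried later.
        _watcher.EnableRaisingEvents = false;
        Thread.Sleep(1000);
        _watcher.EnableRaisingEvents = true;
    }
}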

Locking

You will receive many events while the file concerned is still locked by the process writing it. Therefore you need some logic to wait until the file is “free” before you do whatever you want to do with it.
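
A simple approach (one sketch among many; the helper name and retry counts are arbitrary) is to retry opening the file exclusively until it succeeds:

using System.IO;
using System.Threading;

static class FileReadyHelper
{
    // Tries to open the file exclusively; returns false if it is still locked after all retries.
    public static bool WaitUntilFree(string path, int retries = 10, int delayMs = 500)
    {
        for (int i = 0; i < retries; i++)
        {
            try
            {
                using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None))
                {
                    return true; // exclusive access worked, the writer is done
                }
            }
            catch (IOException)
            {
                Thread.Sleep(delayMs); // still locked, wait and retry
            }
        }
        return false;
    }
}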

Configuring it correctly

You should set the NotifyFilter property to as few change types as possible in order not to stress the buffer too much. You can also create multiple instances of the FileWatcher, each monitoring the same directory but configured for different change types. In that case the downstream logic becomes more complex, because it has to merge all those events.

If you only need to watch some file types you can also set the Filter property (for example to “*.xml”).

Furthermore, the IncludeSubdirectories property should be set if you want to monitor an entire drive or folder structure.
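
Putting these options together, a narrowly configured watcher could look like this (path and filter are placeholders):

using System.IO;

class WatcherConfiguration
{
    static FileSystemWatcher CreateWatcher()
    {
        return new FileSystemWatcher(@"\\server\share\folder")
        {
            // Only the change types you really need, to keep the buffer load low.
            NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite,
            // Only the file types you are interested in.
            Filter = "*.xml",
            // Needed when watching an entire drive or folder structure.
            IncludeSubdirectories = true,
            // 64 kB is the maximum that makes sense (see the Buffer section above).
            InternalBufferSize = 64 * 1024
        };
    }
}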

Access Rights special case

Another strange aspect of the FileWatcher is that it won’t respect the access rights of the user registering the FileWatcher. Thus you can get events for files that your program is not even allowed to see. You should filter such events or at least be prepared to see them.

Using it with a Windows DFS based on Windows Server 2012 R2

I used it in a configuration which had multiple data servers behind a common root. It worked without any problem, except for buffer overflows when a lot of activity was going on.

I got events about DfsrPrivate folders. This is a typical case of the missing consideration of access rights: the application was not allowed to access those “internal” files and needed to skip them.

Some events were raised for temporary Office files (created when a user opens Word or Excel documents). I also filtered them out, as sketched below.
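
A filter like the following skips both kinds of noise (folder and prefix names are taken from my setup, so treat them as assumptions):

using System;
using System.IO;

static class DfsEventFilter
{
    // Returns true for events that should be ignored.
    public static bool ShouldIgnore(FileSystemEventArgs e)
    {
        // DFS replication keeps its internal data in "DfsrPrivate" folders.
        if (e.FullPath.IndexOf("DfsrPrivate", StringComparison.OrdinalIgnoreCase) >= 0)
            return true;

        // Word and Excel create temporary files starting with "~$" while a document is open.
        string name = Path.GetFileName(e.FullPath);
        if (name.StartsWith("~$", StringComparison.Ordinal))
            return true;

        return false;
    }
}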

More information

  • http://blogs.msdn.com/b/winsdk/archive/2015/05/19/filesystemwatcher-follies.aspx
  • https://msdn.microsoft.com/en-us/library/system.io.filesystemwatcher%28v=vs.110%29.aspx
  • http://stackoverflow.com/questions/13916595/is-it-really-that-expensive-to-increase-filesystemwatcher-internalbuffersize

PHP-Script for sending personalized html newsletters

I searched for a simple solution to send some personalized HTML e-mails with embedded pictures, but I didn’t find one. Thus I created the following script:

<?php
require 'class.phpmailer.php';

// one recipient per line: e-mail address, name and greeting separated by tabs
foreach(file('recipients.csv', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line)
{
    list($to, $name, $greeting) = explode("\t", $line);
	$mail = new PHPMailer;

	$mail->setFrom('no-reply@example.net', 'Example Sender');
	$mail->CharSet = 'utf-8';
	$mail->SetLanguage ("en");
	$mail->isHTML(true);

	$mail->addAddress($to, $name);
	$mail->Subject = "Example Newsletter";

	$mail->AddEmbeddedImage('test.png', 'test');

	$body = "<html>
  		<head>
    			<meta http-equiv=\"content-type\" content=\"text/html; charset=utf-8\">
  		</head>
  		<body>
    		<p>$greeting $name</p>
		<p>This is a personalized newsletter</p>
		<p>Nice picture <img alt=\"Example Pic\"
                  src=\"cid:test\"
                  height=\"424\" width=\"300\"></p>
		</body>
		</html>";

	$mail->Body = $body;

	if(!$mail->send()) {
	    echo 'Message could not be sent.';
	    echo 'Mailer Error: ' . $mail->ErrorInfo;
	} else {
	    echo 'Message to ' . $to . " has been sent\r\n";
	}
}
?>

It uses PHPMailer and a simple CSV file to generate and send the e-mails. To execute it you just need to upload it to a server and run it with:

php sendnewsletter.php

To add new recipients you can open the file recipients.csv in Microsoft Excel or LibreOffice and add new rows. When you save it, just select the CSV format and choose the tab character as delimiter.

The generated e-mail in this example contains the personalized greeting, the newsletter text and the embedded picture.

The HTML can be freely adapted in the code. You just need to escape " with \".

Download the code

Find out what library is missing on Fedora 22

Sometimes a program fails to start because some shared libraries are missing. This leads to an error message like the following:

./command: error while loading shared libraries: abc.so.0: cannot open shared object file: No such file or directory

The error only shows the first missing library, so first you can use the following command to identify all of them:

ldd idaq | grep found

This might lead to the following output, where you can see a list of all missing libraries:

        libgthread-2.0.so => not found
	libfreetype.so.6 => not found
	libSM.so.6 => not found
	libICE.so.6 => not found
	libXrender.so.1 => not found
	libfontconfig.so.1 => not found
	libXext.so.6 => not found
	libX11.so.6 => not found

The next step is finding out which package provides the library you need to install (32-bit libraries live in /usr/lib, 64-bit libraries in /usr/lib64):

dnf provides /usr/lib/libgthread-2.0.so

This will output something like:

glib2-devel-2.44.1-1.fc22.i686

If you want to install it you can omit the version number and the .fc22:

dnf install glib2-devel.i686

Run IDA Disassembler on x64 Fedora 22

When executing idaq I got the following error message:

./idaq: error while loading shared libraries: libgthread-2.0.so.0: cannot open shared object file: No such file or directory

This error appeared because some 32-bit libraries were missing. To fix it I had to run the following commands:

dnf install glib2-devel.i686
dnf install freetype.i686
dnf install libSM.i686
dnf install libXrender.i686
dnf install libXext.i686
dnf install fontconfig.i686

DevArt Bulk Data Insertion into an Oracle Database

There are some possibilities to bulk insert data. For my tests I inserted 1’500 rows containing some strings.

  • Use the Context: 50 seconds
  • Use ArrayBinding: 43 seconds
  • Use the OracleLoader class: 14 seconds

It seems that the OracleLoader is the fastest way to do this.

ArrayBinding:

        private void TestArrayBindingInsert() {
            // Connection string shortened here; fill in your own TNS descriptor.
            const string connectionString = "DATA SOURCE=(DESCRIPTION=(ADDRESS= (PROTOCOL=TCP) (HOST=...) ...";
            var connection = new OracleConnection(connectionString);
            connection.Open();

            var command = connection.CreateCommand();
            command.CommandText = "INSERT INTO TESTTABLE (TEXT1, TEXT2) VALUES (:p1, :p2)";

            // With array binding each parameter receives an array of values and the
            // statement is executed once for the whole batch.
            string[] texts = _texts.ToArray();
            command.Parameters.Add("p1", OracleDbType.VarChar);
            command.Parameters.Add("p2", OracleDbType.Clob);
            command.Parameters["p1"].Value = texts;
            command.Parameters["p2"].Value = texts;
            command.Prepare();
            command.ExecuteArray(texts.Length);
            command.Dispose();
            connection.Close();
        }

OracleLoader:

        private void TestOracleLoaderInsert() {
            string[] texts1 = _texts.ToArray();
            string[] texts2 = _texts.ToArray();

            // Connection string shortened here; fill in your own TNS descriptor.
            const string connectionString = "DATA SOURCE=(DESCRIPTION=(ADDRESS= (PROTOCOL=TCP) (HOST=...) ...";

            var connection = new OracleConnection(connectionString);
            connection.Open();

            OracleLoader loader = new OracleLoader {
                Connection = connection,
                TableName = "TestTable"
            };

            // Derive the column definitions from the target table, then stream the rows.
            loader.CreateColumns();
            loader.Open();

            for (int i = 0; i < texts1.Length; i++) {
                loader.SetValue("TEXT1", texts1[i]);
                loader.SetValue("TEXT2", texts2[i]);
                loader.NextRow();
            }
            loader.Close();
            connection.Close();
        }

Oracle SQL Fast Retrieval of CLOBS

If a table contains CLOBs, then depending on the provider you use to connect, the CLOB columns often just return a handle that triggers another round trip to the database to retrieve the actual value. This makes reading them really slow. An easy way to speed it up is to retrieve everything that is small enough as a string value. A string value, however, is limited to 4’000 characters, so you need to issue two queries:

select t.xml.getCLOBVal() from my_table t where length(t.xml.getclobval()) <> length(dbms_lob.substr(t.xml.getclobval(), 4000, 1))
select t.xml.getstringval() from my_table t where length(t.xml.getclobval()) = length(dbms_lob.substr(t.xml.getclobval(), 4000, 1))

The dbms_lob.substr part is there to avoid problems with multibyte characters and encodings.

Some database providers like ODP.NET offer properties like InitialLOBFetchSize that can be set to retrieve the data in-line as well. Devart on the other hand does not support any in-line loading of CLOB data.
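
With ODP.NET this looks roughly like the following sketch (connection string and query are placeholders; setting the property to -1 fetches the whole LOB together with the row):

using System;
using Oracle.ManagedDataAccess.Client;

class ClobFetchExample
{
    static void Main()
    {
        using (var connection = new OracleConnection("User Id=...;Password=...;Data Source=..."))
        {
            connection.Open();
            using (var command = new OracleCommand("select t.xml.getclobval() from my_table t", connection))
            {
                // -1 fetches the whole CLOB in-line with the row instead of returning a locator.
                command.InitialLOBFetchSize = -1;

                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine(reader.GetString(0).Length);
                }
            }
        }
    }
}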

Import local subversion repository to git

If you want to import a local subversion repository (for example one restored from a dump) to git, you can do this quite easily.

First you need to map the Subversion commit authors to Git committers. For this you just create a text file author-mapping.txt with one line per author:

svnauthorname = Git Author Name <email@example.com>

And then you run the following commands:

mkdir repo && cd repo
git svn init file:///path/svnrepo --no-metadata
git config svn.authorsfile ~/author-mapping.txt
git svn fetch

Sometimes the protocol file:// cannot be handled. In that case you can run svnserve --daemon --root /path (where /path is the directory containing the repository) to serve your subversion repository locally. Then you can use svn://localhost/svnrepo instead.

Mangled Names on Synology Diskstation

From time to time I noticed strange file names on the Synology DiskStation, names like BGELMF~0, always containing a tilde. These are so-called mangled names: Samba generates such short names for old clients that cannot handle long file names. Unfortunately this can also lead to “renaming” those files.

It is better to prevent this behavior altogether. To do this you need to connect to your DiskStation over SSH and log in as root.

Issue the following commands:

cd /usr/syno/etc
vi smb.conf

Now insert the following line in the [global] section:

[global]
	mangled names = no

Note: this option could be reset after an update of the DiskStation. If the bad file names appear again, you should verify that the setting is still active.