Grav CMS – Prepare Media/Images

Grav has a caching mechanism that automatically generates thumbnails. Still, it is advisable to reduce the original size of the images. I use two tools to achieve this:

find . -iname '*.jpg' -exec convert \{} -verbose -resize 2048x2048\> \{} \;
exiftool -all= *

The first command rescales all images to a maximum width or height of 2048 pixels (preserving the aspect ratio); the second removes all EXIF information.

Another task is to generate a .meta.yaml file for each picture. Unfortunately, Grav has no mechanism to achieve this out of the box. However, a small script can help:

for f in *.jpg
do
echo "Creating metadata file for jpg file - $f"
echo "alt_text: " > "$f.meta.yaml"
done
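
If the folder also contains PNGs, or if the command is run again later, a slightly more defensive variant can be used (the extra extensions and the skip checks are my own additions, not something Grav requires):

for f in *.jpg *.jpeg *.png
do
  # skip if the pattern matched no file or a meta file already exists
  [ -e "$f" ] || continue
  [ -e "$f.meta.yaml" ] && continue
  echo "Creating metadata file for image file - $f"
  echo "alt_text: " > "$f.meta.yaml"
done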

How to download referenced images and videos of tweets

Recently I wanted to download a Twitter stream. I didn’t want to write a specific program for it. After some research I found the following steps:

  1. Load the tweets into a Google Sheet. This can be done using the Twitter Archiver plugin
  2. Download the file to the desktop
  3. Remove everything but the column “Media” and save the file as a CSV file
  4. Remove the empty lines:
    sed -i '/^$/d' file.csv
  5. Let wget download the images/files
    wget -i file.csv
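
Steps 3 to 5 can also be scripted. The sketch below assumes the exported sheet is saved as tweets.csv, that the header row contains a column literally named "Media", and that the media URLs contain no embedded commas:

# find the position of the "Media" column in the header row
col=$(head -1 tweets.csv | tr ',' '\n' | grep -n '^Media$' | cut -d: -f1)

# extract that column, drop the header row and empty lines, then download everything
cut -d',' -f"$col" tweets.csv | tail -n +2 | sed '/^$/d' > file.csv
wget -i file.csv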

Extremely slow apt-get update on Debian Jessie

I was fiddling around with the phpdocker.io service. I generated a PHP 5.6 image (phpdockerio/php56-fpm). When I tried to build it, the following instruction was extremely slow:

RUN apt-get update \
    && apt-get -y --no-install-recommends install php5-mysql php5-gd php5-imagick \
    && apt-get clean; rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /usr/share/doc/*

First I thought that it was completely frozen, as it didn’t show any progress. Then I realized that it was just extremely slow: it printed a line after an hour or so. Upon further analysis I found this bug: https://bugs.launchpad.net/ubuntu/+source/apt/+bug/1332440. Because of other containers I had configured a high ulimit (the maximum number of open file descriptors).

I then realized that the fix was pretty trivial: I just had to lower the ulimit temporarily. Doing this made it possible to run the command in a couple of seconds:

RUN ulimit -n 10000 && apt-get update \
    && apt-get -y --no-install-recommends install php5-mysql php5-gd php5-imagick \
    && apt-get clean; rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /usr/share/doc/*
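
The limit can also be capped outside the Dockerfile. The commands below are only a sketch assuming a reasonably recent Docker CLI; the image and tag names are examples:

# check which open-file limit a container actually inherits
docker run --rm debian:jessie sh -c 'ulimit -n'

# cap the limit for the whole build instead of inside every RUN instruction
docker build --ulimit nofile=10000:10000 -t php56-app .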

Installing Brother DCP-J41100W Scanner under Fedora 28 Linux

After upgrading to Fedora 28 the scanner was not working anymore. Here are some hints to fix this:
1. Download the Driver Install Tool from the Brother Support Webpage
2. Run the install script

linux-brprinter-installer-2.2.sh

(in my case I specified a Device URI and entered the IP address)
3. Install libnsl by running

dnf install libnsl

4. The scanner should now work. You can check this by running

scanimage -L

It should be listed in the output.
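
As a quick end-to-end test you can also scan a page from the command line. The device string below is only a placeholder; use the exact name that scanimage -L reports:

# scan one page from the device reported by "scanimage -L"
scanimage -d 'brother4:net1;dev0' --format=tiff > test-scan.tiff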

Set up Solr full-text search in 5 minutes

  1. Download the latest version of Solr (Solr 6.5.1)
  2. Unpack it and go into the bin directory
  3. Start it up by executing:
    ./solr start
  4. Initialize the configuration by executing:
    ./solr create -c files -d ../example/files/conf
  5. Index your files by executing:
    ./post -c files ~/Documents
  6. Open the web browser at http://localhost:8983/solr/files/browse and start searching

At this point it is basically already working.
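
The index can also be queried directly over HTTP, which is handy for scripting; the search term here is just an example:

# query the "files" core via the select handler (8983 is Solr's default port)
curl 'http://localhost:8983/solr/files/select?q=invoice&wt=json'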

I prefer to do some more optimizations:

  1. Open server/solr/files/conf/velocity/head.vm and remove the CSS rule .result-document:hover. This gets rid of the annoying zoom effect when hovering over search results
  2. Open server/solr/files/conf/velocity/hit.vm and replace the plain title output

        $title

    with a link that points at the file itself, for example (the exact Velocity expression is an assumption; in the files example the id field contains the absolute path of the indexed file):

        <a href="file://$doc.getFieldValue('id')">$title</a>

    This adds a link for each result to directly open the file. (As the link is local, you need to install the Firefox extension local_filesystem_links.)