22 Mar

Resilio Connect

A couple of months ago I was looking at a p2p sync service that works much like Seafile or ownCloud, except with a p2p engine that, according to their webpage, has been improved and hardened. So I signed up for more information and the possibility to try a demo. It took a while and I thought it was a no-show, but last week I received a phone call with more information about the products and how they work. I also received a demo key to try it out for the company where I work.

In this short post I will go through the product from installation to a full test, and give some pros and cons. But first I'll go through how it works.

P2P

Peer-to-peer is a rather interesting protocol. Instead of the classic client -> server model, it is about decentralising the content: files are split into smaller chunks and distributed to peers (nodes) that also hold the file or parts of it. This means we don't need a dedicated server constantly serving the content; instead, the collected set of nodes needs to hold 100% of the content between them to sync up. P2P works great as long as the content is distributed to one or more nodes. Computers that are fully synced and hold the whole content are often called seeders, since they can serve all of it, while peers hold parts of the content and are usually still downloading. The main points are decentralisation, increased download speed for the peers, and spreading the load over several nodes.
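The chunking idea can be sketched with standard tools: split a file into fixed-size pieces and hash each one, which is roughly what a p2p engine does before handing pieces to nodes. A minimal illustration only, not Resilio's actual chunk size or wire format:

```shell
# Minimal p2p-style chunking demo: any node holding a chunk can
# serve it, and the hash manifest lets a peer verify each chunk
# independently of the others.
demo=/tmp/p2p-demo
rm -rf "$demo" && mkdir -p "$demo"
cd "$demo"

# Create a 1 MiB test "payload"
dd if=/dev/urandom of=payload.bin bs=1024 count=1024 2>/dev/null

# Split into 256 KiB chunks: chunk.aa, chunk.ab, ...
split -b 262144 payload.bin chunk.

# Hash every chunk; peers compare manifests to see which pieces they miss
sha256sum chunk.* > manifest.txt

# Reassembling all chunks reproduces the original file exactly
cat chunk.* > rebuilt.bin
cmp payload.bin rebuilt.bin && echo "rebuild OK"
```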

Resilio

Free, Pro, & Workgroup
Resilio offers 3 different kinds of products. Resilio Free/Pro is a basic file-syncing program which resembles Seafile except for the engine. Free is rather limited but still a good product; if you spend some extra bucks you get a decent program that syncs your files across several devices and also includes selective sync. I would go with Resilio Free if it's just syncing from device to device and I don't have several devices at home. But with several devices syncing across a WAN interface I would go with the Pro edition, since I could limit the bandwidth usage and use selective sync to limit what I sync across the WAN. More info about it here: https://www.resilio.com/individuals/

Workgroup is the second product, and it is rather good if you want to use it at work, for example on projects. The difference between Workgroup and Resilio Free/Pro is the multi-user concept: you are able to share files and folders with co-workers, it is very easy to manage which user has access to which file, and easy to remove unwanted users. Another thing Workgroup offers that Free/Pro doesn't is ease of access to the files: it is very easy to share a QR code and give read-only access to a specific user. It is also a good product compared to the other syncing tools because it offers unlimited data. More info here: https://www.resilio.com/workgroups/

Connect
Resilio Connect is an enterprise solution. Unlike Workgroup and Pro/Free, its goal is centralised management control over file syncing and command execution (almost like Group Policy in Windows Server). It is a rather good solution if you want centralised control only, without allowing the users to select what to sync and when. It also has several good features such as scheduled syncing, WAN optimisation, and selective syncing. It is a very good tool for syncing data between servers and sending out distributed commands, and the management interface lets the administrator view whether 'jobs' are done and executed. Connect allows almost no user control; the users/clients are not allowed to decide when, how, or where to sync the data, that is up to the administrator to direct and manage. More information can be found here: https://www.resilio.com/connect/

Installing

After a clean install of Debian I decided to download and install Resilio. It seems very simple to install on Windows, since all the installation guides tend to deal with that, but from my perspective the Linux install seems odd. From what I can read it's a basic download and un-tar into /opt. I would never do it this way, but since this is a test run of the product I'll go with it. There is no boot script, which means you either have to find one or write your own. Remember: NEVER RUN THIS AS ROOT!

Installing the management console:

# Install
cd /tmp/
wget https://changethis/resilio-connect-server-linux-x64.tar.gz
tar -xvf resilio-connect-server-linux-x64.tar.gz -C /opt

# Create a dedicated system user (no home directory, no login shell)
useradd -M -r -s /bin/false resilioconnect
chmod -R 755 /opt/resilio-connect-server
chown -R resilioconnect:resilioconnect /opt/resilio-connect-server

# No boot script published, please create your own. 

# To run it (as the dedicated user, never as root)
su -s /bin/sh resilioconnect -c '/opt/resilio-connect-server/srvctrl start'
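Since no boot script ships with the Linux tarball, something like the following systemd unit can fill the gap. This is a sketch under the assumption that `srvctrl start`/`stop` manage the daemon and that the paths and user match the install above; adjust to whatever the tarball actually contains:

```shell
# Write a sketch of a systemd unit for the Resilio Connect server.
# Assumptions: install path /opt/resilio-connect-server, service user
# "resilioconnect", and srvctrl start/stop as control commands.
# Written to /tmp here; copy it to /etc/systemd/system and run
# `systemctl daemon-reload && systemctl enable --now resilio-connect`
# on the real machine.
cat > /tmp/resilio-connect.service <<'EOF'
[Unit]
Description=Resilio Connect management server
After=network.target

[Service]
Type=forking
User=resilioconnect
Group=resilioconnect
ExecStart=/opt/resilio-connect-server/srvctrl start
ExecStop=/opt/resilio-connect-server/srvctrl stop

[Install]
WantedBy=multi-user.target
EOF
echo "wrote /tmp/resilio-connect.service"
```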

Testing

The first test was syncing a 3.8 GB ISO file between 2 computers and 1 server. The second test was distributing commands and forcing the computers to reboot on demand. The third and final test was working on the same document at the same time in a synced folder.

The setup
The setup is a very basic network: one AC access point connected to a gigabit switch, one MacBook connected to the access point with a max theoretical speed of 300 Mbit, one server with Debian connected to the switch on a 1 Gbit interface (this server also has the Resilio Connect management console installed), and one client PC connected to the switch on a 1 Gbit interface. Both the MacBook and the client PC have the Resilio agent installed and connected to the server. A clear image of the setup is shown below.

1. Result from filesync
The files synced rather fast. The server got the whole file first, but the client was synchronising at the same time, which meant that once the server was done it also pushed out (seeded) the file stored in the shared folder, pushing the client to a peak of 700 Mbit, which wasn't bad. It took a bit longer to sync several small files, but the result was the same: both the server and the clients were pushed to their limits.

2. Results from cmd’s
First I ran a basic restart script to find out if the computers and the server actually reboot. None of them restarted, since none of the agents on the computers had the privilege to execute it, though the Resilio manager still returned 'job done'. It was rather neat that I was able to set different reboot commands depending on the OS. The second command fetched basic system information from the systems into the shared sync folder, which was a success: all the systems returned the information they had privilege to access. This could be an alternative way to push out scripts and files, perfect if you are managing several servers that require a patch or application.
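The system-information job can be approximated with a small script like the one below, dropped into the synced folder by each agent so every peer ends up with a report from every machine. The folder path and file naming are assumptions for illustration:

```shell
# Collect basic system information into a synced folder, one file per
# host. SYNC_DIR is an assumed path; point it at your actual shared folder.
SYNC_DIR=${SYNC_DIR:-/tmp/sync-demo}
mkdir -p "$SYNC_DIR"

host=$(hostname)
report="$SYNC_DIR/sysinfo-$host.txt"

{
  echo "== $host =="
  uname -a            # kernel and architecture
  uptime              # load and time since boot
  df -h /             # root filesystem usage
} > "$report"

echo "report written: $report"
```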

3. Several editing the same file
I did some basic text document editing, just to find out what happens when two or more users work on the same file at the same moment. On one computer I wrote a couple of lines, saved, and kept the file open; on the other computer I edited the same file and saved on exit. I waited for the sync to complete: it overwrote the file on computer 1. Saving and exiting on computer 1 again brought the old version back. So no version check is done and no conflicting copies are created, which I dislike since I always have users editing projects at the same time. Maybe a feature they might implement, but who knows.

Conclusion

Resilio Connect seems to be at an early stage of development. It has a really good concept and possibly a good engine, but it lacks several features that could give the administrator and the users more control over content. I will give a very strong warning to whoever uses this software: make sure to jail it, and ensure no one ever gets access to the management interface without permission or experience. If this software is installed with default settings and runs with the wrong permissions, it could do more damage than good.

What I would love to see is a mix of Resilio Connect and Resilio Workgroup, especially for a larger company that needs Workgroup features but still wants some control over how and when everything is used over the network: being able to limit bandwidth usage at different hours, making it easier to force certain users into syncing specific folders, and maybe expanding the possibility of backing up user data with this tech. The selective sync might be advanced for beginners; it would be easier with an interface showing what is and isn't synced in a project.

This is almost like the project in the TV show 'Silicon Valley': a super good idea and engine, but lacking the bigger picture and a direction on where to go. It will be a hit with advanced users and administrators, but it doesn't reach the less savvy users.

Pros
+ Fast syncing
+ Server downtime isn't the end of the world
+ Nice-looking interface
+ Background sync works perfectly
+ Handles large file transfers well
Cons
– Users are not able to select what to sync.
– Users spend their own bandwidth to sync.
– Linux lacks a boot script and has a sketchy install
– Has few functions compared to the competition
– Lacks the features that Resilio Workgroup has.
– Lacks a bigger picture
06 Mar

Freebsd 11.0 – FEMP

FEMP

This is a basic setup of FEMP (FreeBSD, Nginx, MariaDB, PHP), split into several parts. First you have to choose between installing from ports or building everything from source; both work, but the ports install takes less time to do. After the installation, move on to configuration and try it out. This is by no means an optimal guide, but a fast jump into installing FEMP.

Through Ports

###################################################################
#
# First i start off with Nginx since its the easy part of the install
#
###################################################################

# Enter the ports folder for nginx
cd /usr/ports/www/nginx

# Configure the build, Basic settings are fine for beginners
make config-recursive

# Install and clean up
make install clean

# Add nginx to startup upon boot of the os
echo 'nginx_enable="YES"' >> /etc/rc.conf

# Create a www data folder that is easy to access in the future, and change its owner.
mkdir /home/www
mkdir /home/www/default
chown www:www /home/www


###################################################################
#
#  Next step is installing the database; I prefer mariadb.
#
###################################################################

# Enter the ports folder for mariadb server 10.1
cd /usr/ports/databases/mariadb101-server/

# Configure the build, accept basic settings
make config-recursive

# Install and clean up, this will take a while.
make install clean

# Setup the password for mysql root account, default is none if asked for passwd.
# Set a strong password, and then default settings on the rest.
/usr/local/bin/mysql_secure_installation

# Add it to startup upon boot of os.
echo 'mysql_enable="YES"' >> /etc/rc.conf

###################################################################
#
# Now time for the last part: installing php.
#
###################################################################

# Enter the ports folder for php 7.0
cd /usr/ports/lang/php70

# Ensure that fpm is marked.
make config-recursive

# Install and clean-up
make install clean

# Add php to spawn upon boot
echo 'php_fpm_enable="YES"' >> /etc/rc.conf

# Copy the production file into php.ini
cp /usr/local/etc/php.ini-production /usr/local/etc/php.ini

# install following extensions Json, session, mysqli, mbstring, 
# gd, openssl, zlib, zip, pdo_mysql
cd /usr/ports/lang/php70-extensions
make config-recursive
make install clean

Building from source

######################################
#  Fetch all the files
######################################
cd /tmp/
fetch ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.40.tar.gz
fetch http://www.cpan.org/src/5.0/perl-5.24.1.tar.gz
fetch http://nginx.org/download/nginx-1.11.9.tar.gz
fetch http://se2.php.net/get/php-7.1.1.tar.gz/from/this/mirror
fetch http://acc.dl.osdn.jp/php-i18n/52624/libmbfl-1.2.0.tar.gz
fetch http://www.zlib.net/zlib-1.2.11.tar.gz
fetch --no-verify-peer https://www.openssl.org/source/openssl-1.0.2k.tar.gz
fetch http://www.libarchive.org/downloads/libarchive-3.2.2.tar.gz
fetch --no-verify-peer https://curl.haxx.se/download/curl-7.52.1.tar.gz
fetch --no-verify-peer https://cmake.org/files/v3.7/cmake-3.7.2.tar.gz
fetch --no-verify-peer https://ftp.gnu.org/gnu/m4/m4-1.4.18.tar.gz
fetch --no-verify-peer https://ftp.gnu.org/gnu/bison/bison-3.0.4.tar.gz
fetch --no-verify-peer https://ftp.gnu.org/gnu/bash/bash-4.4.tar.gz
fetch --no-verify-peer https://ftp.gnu.org/gnu/bash/readline-5.1.tar.gz
fetch ftp://sourceware.org/pub/libffi/libffi-3.2.1.tar.gz
fetch --no-verify-peer https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz
fetch ftp://ftp.gnu.org/gnu/gss/gss-1.0.3.tar.gz
fetch http://ftp.ddg.lth.se/mariadb//mariadb-10.1.21/source/mariadb-10.1.21.tar.gz
fetch --no-verify-peer https://ftp.gnu.org/gnu/automake/automake-1.15.tar.gz
fetch --no-verify-peer https://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.gz
fetch http://ftp.acc.umu.se/mirror/gnu.org/gnu/libtool/libtool-2.4.tar.gz
fetch ftp://xmlsoft.org/libxml2/libxml2-2.9.4.tar.gz
fetch http://downloads.webmproject.org/releases/webp/libwebp-0.6.0.tar.gz
fetch --no-verify-peer https://kent.dl.sourceforge.net/project/libpng/libpng16/1.6.28/libpng-1.6.28.tar.gz
fetch --no-verify-peer https://gmplib.org/download/gmp/gmp-6.1.2.tar.xz
fetch http://www.ijg.org/files/jpegsrc.v9b.tar.gz
fetch --no-verify-peer https://mirrors.netix.net/sourceforge/m/mc/mcrypt/Libmcrypt/2.5.8/libmcrypt-2.5.8.tar.gz



######################################
#  Un-tar everything in /tmp
######################################
# Note: the php download was saved as "mirror" because of the mirror URL.
# tar -xf auto-detects the compression, so it handles the .tar.gz, .tgz
# and .tar.xz archives (and the odd "mirror" file) alike.
for f in *.tar.gz *.tgz *.tar.xz mirror; do tar -xf "$f"; done


######################################
#
#  Required files/packages  before
#  install of php,nginx,mariadb
#
######################################
cd /tmp/perl-5.24.1
./configure.gnu
make
make install

cd /tmp/openssl-1.0.2k
./config -fPIC shared
make
make install

cd /tmp/libarchive-3.2.2
./configure
make
make install

cd /tmp/curl-7.52.1
./configure
make
make install

cd /tmp/cmake-3.7.2
./configure
make
make install

cd /tmp/m4-1.4.18
./configure
make
make install

cd /tmp/bison-3.0.4
./configure
make
make install

cd /tmp/bash-4.4
./configure
make
make install

cd /tmp/readline-5.1
./configure
make
make install

cd /tmp/libffi-3.2.1
./configure
make
make install

cd /tmp/Python-2.7.13
./configure --enable-shared --enable-optimizations
make
make install

cd /tmp/gss-1.0.3
./configure
make
make install

cd /tmp/pcre-8.40
./configure
make
make install

cd /tmp/autoconf-2.69
./configure
make
make install

cd /tmp/automake-1.15
./configure
make
make install

cd /tmp/libtool-2.4
./configure
make
make install

cd /tmp/libmbfl-1.2.0
chmod +x buildconf
./buildconf
./configure
make
make install

cd /tmp/libxml2-2.9.4
./configure
make
make install

cd /tmp/libwebp-0.6.0
./configure
make
make install

cd /tmp/libpng-1.6.28
./configure
make
make install

cd /tmp/gmp-6.1.2
./configure
make
make install

cd /tmp/jpeg-9b
./configure
make
make install

cd /tmp/libmcrypt-2.5.8
./configure --disable-posix-threads
make
make install

######################################
#  MARIADB
######################################
cd /tmp/mariadb-10.1.21
pw groupadd mysql
pw adduser mysql -g mysql -d /usr/local/mysql

# In the menu just press c to configure; it takes a while.
# Set 'JEMALLOC_STATIC_LIBRARY' and 'WITH_JEMALLOC' to no,
# and on the last line set 'PLUGIN_TOKUDB' to no.
# Press c and then g to save the config & quit.
ccmake .

# Compiling using 4 threads.
make -j4

# Install everything
make install

# Add the run file
mkdir /usr/local/etc/
mkdir /usr/local/etc/rc.d
cd /usr/local/etc/rc.d
fetch --no-verify-peer https://lyxi.ga/wp-content/uploads/2017/mysql-server
chmod +x /usr/local/etc/rc.d/mysql-server

# Add the default my.cnf (change this later on..)
mkdir /var/db/mysql
cd /var/db/mysql/
fetch --no-verify-peer https://lyxi.ga/wp-content/uploads/2017/my.cnf

# Start sql server
/usr/local/etc/rc.d/mysql-server onestart

# Set password for root mysql user
/usr/local/mysql/bin/mysql_secure_installation


######################################
#  NGINX
######################################
cd /tmp/nginx-1.11.9
./configure --sbin-path=/usr/local/sbin/nginx --conf-path=/usr/local/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --with-cc-opt="-I /usr/local/include" --with-ld-opt="-L /usr/local/lib" --with-http_stub_status_module
make
make install

# Add the run file
cd /usr/local/etc/rc.d
fetch --no-verify-peer https://lyxi.ga/wp-content/uploads/nginx
chmod +x /usr/local/etc/rc.d/nginx

mkdir /var/log/nginx/
mkdir /home/www
mkdir /home/www/default
chown www:www /home/www

######################################
#  PHP
######################################
cd /tmp/php-7.1.1

# Compiling php with the needed extensions.
./configure \
--enable-fpm \
--with-fpm-user=www \
--with-fpm-group=www \
--enable-libxml \
--enable-zip \
--with-bz2=shared \
--with-curl=shared \
--with-gd \
--with-jpeg-dir=/usr \
--with-png-dir=/usr \
--with-webp-dir=/usr \
--enable-gd-native-ttf \
--with-gmp=shared \
--enable-mbstring \
--enable-bcmath \
--with-mcrypt \
--with-mhash=shared \
--with-mysqli=/usr/local/mysql/bin/mysql_config \
--with-pdo-mysql \
--enable-sockets \
--with-zlib \
--enable-ftp \
--enable-sysvmsg \
--enable-sysvsem \
--enable-sysvshm \
--with-openssl

# Install
make -j4
make install

# Add the run file
cd /usr/local/etc/rc.d
fetch --no-verify-peer https://lyxi.ga/wp-content/uploads/2017/php-fpm
chmod +x /usr/local/etc/rc.d/php-fpm

# Fetch a prepared php.ini into the correct folder so php loads the correct ini
cd /usr/local/etc/
fetch --no-verify-peer https://lyxi.ga/wp-content/uploads/2017/php.ini
chmod 755 php.ini

cp /usr/local/etc/php-fpm.d/www.conf.default /usr/local/etc/php-fpm.d/www.conf
cp /usr/local/etc/php-fpm.conf.default /usr/local/etc/php-fpm.conf

# Change this line in /usr/local/etc/php-fpm.conf
include=NONE/etc/php-fpm.d/*.conf
# To
include=/usr/local/etc/php-fpm.d/*.conf

# And in php-fpm.conf uncomment and change
;pid = run/php-fpm.pid
# To
pid = /var/run/php-fpm.pid
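The two manual edits above can also be scripted. A sketch with sed, demonstrated here against a local copy of the relevant lines so it is safe to run anywhere; point it at /usr/local/etc/php-fpm.conf on the real server:

```shell
# Apply the two php-fpm.conf fixes non-interactively:
#  1) point the include line at the real php-fpm.d directory
#  2) uncomment the pid line and give it an absolute path
conf=/tmp/php-fpm-demo.conf
cat > "$conf" <<'EOF'
;pid = run/php-fpm.pid
include=NONE/etc/php-fpm.d/*.conf
EOF

# -i.bak works with both GNU and BSD sed
sed -i.bak \
  -e 's|^include=NONE/etc/php-fpm.d/\*.conf|include=/usr/local/etc/php-fpm.d/*.conf|' \
  -e 's|^;pid = run/php-fpm.pid|pid = /var/run/php-fpm.pid|' \
  "$conf"

grep '^include=' "$conf"
grep '^pid' "$conf"
```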


######################################
#  Last part + cleanup
######################################
# Add programs to boot, nginx,php,mysql.
echo 'php_fpm_enable="YES"' >> /etc/rc.conf
echo 'mysql_enable="YES"' >> /etc/rc.conf
echo 'nginx_enable="YES"' >> /etc/rc.conf

# Remove all files in /tmp
rm -R /tmp/*
######################################
#  Now continue to configuration before trying out the services!
######################################


Configurations

There are a couple of basic configurations that need to be done. Let's start with configuring php-fpm, continue with php.ini, and then get nginx to work with php.

/usr/local/etc/php-fpm.d/www.conf

# Uncomment the listen socket ownership settings
;listen.owner = www
;listen.group = www
;listen.mode = 0660

# To the following
listen.owner = www
listen.group = www
listen.mode = 0660

/usr/local/etc/php.ini

# Security option that needs to be uncommented and set to 0
;cgi.fix_pathinfo=1

# Change it to
cgi.fix_pathinfo=0

# If you plan to transfer large files, change this to preferred value
upload_max_filesize = 2M

# And if you have larger transfers make sure that the execution time is extended
max_execution_time = 30

/usr/local/etc/nginx/nginx.conf
First delete the old nginx.conf and replace it with the one below.

# Set the worker process to auto, or the amount of cpus/core you have
worker_processes  auto;
# Store errors in /var/log/nginx/error.log, logging warnings and above
error_log /var/log/nginx/error.log warn;
# Store the pid in /var/run/nginx.pid
pid /var/run/nginx.pid;

events {
    worker_connections  1024;
    multi_accept on;
    use kqueue;
}


http {
    # Define the mime.types
    include       mime.types;
    default_type  application/octet-stream;

    # How logs are structured
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    # Store access logs in /var/log/nginx/access.log
    access_log  /var/log/nginx/access.log  main;

    # Optimising nginx, when serving static content.
    sendfile        on;
    tcp_nopush     on;
    tcp_nodelay    on;
    # Set keepalive time out to 15 sec
    keepalive_timeout  15;

    # Enable gzip compression
    gzip  on;
    
    # Disable gzip for old Internet Explorer versions
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    # Include site configurations from the sites/ folder.
    include  /usr/local/etc/nginx/sites/*;
}

/usr/local/etc/nginx/sites/default.site

server {
    listen 80;
    server_name localhost;
    root /home/www/default;

    location / {
        index  index.php index.html index.htm;
    }

    location = /50x.html {
        root   html;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }

    location ~ /\.ht$ {
         deny  all;
    }
}

/usr/local/etc/my.cnf

If you want to change some MariaDB settings, example configs are located in /usr/local/share/mysql/ with filenames such as my-small.cnf, my-medium.cnf, my-large.cnf, and my-huge.cnf. Open the preferred size, configure it, and save it as /usr/local/etc/my.cnf.

And we are done! Good luck optimising!

Testing the build

A basic test can be done by putting a phpinfo() call in the index file, just to make sure PHP works:
/home/www/default/index.php

<?php
phpinfo();
?>

Or we could do a quick test to see if php, mariadb and nginx work together, by installing and using phpMyAdmin and WordPress.

Install Phpmyadmin

# Enter the tmp folder
cd /tmp

# Fetch the phpmyadmin
fetch --no-verify-peer https://files.phpmyadmin.net/phpMyAdmin/4.6.5.2/phpMyAdmin-4.6.5.2-all-languages.tar.gz

# Untar the file
tar -zxf phpMyAdmin-4.6.5.2-all-languages.tar.gz

# Make directory phpmyadmin in webfolder
mkdir /home/www/default/phpmyadmin

# Copy the files to the webfolder
cp -R /tmp/phpMyAdmin-4.6.5.2-all-languages/* /home/www/default/phpmyadmin

#remove temp files
rm -R /tmp/phpMyAdmin-4.6.5.2-all-languages /tmp/phpMyAdmin-4.6.5.2-all-languages.tar.gz

Now you should be able to access phpMyAdmin at http://ip_to_server/phpmyadmin. While trying it out, make sure to create a database for the WordPress install.
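Instead of clicking through phpMyAdmin, that database can also be created from the mysql client (/usr/local/mysql/bin/mysql -u root -p). The database name, user, and password below are placeholders for illustration, not anything this guide requires:

```sql
-- Placeholders: database "wordpress", user "wpuser"; pick your own.
CREATE DATABASE wordpress CHARACTER SET utf8mb4;
CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'localhost';
FLUSH PRIVILEGES;
```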

Install WP

# Enter the tmp folder
cd /tmp

# Fetch the latest wordpress file
fetch --no-verify-peer https://wordpress.org/latest.tar.gz

# Untar the file
tar -zxf latest.tar.gz

# Copy wordpress file into default www folder 
cp -R /tmp/wordpress/* /home/www/default/

# Remove temp files
rm -R /tmp/wordpress /tmp/latest.tar.gz

# Done

After adding the WordPress files to the www folder, browse to http://ip_to_server/ and follow the installation guide; if everything is set up correctly the setup should work flawlessly. Remember to set the right folder permissions to get it to work; the correct permissions can be found at: https://codex.wordpress.org/Changing_File_Permissions
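A common baseline for those permissions is directories 755, files 644, with the whole tree owned by the web user (www:www in this guide). Demonstrated here on a scratch tree so the sketch is safe to run; point WEBROOT at /home/www/default (and keep the chown) on the real server:

```shell
# Baseline WordPress permissions: web user owns the tree,
# directories 755, files 644. WEBROOT defaults to a scratch
# directory for demonstration purposes.
WEBROOT=${WEBROOT:-/tmp/wp-demo}
mkdir -p "$WEBROOT/wp-content/uploads"
touch "$WEBROOT/index.php" "$WEBROOT/wp-content/uploads/test.jpg"

# chown -R www:www "$WEBROOT"   # on the real server, as root
find "$WEBROOT" -type d -exec chmod 755 {} \;
find "$WEBROOT" -type f -exec chmod 644 {} \;

ls -l "$WEBROOT/index.php"
```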

There are a lot of improvements that could be made to this setup, for example adding caching features, adding or removing extensions, and solving the SNMP issue if the service provider blocks it, and so on.

01 Feb

Watercooling – Open system

This is my first attempt at connecting water cooling to a computer, and it contains a couple of newbie issues due to a lack of reading before ordering. But in the end everything went well and is fully running. I want to thank my students for giving me suggestions on how to complete this build and for coming up with improvements, since I'm not perfect in any way. You will find the full shopping cart for this water-cooling build at the bottom of the post.

GPU Waterblock

First I had to remove the fan and current cooling from the graphics card and clean it, which was rather easy. Screwing on the GPU block wasn't a big deal; ensure that the holes face upwards for a clean and good connection.

Two things I forgot while ordering the cooling system were the passive heatsinks for the small chipset/memory marked in the 3rd photo. I was also stupid or lazy enough not to read the item description while ordering, so I had no connector for the GPU block to the tube. So in my second delivery I added the connector, heatsinks, and an extra tube just in case of incidents.

Installing the components

Installing the components is easy: just put the mainboard down in the case and screw it down, same with the graphics card, memory, and power supply. I'm skipping the wires till the end, just to ensure that I won't drill into them while modifying the case. The images below are self-explanatory as to how everything is placed and screwed down into the case.

Installing radiator

Starting off with the placement of each part: there are some alternative places where everything could be screwed down, but all those alternatives are meant for a smaller kit. If mounted there, everything would be weak and easy to break, and it could even cause some vibration in the case.

I decided to use the 5.25” drive bay for the radiator since I had no mini display or DVD drive for the build. This meant I had to drill some extra holes to keep it in place. Since the power supply is right under the radiator, I gave it some space by leaning the radiator on the back plate and screwing everything down. On the plus side, it looks good and fits perfectly with the rest of the case.

Images of the radiator modification/placement

Installing the pump/reservoir

The pump and reservoir were a bit trickier to add to the case. My students gave me several tips on where to put it: either on the right side of the computer case, which would look slick but had a flaw when it came to connecting the pipes and wires, and was also a hassle for getting the UV LED strip to shine on everything; or in the back of the case, almost where the water pipe holes are located, which is a good place for the wiring but would keep it hidden all the time.

We ended up placing it at the bottom of the mainboard, which hides some of the wires while keeping the tubes easy to connect, and it gives that extra nice look. This also meant we had to drill 4 holes in the case, which was an easy fix.

Images of the pump/reservoir placement

Wiring

I kept the wiring pretty basic, and I decided to save sleeving for a future lesson. The goal is to hide all the wires and keep them in order, instead of the rat's nest people tend to create. This part is just time-consuming, not hard to do. I left some areas to improve in the future, but that has to wait, because I have to replace the power supply due to a faulty connector and then sleeve the wires with bright green sleeves.

Images of the wiring

Watertubes

The water tubes were easy to install; the hard part was making sure I had enough tubing to make it go around. It didn't allow any mistakes, and none were made. I recommend having the correct tools for cutting the tubes and measuring correct lengths. We decided to route the water from pump -> radiator -> cpu -> gpu -> back to pump.

Up and running end images

PC-Specs

CPU: Intel 4670k | Mainboard: Asus Z87-Pro | GPU: Gigabyte Nvidia 660OC-2GD
Memory: 2x8Gb | Powersupply: Corsair 500W modular psu. | HDD: 2x 1TB WD-Black, in raid 0.

Bill of material

http://www.coolerkit.com/shop/ek-vga-supremacy-3316p.html – GPU Block
http://www.coolerkit.com/shop/xspc-raystorm-d5-4261p.html?key=01XS025 – Full Kit
http://www.coolerkit.com/shop/alphacool-gpu-heatsinks-1763p.html?key=AL002 – GPU Heatsink
http://www.coolerkit.com/shop/alphacool-gpu-heatsinks-4170p.html – More GPU heatsink
http://www.coolerkit.com/shop/xspc-push-on-2991p.html – 2 Extra push-on screws for the gpu
http://www.coolerkit.com/shop/coollaboratory-liquid-2585p.html – The liquid 1L

19 Jan

Freebsd 11.0 – LanCache

During one of the LAN events I was hosting for my students I noticed the total bandwidth for 3 days was very high: 2.15 TB of data transferred in just a couple of days. No one complained, because the event was hosted on a 500/500 Mbit fiber connection, but if I were to extend the event to more than 35 users, problems would arise.

My first thought was a Squid proxy, but since a lot of the data came from Steam or similar game services I started looking into something called LAN caching, which should cache as much as possible. I ended up with the configuration below, and I will use it in the future to reduce bandwidth usage on the WAN interface. This configuration will run on 2 identical high-performance servers connected to a core Cisco switch.

There are some improvements that I still have to make to this setup; I'm still doing trial runs with some specific users to learn which services fail and what I fail to cache. This isn't a final build and must always be optimised and changed before an event.

Stats from the first event:

Network drawing – Practical example

Requirements
Complete the hdd/ssd setup & mount it as /data.
Complete the network structure: gateway + switches (read about lagg).
Fresh install of unbound, or select it during the FreeBSD 11.0 install.
First install nginx – Server 1 & 2
# Select basic settings + add rewrite module + ssl + slice
cd /usr/ports/www/nginx
make config-recursive
make install clean
echo 'nginx_enable="YES"' >> /etc/rc.conf
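The caching vhosts themselves are not shown in this post. A minimal sketch of what one can look like for Steam depot traffic is below; the cache path under /data, the sizes, the server_name, and the public resolver are assumptions to illustrate the idea (a proxy_cache keyed per URI, with the slice module caching large depot files range by range):

```nginx
# Sketch: cache Steam depot downloads on the /data volume.
proxy_cache_path /data/cache/steam levels=2:2 keys_zone=steam:500m
                 inactive=120d max_size=500g;

server {
    listen 10.0.2.5:80;      # the lancache-steam carp alias
    server_name lancache-steam;  # placeholder; match the real CDN hostnames

    location / {
        # Fetch and store huge depot files in 1 MB slices.
        slice 1m;
        proxy_set_header Range $slice_range;
        proxy_cache steam;
        proxy_cache_key "$uri $slice_range";
        proxy_cache_valid 200 206 120d;
        proxy_set_header Host $host;
        # Resolve the upstream via an external resolver, not our own
        # unbound, or the request would loop back to this cache.
        resolver 8.8.8.8;
        proxy_pass http://$host$request_uri;
    }
}
```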
/boot/loader.conf – Server 1 & 2
# Load carp module
carp_load="YES"
# Buffer incoming connections until certain http request arrives 
accf_http_load="YES"
# Wait for data accept filter
accf_data_load="YES"
# Load lagg module
if_lagg_load="YES"
/etc/sysctl.conf – Server 1 & 2
# Carp settings
net.inet.carp.allow=1
net.inet.carp.preempt=1
net.inet.carp.log=1
# Allow ip forwarding
net.inet.ip.forwarding=1
/etc/rc.conf – Server 1
# LAGG & Carp Server 1
ifconfig_bce0="up"
ifconfig_bce1="up"
defaultrouter="10.0.2.1"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport bce0 laggport bce1 10.0.2.2/24 up"

ifconfig_lagg0_aliases="\
        inet vhid 1 advskew 100 pass paswd alias 10.0.2.4/32 \
        inet vhid 2 advskew 100 pass paswd alias 10.0.2.5/32 \
        inet vhid 3 advskew 200 pass paswd alias 10.0.2.6/32 \
        inet vhid 4 advskew 100 pass paswd alias 10.0.2.7/32 \
        inet vhid 5 advskew 200 pass paswd alias 10.0.2.8/32 \
        inet vhid 6 advskew 100 pass paswd alias 10.0.2.9/32 \
        inet vhid 7 advskew 200 pass paswd alias 10.0.2.10/32 \
        inet vhid 8 advskew 100 pass paswd alias 10.0.2.11/32 \
        inet vhid 9 advskew 200 pass paswd alias 10.0.2.12/32 \
        inet vhid 10 advskew 100 pass paswd alias 10.0.2.13/32 \
        inet vhid 11 advskew 200 pass paswd alias 10.0.2.14/32 \
        inet vhid 12 advskew 100 pass paswd alias 10.0.2.15/32"
/etc/rc.conf – Server 2
# LAGG & Carp Server 2
ifconfig_bce0="up"
ifconfig_bce1="up"
defaultrouter="10.0.2.1"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport bce0 laggport bce1 10.0.2.3/24 up"

ifconfig_lagg0_aliases="\
        inet vhid 1 advskew 200 pass paswd alias 10.0.2.4/32 \
        inet vhid 2 advskew 200 pass paswd alias 10.0.2.5/32 \
        inet vhid 3 advskew 100 pass paswd alias 10.0.2.6/32 \
        inet vhid 4 advskew 200 pass paswd alias 10.0.2.7/32 \
        inet vhid 5 advskew 100 pass paswd alias 10.0.2.8/32 \
        inet vhid 6 advskew 200 pass paswd alias 10.0.2.9/32 \
        inet vhid 7 advskew 100 pass paswd alias 10.0.2.10/32 \
        inet vhid 8 advskew 200 pass paswd alias 10.0.2.11/32 \
        inet vhid 9 advskew 100 pass paswd alias 10.0.2.12/32 \
        inet vhid 10 advskew 200 pass paswd alias 10.0.2.13/32 \
        inet vhid 11 advskew 100 pass paswd alias 10.0.2.14/32 \
        inet vhid 12 advskew 200 pass paswd alias 10.0.2.15/32"
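The alternating advskew values are what spread the VIPs across both servers: for each vhid, the node advertising the lower advskew becomes CARP MASTER, and the other stays BACKUP until it fails. A small sketch of that election logic (advskew values copied from the two rc.conf blocks; the script itself is only illustrative):

```shell
# For each vhid, the node with the LOWER advskew wins CARP MASTER.
s1="100 100 200 100 200 100 200 100 200 100 200 100"   # server 1 per vhid
s2="200 200 100 200 100 200 100 200 100 200 100 200"   # server 2 per vhid
i=1
masters=""
for a in $s1; do
    b=$(echo "$s2" | cut -d' ' -f"$i")
    if [ "$a" -lt "$b" ]; then m=server1; else m=server2; fi
    echo "vhid $i master: $m"
    masters="$masters $m"
    i=$((i + 1))
done
```

So roughly half the cache VIPs are served by each box during normal operation, and all of them fail over to the surviving box if one dies.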
/etc/hosts – Server 1 & 2
# Link some host names to specific IPs; these can be used in unbound and nginx
10.0.2.5        lancache-steam
10.0.2.6        lancache-riot
10.0.2.7        lancache-blizzard
10.0.2.8        lancache-hirez
10.0.2.9        lancache-origin
10.0.2.10       lancache-sony
10.0.2.11       lancache-arenanetworks
10.0.2.12       lancache-ubisoft
10.0.2.13       lancache-gog
10.0.2.14       lancache-turbine
10.0.2.15       lancache-microsoft
/etc/unbound/unbound.conf – Server 1 & 2
# Basic settings
server:
        interface: 10.0.2.2
        interface: 0.0.0.0
        interface: 10.0.2.4
        access-control: 0.0.0.0/0 allow
        private-address: 10.0.2.0/24
        ip-transparent: yes
        do-ip4: yes
        do-udp: yes
        do-tcp: yes
        do-daemonize: yes
        username: unbound
        directory: /var/unbound
        chroot: /var/unbound
        pidfile: /var/run/local_unbound.pid
        auto-trust-anchor-file: /var/unbound/root.key

include: /var/unbound/lancaching.conf
include: /var/unbound/forward.conf
include: /var/unbound/lan-zones.conf
include: /var/unbound/control.conf
include: /var/unbound/conf.d/*.conf
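Of the include files above, only lancaching.conf is fetched later in the post; the others aren't shown. A plausible forward.conf, which hands everything unbound doesn't answer locally to an upstream resolver (the resolver addresses are placeholders, not from the post):

```
# /var/unbound/forward.conf -- forward everything not answered locally
# upstream; replace the addresses with your ISP's or event resolvers
forward-zone:
        name: "."
        forward-addr: 8.8.8.8
        forward-addr: 8.8.4.4
```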

Fetching lancaching.conf – Server 1 & 2
# Enter the unbound folder, fetch the file: lancaching.conf
cd /var/unbound/
fetch --no-verify-peer http://lyxi.ga/wp-content/uploads/2017/lancaching.conf
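To give an idea of what the fetched file contains: each CDN hostname gets redirected to the CARP VIP of the matching cache, so clients using 10.0.2.4 as DNS are transparently pointed at nginx. A couple of illustrative entries (the hostnames here are examples; the fetched file carries the full per-vendor list):

```
# Example entries only -- one redirect zone per CDN hostname,
# answering with the VIP of the matching cache (here: steam, 10.0.2.5)
local-zone: "cs.steampowered.com." redirect
local-data: "cs.steampowered.com. IN A 10.0.2.5"
local-zone: "content1.steampowered.com." redirect
local-data: "content1.steampowered.com. IN A 10.0.2.5"
```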
Nginx cache folders + logs – Server 1 & 2
# Create folders for logs and cache data
mkdir -p /data/www/logs /data/www/cache/tmp /data/www/cache/other /data/www/cache/installs

# Change owner of the folders and set full permissions on /data
chown -R www:www /data && chmod -R 777 /data

# Download the nginx configs for lan-cache, remove old nginx.conf & unpack.
rm /usr/local/etc/nginx/nginx.conf
cd /tmp/
# This contains a modified version of junkhacker's lancache, adapted to work on FreeBSD.
fetch --no-verify-peer  http://lyxi.ga/wp-content/uploads/2017/lancachemaster.tar.gz
tar -zxf lancachemaster.tar.gz
cp -R /tmp/lancachemaster/* /usr/local/etc/nginx/
rm -R /tmp/lancachemaster /tmp/lancachemaster.tar.gz

# Original source of the configs below; check it for the latest updates:
https://github.com/junkhacker/lancache

# Restart nginx
/usr/local/etc/rc.d/nginx restart
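The fetched archive isn't reproduced in the post, but the caching mechanism it relies on looks roughly like this. This is only a sketch of the idea, not the actual junkhacker config; the cache path matches the /data layout created above and the listen address matches the steam VIP:

```
# Sketch of the core caching idea (illustrative, not the fetched config)
proxy_cache_path /data/www/cache/installs levels=2:2 keys_zone=installs:500m
                 inactive=120d max_size=500g;

server {
    listen 10.0.2.5:80;
    # must point at a REAL upstream resolver, never at the spoofing unbound,
    # or nginx would proxy requests back to itself (address is a placeholder)
    resolver 8.8.8.8;

    location / {
        # the slice module splits huge depot files into cacheable 1 MB ranges
        slice 1m;
        proxy_set_header Range $slice_range;
        proxy_cache installs;
        proxy_cache_key "$uri $slice_range";
        proxy_cache_valid 200 206 120d;
        proxy_pass http://$host$request_uri;
    }
}
```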
Try it out
# A handy command to monitor the current connection speeds
systat -ifstat

# Once everything is set up and the default DNS is set to 10.0.2.4, you should be able to
# launch Steam, do a fresh download, remove the game from the library and download it again.
# The result should be something similar to the photos below.

# This is by no means optimal; modifications need to be done a couple of days before the event.
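Besides watching interface throughput, the access logs tell you whether requests are actually served from cache. Assuming the nginx log format writes $upstream_cache_status (HIT/MISS) as a field, a hit ratio can be pulled out with awk; sample data stands in for a real file under /data/www/logs here:

```shell
# Compute the cache hit ratio from logged upstream_cache_status values.
# Sample data stands in for a real nginx log in this sketch.
printf 'HIT\nMISS\nHIT\nHIT\n' > /tmp/cache_status.txt
ratio=$(awk '{n[$1]++} END {printf "%.0f", 100 * n["HIT"] / NR}' /tmp/cache_status.txt)
echo "cache hit ratio: ${ratio}%"
```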

Photo before files cached:

Photo after files were cached:

Sources

https://blog.multiplay.co.uk/2014/04/lancache-dynamically-caching-game-installs-at-lans-using-nginx/
https://github.com/ForayJones/lancache
https://github.com/junkhacker/lancache
https://github.com/bntjah/lancache
https://blog.yolocation.pro/index.php/2016/02/03/how-to-install-lancache-on-debian/

15 Jan

Freebsd 11.0 – Link Aggregation (Lagg)

In this short post I'll go through what link aggregation is, how it works, and when/where it can be applied, ending with a basic practical example of how to set up lagg in FreeBSD 11.0.

This is link aggregation

Link aggregation is essentially grouping two or more network interfaces into one logical group, creating either failover or a higher-throughput connection. The more interfaces you add to the group, the higher the throughput and the lower the risk of total connection failure. Link aggregation also has some weaknesses: for example, if by coincidence two large file transfers end up on link 1 in the group, they can only max out the speed of link 1 while link 2 in the group sits unused.
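The collision weakness comes from how modes like lacp place traffic: each flow is hashed on its headers to one member port, so the flow stays in order but distinct flows can land on the same link. A toy sketch of that behaviour (the real lagg hash uses L2/L3/L4 header fields, not this simplified function):

```shell
# Toy illustration of flow hashing over a 2-port lagg:
# a flow sticks to one link, and two flows can collide on the same link.
pick_link() {
    # hash = (src_port + dst_port) mod number_of_links  (toy hash, not lagg's)
    echo $(( ($1 + $2) % 2 ))
}
echo "flow A (50000 -> 80) uses link $(pick_link 50000 80)"
echo "flow B (50002 -> 80) uses link $(pick_link 50002 80)"
echo "flow C (50001 -> 80) uses link $(pick_link 50001 80)"
```

Here flows A and B collide on the same link, which is exactly the situation where one member link saturates while the other idles.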

There are also different modes of link aggregation, which you can read about in FreeBSD's documentation: round robin, lacp, failover, and fec / loadbalance.

Where / when to use it
Some use it to cut costs; for example, it's much cheaper to activate link aggregation on existing hardware and gain some extra speed than to buy new equipment with faster links. New equipment usually means more work hours, planning, and paying for the hardware. But I would never recommend this as a long-term fix, because you are just pushing the issue a couple of months forward. Instead, it's a solution to prevent downtime while waiting for new equipment to arrive, giving time to plan ahead and implement the new solution / upgrade. Link aggregation can also be used for redundancy: if one link breaks, the other takes the load. Another time to use link aggregation is for events that require more speed than the usual load, for example conventions or LANs.

Practical example

An easy setup to enable link aggregation on two network ports. Both ports will answer on the IP 10.0.2.2; remember to enable link aggregation on the switch or network device connected to this server.

/boot/loader.conf

# load lagg upon boot
if_lagg_load="YES"

/etc/rc.conf

# Start both network cards
ifconfig_em0="up"
ifconfig_em1="up"

# Clone them into lagg0 and a loopback lo1
cloned_interfaces="lagg0 lo1"

# Bring the lagg interface up, select the lagg protocol lacp,
# group the interfaces (laggports), and set an IP address on lagg0 (10.0.2.2)
ifconfig_lagg0="up laggproto lacp laggport em0 laggport em1 10.0.2.2/24"

# set the default route (gateway)
defaultrouter="10.0.2.1"

Source
Much of the information is from the good documentation that freebsd has to offer, in this case: https://www.freebsd.org/doc/handbook/network-aggregation.html