Archive for ‘Ubuntu’ Category

Proxmox VPS for web development recipe….

Posted on 17:17, March 10th, 2014 by Many Ayromlou

A little while ago our web developer asked me to look into proxmox containers and how we could take advantage of them to set up a development environment for him. The idea was to use the power of linux containers and enable him to develop fully functional/accessible sites in a private container. Here are the steps we will cover in this article:

  • Install proxmox on a machine with a single public IP address
  • Secure the machine with ufw to only allow connections from a specific IP address space
  • Setup an admin user other than root for the proxmox admin interface
  • Setup proxmox to use the single IP address and the vm bridge (vmbr0) for masquerading
  • Setup two Ubuntu 12.04 containers with private addresses and enable them to access the internet via the bridge
  • Setup Apache on the proxmox host and configure it to do reverse proxy for the two ubuntu containers
  • Setup DNS (for the container instances) to point to the proxmox host and test to make sure the “private” containers are accessible from the Internet
  • Tighten up security on the reverse proxy on the proxmox host
  • Optionally only allow access to the proxy from a specific IP address space

To do all this you need to download the proxmox ISO file and burn it to a CD. Go through the installation of proxmox and set up the “host” with the single public IP address. This is simple enough so I’m not gonna cover it here. Once you have this set up you should be able to point your browser at the IP address (https://aaa.bbb.ccc.ddd:8006). NOTE: I will use aaa.bbb.ccc.ddd as the representation of the publicly available IP throughout.

Next we need to secure access to the host so that only connections from a specific IP address space are allowed. In my case that’s the University network, which I’ll write as eee.fff.0.0/16 throughout (substitute your own range); this step is optional. We need to make sure ufw is installed. We leave ufw allowing incoming connections by default (so we don’t break proxmox’s own services) and then explicitly lock down the sensitive ports so they only answer to the University network:

ufw default allow incoming
ufw allow proto tcp from eee.fff.0.0/16 to any port 8006
ufw deny proto tcp from any to any port 8006
ufw allow proto tcp from eee.fff.0.0/16 to any port 3128
ufw deny proto tcp from any to any port 3128
ufw allow proto tcp from eee.fff.0.0/16 to any port 111
ufw deny proto tcp from any to any port 111
ufw allow proto tcp from eee.fff.0.0/16 to any port 22
ufw deny proto tcp from any to any port 22
ufw enable

Note that I’m assuming your ssh connection to the host comes from the University network (eee.fff.0.0/16). Make adjustments if it doesn’t, otherwise you will lock yourself out. These basic rules plug all the publicly accessible holes (the proxmox web interface, ssh and the other proxmox service ports) and only allow connections from our University network (eee.fff.0.0/16).
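Once those rules are in, it’s worth eyeballing what ufw actually loaded before you log out of your current ssh session:

ufw status verbose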

Setting up users in proxmox is a bit weird. You have to add a regular Unix user to the proxmox host environment first and then add that user to proxmox, giving it permissions and roles. Here I will create a user “myadmin” for our web developer to use.

useradd -m -s /bin/bash -U -G sudo myadmin

This will create an account “myadmin”, join it to the primary group “myadmin”, assign it /bin/bash as its shell and make it part of the group “sudo”, which will allow the user to use the sudo command in the future. Next, on the proxmox web interface, we need to create an admin group called “Admin”. In the proxmox interface we click on the Datacentre in the left pane, go to Groups and click the Create button. Call the group “Admin”. Now go to the Permissions tab in the right pane. We need to create an Administrator Group Permission to assign to our “Admin” group. Click Add Group Permission (right below the tabs in this pane) and fill it in like below:

[Screenshot: Add Group Permission dialog with Path /, Group Admin, Role Administrator]


In this window the path / means the entire Datacentre (including the host and the containers/VMs). You might want to adjust this. The role “Administrator” is a predefined role that is pretty much the same as root. Now that our group “Admin” has the “Administrator” role for the entire Datacentre, we want to make the user “myadmin” (which is only a unix account right now) part of that group, effectively creating another “root” account for our web developer. So back in the Users tab we click Add and create our new user (really just add the Unix user to proxmox):

[Screenshot: Create User dialog adding the Unix (PAM) user myadmin to the Admin group]


Okay, so now test: make sure you can ssh to the host as myadmin, make sure you can sudo to root on the host, and check the web interface to ensure the myadmin account can log in and see all the goodies in the data centre. Otherwise stop and fix.
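If you prefer the command line to the web interface, the same group/role/user wiring can also be done from a root shell on the host with pveum; this is just a sketch using the names from the example above:

# create the Admin group and give it the Administrator role on the whole datacentre (path /)
pveum groupadd Admin -comment "Proxmox administrators"
pveum aclmod / -group Admin -role Administrator
# register the existing Unix (PAM) account with proxmox and drop it into the new group
pveum useradd myadmin@pam
pveum usermod myadmin@pam -group Admin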

At this point login/ssh to the host as root or as myadmin (plus “sudo -i” to become root). We need to modify the networking config in /etc/network/interfaces to set up all the masquerading jazz. Make a backup of your interfaces file first and note the public IP address, netmask and gateway that are in there (I’m going to use aaa.bbb.ccc.ddd as my public address here, and as the private network behind it; any private range will do). Once you have a backup, replace everything in the file with the following:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address  aaa.bbb.ccc.ddd
        netmask  <netmask from your backup>
        gateway  <gateway from your backup>

auto vmbr0
iface vmbr0 inet static
        address
        netmask
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '' -o eth0 -j MASQUERADE
        post-up   iptables -A FORWARD -s '' -o eth0 -j ACCEPT
        post-up   iptables -A FORWARD -d '' -m state --state ESTABLISHED,RELATED -i eth0 -j ACCEPT
        post-down iptables -t nat -D POSTROUTING -s '' -o eth0 -j MASQUERADE
        post-down iptables -D FORWARD -s '' -o eth0 -j ACCEPT
        post-down iptables -D FORWARD -d '' -m state --state ESTABLISHED,RELATED -i eth0 -j ACCEPT

So in the above I’m creating a separate private network ( behind the publicly available IP address aaa.bbb.ccc.ddd and using a few iptables commands to set up masquerading. This is sort of like setting up a home router to share the single public IP address you have at home. Once this is in place, reboot the host and make sure you can log back into https://aaa.bbb.ccc.ddd:8006/ and get the proxmox interface. If you’re good to go, the next step is to spin up two Ubuntu containers (I won’t go into details on this…..lots of docs out there for it). Your OpenVZ container confirmation screen should look something like this:

[Screenshot: OpenVZ container creation confirmation screen]


The only really important thing here is that you set up the networking under the Network tab in Bridged mode and select vmbr0 as your bridge. Once that’s done, ssh back to your host (aaa.bbb.ccc.ddd). Assuming you have two containers, 100 and 101, enter one of them by using the vzctl command:

vzctl enter 100

Once inside the container you need to set up the networking. Again the file here is /etc/network/interfaces (assuming your container is Ubuntu/Debian flavoured). Back up this file first and replace the content with the following:

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

auto eth0
#iface eth0 inet dhcp
iface eth0 inet static
Note here that I’m using google’s name server ( You can use that or substitute your own “real” name servers. Once you reboot the container and enter it again via the host, you should be able to ping just about any real host ( or, say). This gives us a basic NAT running on the host, and you just need to increment the IP address (use instead of in the setup of the second container. At this point you should be able to enter either container and ping something outside.
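A quick way to convince yourself the NAT is working end to end (container IDs and addresses as in the example above):

# on the host: the MASQUERADE rule should be listed
iptables -t nat -L POSTROUTING -n
# then hop into the second container and ping out
vzctl enter 101
ping -c 3
ping -c 3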

The rest of this article describes how to set up a secure reverse proxy using apache on the proxmox host (aaa.bbb.ccc.ddd). This way you can point arbitrary DNS names at aaa.bbb.ccc.ddd and choose (via apache config) which one of your containers will answer the call. You can even get fancy and have multiple hostnames proxied to the same container and do standard name-based virtual hosting inside the container. I will just show the one-to-one proxied connection here. Start by installing apache on the host (apt-get install apache2). First we need to activate the proxy modules. If you don’t have time to finish this entire procedure DO NOT CONTINUE. Literally in the time it takes to install and configure the proxy, script kiddies will hit your site and try to use you as a proxy to attack other sites. DO THE PROXY INSTALL AND CONFIG/SECURING PROCEDURE IN ONE SHOT.

Assuming apache is installed go to http://aaa.bbb.ccc.ddd and ensure you’re getting the apache “hello” screen. Now you can enable the three modules needed by issuing the following:

a2enmod proxy
a2enmod proxy_http
a2enmod headers
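If you want to double-check that the modules really got enabled before going any further, apache can list everything it will load:

apache2ctl -M | grep -E 'proxy|headers'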

Once that’s done you need to make some changes to your proxmox host’s default apache config, which is in /etc/apache2/sites-available/default. For the sake of completeness I’ve included my entire file here (the private IPs match the container addresses from earlier). Compare it to yours and modify accordingly:

# IMPORTANT: YOU NEED THIS -- the proxy setup needs libxml2 loaded
LoadFile /usr/lib/x86_64-linux-gnu/

<VirtualHost *:80>
	ServerAdmin webmaster@localhost

	DocumentRoot /var/www
	<Directory />
		Options FollowSymLinks
		AllowOverride None
	</Directory>
	<Directory /var/www/>
		Options Indexes FollowSymLinks MultiViews
		AllowOverride None
		Order allow,deny
		allow from all
	</Directory>

	ErrorLog ${APACHE_LOG_DIR}/error.log

	CustomLog ${APACHE_LOG_DIR}/access.log combined

	# IMPORTANT: YOU NEED THIS -- reverse proxy only, block all proxying through the default host
	ProxyRequests Off
	# Block all requests
	<Proxy *>
	  Order deny,allow
	  Deny from all
	</Proxy>
</VirtualHost>

# IMPORTANT: YOU NEED THIS -- one block like this per container you want reachable from outside
<VirtualHost *:80>
	ServerName
	RequestHeader set Accept-Encoding ""
	ProxyPreserveHost On
	ProxyPass /
	ProxyPassReverse /
	<Proxy *>
	    Order deny,allow
	    Allow from all
	</Proxy>
</VirtualHost>

<VirtualHost *:80>
	ServerName
	RequestHeader set Accept-Encoding ""
	ProxyPreserveHost On
	ProxyPass /
	ProxyPassReverse /
	<Proxy *>
	    Order deny,allow
	    Allow from all
	</Proxy>
</VirtualHost>

Pay particular attention to parts that have the comment (# IMPORTANT: YOU NEED THIS)……Guess what…..YOU NEED THIS. The first one loads libxml2, which is needed. The second block makes sure you are in reverse proxy mode (not forward proxy) and makes sure the main apache instance can’t be used for proxying. The third and fourth blocks enable the reverse proxy for a particular virtual host name. Now we need to reload apache on our proxmox host and do some testing. Reload apache with (service apache2 reload) and for sanity’s sake change the index.html file in both containers (/var/www/index.html) to reflect hosta and hostb. I’ve basically just added the words hosta and hostb to the html file. Register and as A records in your DNS and point them at the IP address of the proxmox host (aaa.bbb.ccc.ddd).
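While you wait for DNS, you can already test the proxy from a machine on the allowed network by faking the Host header (hostnames and IP as above):

curl -H "Host:" http://aaa.bbb.ccc.ddd/
curl -H "Host:" http://aaa.bbb.ccc.ddd/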

If everything is working properly you should be able to point your browser at and get the index.html page specific to that container, and the same for hostb. At this point you should be more or less good to go. If you need more containers addressable from the internet, just keep adding this block of code to the proxmox host’s /etc/apache2/sites-available/default, changing the hostname ( here as an example) and incrementing the private IP address:

<VirtualHost *:80>
	ServerName
	RequestHeader set Accept-Encoding ""
	ProxyPreserveHost On
	ProxyPass /
	ProxyPassReverse /
	<Proxy *>
	    Order deny,allow
	    Allow from all
	</Proxy>
</VirtualHost>

Optionally you can now go back and add a couple more ufw rules to only allow access to the proxy from a particular IP address space (in my case the university network eee.fff.0.0/16):

ufw allow proto tcp from eee.fff.0.0/16 to any port 80
ufw deny proto tcp from any to any port 80

Again, with this setup (since we’re preserving the Host header and passing it through the proxy back and forth) you can have hostd, hoste, hostf and so on all point to the same private IP address in the proxy, and then do name-based virtual hosting on the apache instance inside that particular container, just like a standard name-based virtual host setup; there’s a sketch of that right below.
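For example, inside the container that all of those names get proxied to, the container’s own apache would carry one name-based virtual host per site; the DocumentRoot paths here are just illustrative:

<VirtualHost *:80>
	# answers when the proxy forwards requests for hostd
	ServerName
	DocumentRoot /var/www/hostd
</VirtualHost>

<VirtualHost *:80>
	# answers when the proxy forwards requests for hoste
	ServerName
	DocumentRoot /var/www/hoste
</VirtualHost>

Hope this helps…..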

Copying large number of files between two Unix/Linux/OSX Servers

Posted on 14:38, August 15th, 2012 by Many Ayromlou

Here are some quick tips for copying a ton of files between unixy machines really fast. You’re probably thinking “why not use rsync?”…..well, rsync can be miserably slow if your source or destination CPU is underpowered. You can always do an rsync after these commands to make 100% certain that everything checks out, but try using one of these methods for the initial copy:

  • One way of doing it is
    tar -cf - /path/to/dir | ssh user@remote_server 'tar -xpvf - -C /absolute/path/to/remotedir'

    You’ll be prompted for the remote server’s password, or you can point ssh at a private key with the -i switch. This has the side benefit of preserving permissions. An alternate version of this command can also be used to locally move folder structures across mount points while preserving permissions: 

    tar -cf - -C srcdir . | tar -xpf - -C destdir

    or the equivalent:

    cd srcdir ; tar -cf - . | (cd destdir ; tar -xpf -)
  • Another way of doing it with netcat (nc) is
    srv1$ tar -cvf - * | nc -w1 srv2 4321

    followed by

    srv2$ nc -l -p 4321 | tar -xvpf -

    Note that you type the first command on the source machine and the second command on the destination machine.

  • Yet another way of doing it with socat utility is
    host1$ tar -cvf - * | socat stdin tcp4:host2:portnum

    followed by

    host2$ socat tcp4-listen:portnum stdout | tar -xvpf - 

    Note that you type the first command on the source machine and the second command on the destination machine.

Once your favourite process (above) is done you can do a quick rsync to tie up any loose ends.

rsync -avW -e ssh /path/to/dir/ remote_server:/path/to/remotedir

Rsync will now fly through the filesystem, since the vast majority of files on the destination are already there and unchanged. And as always, make sure you understand the commands before you use them…..and keep backups just in case :-).
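If you want rsync to actually compare file contents rather than just size and modification time, add the -c (checksum) flag; it’s slower, but it’s the closest thing to a real verification pass:

rsync -avWc -e ssh /path/to/dir/ remote_server:/path/to/remotedir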

If you try to install Ubuntu 10.10 under Parallels Desktop 6.0 on OSX (at least as of the writing of this article) you’ll soon discover that although your entire installation is done in a high resolution (eg: 1920×1080), as soon as the install is done and you reboot, your VM is stuck at 1024×768. You can install the parallels tools using the menu option and it still won’t help, although it helps with 3D (ie: compiz). Under Gnome’s System/Preferences/Monitors the highest resolution available is 1024×768 :-(. After searching around the net for the past week or so and trying just about every remedy (none of which worked), I was about to give up, then I found the magic command that “makes it go” :-).

I’ve now got Ubuntu 10.10 running with PT/compiz under Parallels 6.0 @ 1920×1080. No problem. Normally if you go inside the ~/.config/ directory (the .config folder under your home directory) you’ll notice that there is no “monitors.xml” file in there. That’s the per-user X config file that gets the ball rolling. Generating the file is really easy. Open a terminal and issue the following command:


This will generate (hopefully) the following output:

Note that 1024×768 is the default. Now if you go inside the ~/.config/ directory you’ll find a “monitors.xml” file (below). Once you’ve got this file you can go to System/Preferences/Monitors and choose the higher resolution options (eg: 1920×1080). The xrandr command should generate the file for you. If it doesn’t (not sure why), here is my version for Parallels 6.0. I think it’s pretty generic so you should be able to cut and paste the content:

<monitors version="1">
      <output name="default">
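If you end up building the file by hand, a complete monitors.xml for a single 1920×1080 “default” output looks roughly like this; the vendor/product/serial values are placeholders and yours may well differ, so treat the Monitors applet (or xrandr) as the authoritative way to generate it:

<monitors version="1">
  <configuration>
      <clone>no</clone>
      <output name="default">
          <!-- vendor/product/serial below are placeholders -->
          <vendor>???</vendor>
          <product>0x0000</product>
          <serial>0x00000000</serial>
          <rotation>normal</rotation>
          <reflect_x>no</reflect_x>
          <reflect_y>no</reflect_y>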

Fixing Plymouth (boot splash) in Ubuntu 10.10 aka. Maverick Meerkat

Posted on 14:10, November 1st, 2010 by Many Ayromlou

If you’ve recently installed Ubuntu 10.10 and have installed Nvidia and/or ATI drivers (or installed ubuntu under emulation) you’ll end up with a (butt) ugly splash screen. In my case under Parallels 6.0 I ended up with a text boot screen that just read “Ubuntu 10.10”……Ughhh. Here is a quick tutorial on how to get a nice splash restored. This procedure also works in 10.04. Keep in mind that I’m doing everything with a 1280×1024 screen size. Your mileage might vary (ie: you might want 1024×768). You’ll need a terminal session open for this:

  • Get the nice splash screen installed
    sudo apt-get install v86d
  • Edit your grub config file and add the following
    sudo vi /etc/default/grub
  • Look for this line:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
  • and replace it with this (note: 1280×1024 screen res…..your mileage might vary):
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset video=uvesafb:mode_option=1280x1024-24,mtrr=3,scroll=ywrap"
  • Still in the same file look for this line:
    #GRUB_GFXMODE=640x480
  • and replace it with this (note: 1280×1024 screen res…..your mileage might vary):
    GRUB_GFXMODE=1280x1024
Your /etc/default/grub file should now have both of those changes in place.

  • Save the file and issue the following command to start editing /etc/initramfs-tools/modules file:
    sudo vi /etc/initramfs-tools/modules
  • The file should be mostly commented out. At the end of the file insert the following line (note: 1280×1024 screen res…..your mileage might vary):
    uvesafb mode_option=1280x1024-24 mtrr=3 scroll=ywrap

Your /etc/initramfs-tools/modules file should now have that line at the very end.

  • Save the file and issue the following command:
    echo FRAMEBUFFER=y | sudo tee /etc/initramfs-tools/conf.d/splash
  • Finally issue the following two commands to update grub and rebuild the initramfs:
    sudo update-grub2
    sudo update-initramfs -u

Reboot and Enjoy :-)
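If the splash still looks wrong after the reboot, a quick way to check whether uvesafb actually grabbed the requested mode is to look at the kernel log and the framebuffer list:

dmesg | grep -i uvesafb
cat /proc/fb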

OpenShot Video Editor 1.0 released…..iMovie for Linux is here.

Posted on 13:24, January 14th, 2010 by Many Ayromlou

For those of you who don’t know, OpenShot Video Editor™ is an open-source program that creates, modifies, and edits video files. OpenShot provides extensive editing and compositing features, and has been designed as a practical tool for working with high-definition video including HDV and AVCHD.

Jonathan Thomas and crew have reached their 1.0 milestone (congrats :-)). The program is rock solid and is running beautifully on my Ubuntu 9.10 installation.

OpenShot’s Features include:

  • Support for many video, audio, and image formats (based on FFmpeg)
  • Gnome integration (drag and drop support)
  • Multiple tracks
  • Clip resizing, trimming, snapping, and cutting
  • Video transitions with real-time previews
  • Compositing, image overlays, watermarks
  • Title templates, title creation
  • SVG friendly, to create and include titles and credits
  • Scrolling motion picture credits
  • Solid color clips (including alpha compositing)
  • Support for Rotoscoping / Image sequences
  • Drag and drop timeline
  • Frame stepping, key-mappings: J,K, and L keys
  • Video encoding (based on FFmpeg)
  • Key Frame animation
  • Digital zooming of video clips
  • Speed changes on clips (slow motion etc)
  • Custom transition lumas and masks
  • Re-sizing of clips (frame size)
  • Audio mixing and editing
  • Presets for key frame animations and layout
  • Ken Burns effect (making video by panning over an image)
  • Digital video effects, including brightness, gamma, hue, greyscale, chroma key (bluescreen/greenscreen), and over 20 other video effects.
There are 4 ways to install OpenShot: LiveDVD, PPA, DEB Installer, and the Build Wizard. Grab it here.

Ubuntu Software Centre "No Install Button" problem…..

Posted on 23:03, November 26th, 2009 by Many Ayromlou

I recently upgraded my netbook using the distribution upgrade and didn’t like the results, so I reinstalled Ubuntu Netbook Remix 9.10 Karmic Koala. Well, I’m sorry, but I don’t think this Koala was ready for release. First there was the issue of where the heck all the beloved Ubuntu tools went. Gone is the Add/Remove software proggy (you have to install it manually); now we have Ubuntu Software Centre. Gone is being able to check off multiple packages for a batch install; USC installs apps one at a time (which takes two mouse clicks per app).

To top it off (at least in UNR 9.10) there is no install button once you click on the arrow beside a package. No, it’s not a problem with root/admin; I tried running it as root and got the same thing, NO INSTALL BUTTON on the install screen. Anyways, it turns out that once you get past the gargantuan Windows XP-like update (125 updates) using the following two commands, the Ubuntu Software Centre magically comes back to life and gives you the “oh so important” install button. Come on Ubuntu…..I thought you were friendly. This Koala Bites HARD!!! :-). So the magic commands are….yeah, you guessed it:

sudo apt-get update
sudo apt-get upgrade

BTW. If at some point the upgrade asks to replace /etc/default/grub say “yes” and go with the newer version. It does not harm the system.

Fix Ctrl-Alt-Backspace problem with Ubuntu 9.10+

Posted on 13:58, October 31st, 2009 by Many Ayromlou

Downloaded and installed 9.10 yesterday and what do you know, someone decided to take away Ctrl-Alt-Backspace — or what I call “Three Finger Salute for Linux”. Whhhaaattt!!!!

How the heck are you supposed to kill and restart X without that…..A coworker suggested Alt-PrintScreen-K, but that just restarts GDM, which is not really useful when X decides to go south. Dammit!!
The reason given on Ubuntu wiki is that “This is due to the fact that DontZap is no longer an option in the X server and has become an option in XKB instead.”
Well, fear not, whoever disabled it also created an easy way to re-enable it. Here is what you do:
  • In Gnome (Ubuntu):

    * Get to the System->Preferences->Keyboard menu.
    * Select the "Layouts" tab and click on the "Layout Options" button.
    * Then select "Key sequence to kill the X server" and enable "Control + Alt + Backspace".
  • In KDE (Kubuntu):
    * Launch "systemsettings"
    * Select "Regional & Language".
    * Select "Keyboard Layout".
    * Click on "Enable keyboard layouts" (in the Layout tab).
    * Select the "Advanced" tab. Then select "Key sequence to kill the X server" and enable "Control + Alt + Backspace".
  • Using Command-Line:
    You can type the following command to enable Zapping immediately.

    setxkbmap -option terminate:ctrl_alt_bksp

    If you're happy with the new behaviour you can add that command to your ~/.xinitrc in order to make the change permanent.
  • Using HAL:
    You can add the following line in /usr/share/hal/fdi/policy/10osvendor/10-x11-input.fdi (inside the <match key="info.capabilities" contains="input.keys"> section):

    <merge key="input.xkb.options" type="string">terminate:ctrl_alt_bksp</merge>

gksudo: Or how this old dog learned new tricks :-)

Posted on 12:19, September 28th, 2009 by Many Ayromlou

Okay, if you already know about gksudo, fine. I just found out about it a little while back when I was trying to run ethereal. You see, under Ubuntu (and a lot of other Linux distros) the root account is locked by default. There is no root login (well, the account exists, but you can’t log into it) unless you specifically modify your system to activate it. That’s fine most of the time, since you can use sudo to accomplish almost anything as the administrator. One thing that doesn’t work properly, though, is graphical applications that need root access. Here is where gksudo comes to the rescue. In the case of ethereal I would issue the following command to get it to prompt me for my password and run the program as root:
gksudo ethereal
So next time you get tempted to open up that root account on your Ubuntu install, don’t; use gksudo and get those GUI apps running as root.
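Another everyday use is editing system config files with a graphical editor, for example:

gksudo gedit /etc/fstab

(Running GUI apps with plain sudo can leave root-owned files behind in your home directory; gksudo avoids that.)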

How to change the default command line text editor in Ubuntu….

Posted on 12:10, September 28th, 2009 by Many Ayromlou

I love Ubuntu, but there is one thing that really bugs the hell out of me. The default configured editor in Ubuntu is nano, a Pico clone. I hate Pico, therefore I hate nano :-). So how would you go about fixing this and changing the default editor to vi (or vim)?

  1. Issue the following command: sudo update-alternatives --config editor
  2. Enter the superuser password when prompted.
  3. At the following screen choose the number beside the editor you want as default or alternatively just press Enter to keep the default the same.
    There are 3 alternatives which provide `editor'.

    Selection Alternative
    1 /usr/bin/vim.tiny
    2 /bin/ed
    *+ 3 /bin/nano

    Press enter to keep the default[*], or type selection number: 1
    Using '/usr/bin/vim.tiny' to provide 'editor'.
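If you’d rather skip the menu entirely, update-alternatives can be told directly which editor to use (the path matches the vim.tiny entry shown above):

sudo update-alternatives --set editor /usr/bin/vim.tiny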

That’s it…..Have fun.

So after yesterday’s rant, I went back and figured out how to install the Cacti monitoring software (OSS, free) onto an Ubuntu 9.04 “Jaunty Jackalope” Desktop installation. This guide uses packages only; no compiling, no Makefiles or anything like that…..You should be able to just follow this and get a fully functioning Cacti installation in about 30 minutes. Here are the steps:

  1. Install Ubuntu 9.04 (“Jaunty Jackalope“) Desktop Edition on your machine
  2. Log in, open a shell window and install the Ubuntu LAMP (Linux/Apache/MySQL/PHP) server stack on your machine
    “sudo tasksel install lamp-server”.
    Note: Make sure you remember the password for the “root” user in the MySQL database; write it down somewhere, we will need it later on.
  3. Get a superuser shell started since it will make for less typing.
    “sudo -i”
    followed by your password. Be careful from now on; you’re ROOT and can literally destroy your system if you issue the wrong command. Follow along by typing the commands in the rest of this document and answering the prompts where appropriate.
  4. Issue:
    “apt-get install rrdtool snmp php5-snmp php5 php5-gd”
    This will get all of the prereqs installed on your system. Answer “yes” when prompted for additional packages. 
  5. Issue:
    “apt-get install cacti-cactid”
    This will get cacti and cacti server installed. Again answer “yes” when prompted for additional packages.
  6. You’ll be presented with a bunch of ANSI screens that ask for information or give you choices to configure “libphp-adodb” package. Follow as per below:
    • Click “Okay” on php.ini update path (screen 1).
    • Choose “Apache 2” from the pull down on next screen (screen 2).
    • Click “Okay” on cacti and spine configuration screen (screen 3).
    • At this point some config scripts will run and you’ll see a bunch of gibberish on the screen. Let it run; don’t touch anything.
    • Click “yes” on the dbconfig-common screen and provide the password from step 2 (above) for the MySQL “root” user (screen 4).
    • Now you’re prompted to choose a password for a new mysql user known as “cacti”. I used the same password as “root” user since my system is single user only. You will need to confirm the password on the next screen (screen 5,6).
    • Almost there……..
  7. Now the hard part is over. Start your browser and point it at http://localhost/cacti — assuming you’re running the browser on the cacti machine — or the appropriate IP address instead of localhost.
  8. Click “Next” on the first screen (might want to read it too).
  9. Select “New Install” on screen 2 and Click “Next”
  10. On the next screen (Path Check screen) make sure everything is found and make 100% sure to select “RRDTool 1.2.x” from the RRDTool utility version pull down. Click “Finish” when you’re done.
  11. You’ll see the login screen. Use Username “admin” and Password “admin” to login. On the next screen you’re forced to change the password for user admin. This is a good thing. Change the password to something complicated and easy to remember (does that exist?). Click “Save”.
  12. Make sure under Configuration Settings/Paths that “Spine Poller file path” is correctly set to “/usr/sbin/spine”, and that it’s found.
  13. Make sure under Configuration Settings/Poller you select “Poller type”, set it to “spine” and click “Save”. You’re done……Please RTFM for more Cacti info (or come back here and you might potentially find another episode of my ramblings). Have Fun!!