Archive for ‘Unix’ Category

Proxmox VPS for web development recipe….

datePosted on 17:17, March 10th, 2014 by Many Ayromlou

A little while ago our web developer asked me to look into proxmox containers and how we could take advantage of it to setup a development environment for him. The idea was to use the power of linux containers and enable him to develop fully functional/accessible sites in a private container. Here are the steps we will cover in this article:

  • Install proxmox on a machine with a single public IP address
  • Secure the machine with ufw to only allow connections from a specific IP address space
  • Setup an admin user other than root for the proxmox admin interface
  • Setup proxmox to use the single IP address and the vmbridge for masquerading
  • Setup two Ubuntu 12.04 containers with private addresses and enable them to access the internet via the bridge
  • Setup Apache on the proxmox host and configure it as a reverse proxy for the two Ubuntu containers
  • Setup DNS (for the container instances) to point to the proxmox host and test to make sure the “private” containers are accessible from the Internet
  • Tighten up security on the reverse proxy on the proxmox host
  • Optionally only allow access to the proxy from specific IP address space

To do all this you need to download the proxmox ISO file and burn it to a CD. Go through the installation of proxmox and set up the “host” with the single public IP address. This is simple enough so I’m not gonna cover it here. Once you have this setup you should be able to point your browser at the IP address (https://aaa.bbb.ccc.ddd:8006). NOTE: I will use aaa.bbb.ccc.ddd as the representation of the publicly available IP throughout.

Next we need to secure access to the host to only allow connections from a specific IP address space. In my case that’s the University network (I’ll use aaa.bbb.0.0/16 as a stand-in for it below); this step is optional. We need to make sure ufw is installed, that it allows incoming connections by default, and then block everything except access from the University network:

ufw default allow incoming
ufw allow proto tcp from aaa.bbb.0.0/16 to any port 8006
ufw deny proto tcp from any to any port 8006
ufw allow proto tcp from aaa.bbb.0.0/16 to any port 3128
ufw deny proto tcp from any to any port 3128
ufw allow proto tcp from aaa.bbb.0.0/16 to any port 111
ufw deny proto tcp from any to any port 111
ufw allow proto tcp from aaa.bbb.0.0/16 to any port 22
ufw deny proto tcp from any to any port 22
ufw enable

Note that I’m assuming your ssh connection to the host comes from the University network (aaa.bbb.0.0/16 in my examples). Make adjustments to this if it’s not, otherwise you might lock yourself out. These basic rules plug all the publicly accessible holes and only allow connections from our University network.
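Once the rules are in, it’s worth double-checking what ufw actually ended up with before you log out (rule order matters: the allow-from-your-network rule has to sit above the matching deny for each port):

```shell
ufw status verbose
```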

Setting up users in proxmox is a bit weird. You have to add a regular Unix user to the proxmox host environment and then add the user to proxmox later and give it permissions and roles. Here I will use a user “myadmin” to create something for our web developer to use.

useradd -m -s /bin/bash -U -G sudo myadmin

This will create an account “myadmin”, join it to the primary group “myadmin”, assign it /bin/bash as its shell and make it part of the group “sudo”, which will allow the user to use the sudo command in the future. Next, on the proxmox web interface, we need to create an Admin group called “Admin”. In the proxmox interface we click on the Datacentre in the left pane, go to Groups and click the Create button. Call the group “Admin”. Now go to the Permissions tab in the right pane. We need to create an Administrator Group Permission to assign to our “Admin” group. Click Add Group Permission (right below the tabs in this pane) and fill it in like below:

[Screenshot: Add Group Permission dialog — Path “/”, Group “Admin”, Role “Administrator”]

In this window the path “/” means the entire Datacentre (including the host and the containers/VMs). You might want to adjust this. The Role “Administrator” is a predefined role that is pretty much the same as root. Now that our group “Admin” has the “Administrator” role for the entire Datacentre, we want to make the user “myadmin” (which is just a unix account right now) part of that group, effectively creating another “root” account for our web developer. So back in the Users tab we click Add and create our new user (really just add the Unix user to proxmox):

[Screenshot: Create User dialog for “myadmin”]

Okay, so now test and make sure you can access the host via ssh as the user myadmin, make sure you can sudo to root on the host, and check the web interface to ensure the myadmin account can login and see all the goodies in the datacentre. Otherwise stop and fix.
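Incidentally, the same group/permission/user wiring can be done from the host shell with Proxmox’s pveum tool instead of the GUI (a sketch; flags can differ between Proxmox versions, so check pveum’s man page):

```shell
# Create the group, grant it the Administrator role on /, then add the PAM user
pveum groupadd Admin -comment "Administrators"
pveum aclmod / -group Admin -role Administrator
pveum useradd myadmin@pam
pveum usermod myadmin@pam -group Admin
```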

At this point login/ssh to the host as root or myadmin (plus “sudo -i” to become root). We need to modify the networking config in /etc/network/interfaces to setup all the masquerading jazz. Make a backup of your interfaces file first and note the public IP address that is in there (I’m gonna use aaa.bbb.ccc.ddd as my public address here). Once you have a backup, replace everything in the file with the following:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address  aaa.bbb.ccc.ddd
        netmask         # netmask and gateway as assigned by your provider
        gateway  aaa.bbb.ccc.1

auto vmbr0
iface vmbr0 inet static
        address
        netmask
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '' -o eth0 -j MASQUERADE
        post-up   iptables -A FORWARD -s '' -o eth0 -j ACCEPT
        post-up   iptables -A FORWARD -d '' -m state --state ESTABLISHED,RELATED -i eth0 -j ACCEPT
        post-down iptables -t nat -D POSTROUTING -s '' -o eth0 -j MASQUERADE
        post-down iptables -D FORWARD -s '' -o eth0 -j ACCEPT
        post-down iptables -D FORWARD -d '' -m state --state ESTABLISHED,RELATED -i eth0 -j ACCEPT

So in the above I’m creating a separate private network ( in my example) behind the publicly available IP address aaa.bbb.ccc.ddd, and using some iptables commands to set up masquerading. This is sorta like setting up a home router to share the single publicly available IP address you have at home. Once this is in place, reboot the host and make sure you can log back into https://aaa.bbb.ccc.ddd:8006/ and get the proxmox interface. If you’re good to go, as the next step spin up two Ubuntu containers (I won’t go into details on this…..lots of docs out there for it). Your OpenVZ container creation confirmation screen should look something like this:

[Screenshot: OpenVZ container creation confirmation screen]

The only really important thing here is that you set up the networking under the Network tab as Bridged mode and select vmbr0 as your bridge. Once that’s done, ssh back to your host (aaa.bbb.ccc.ddd). Assuming you have two containers, 100 and 101, enter one of them by using the vzctl command:

vzctl enter 100

Once inside the container you need to set up the networking. Again the file here is /etc/network/interfaces (assuming your container is Ubuntu/Debian flavoured). Back up this file first and replace the content with the following:

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

auto eth0
#iface eth0 inet dhcp
iface eth0 inet static
        address
        netmask
        gateway
        dns-nameservers

Note here that I’m using google’s name server ( You can use that or substitute your own “real” name servers. Once you reboot the container and enter it again via the host, you should be able to ping just about any real host out on the Internet. This gives us a basic NAT running on the host; for the second container you just increment the IP address ( in the above case). At this point you should be able to enter either container and ping something outside.
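By the way, if you’d rather wire the container NIC to the bridge from the host shell instead of the web UI, vzctl can do that too (container ID 100 and the interface name are examples; check vzctl(8) on your proxmox version):

```shell
# Attach a veth interface named eth0, bridged onto vmbr0, to container 100
vzctl set 100 --netif_add eth0,,,,vmbr0 --save
vzctl restart 100
```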

So the rest of this article describes how to setup a secure reverse proxy using apache on the proxmox host (aaa.bbb.ccc.ddd). This way you can just point arbitrary DNS names at aaa.bbb.ccc.ddd and choose (via apache config) which one of your containers will answer the call. You can even get fancy and have multiple hostnames proxied to the same container and do standard “Name based” virtual hosting inside the container. I will just show the one-to-one proxied connection here. Start by installing apache on the host (apt-get install apache2). First we need to activate the proxy module. If you don’t have time to finish this entire procedure, DO NOT CONTINUE. Literally in the time it takes to install and configure the proxy, script kiddies will hit your site and use you as an open proxy to attack other sites. DO THE PROXY INSTALL AND CONFIG/SECURING PROCEDURE IN ONE SHOT.

Assuming apache is installed go to http://aaa.bbb.ccc.ddd and ensure you’re getting the apache “hello” screen. Now you can enable the three modules needed by issuing the following:

a2enmod proxy
a2enmod proxy_http
a2enmod headers
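Before touching the config, you can confirm the modules actually loaded (this lists compiled-in and shared modules):

```shell
apache2ctl -M | grep -E 'proxy|headers'
```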

Once that’s done you need to make some changes to your proxmox host’s default apache config, which is in /etc/apache2/sites-available/default. For the sake of completeness I’ve included my entire file here. Compare it to yours and modify accordingly:

# IMPORTANT: YOU NEED THIS (loads libxml2)
LoadFile /usr/lib/x86_64-linux-gnu/

<VirtualHost *:80>
	ServerAdmin webmaster@localhost

	DocumentRoot /var/www
	<Directory />
		Options FollowSymLinks
		AllowOverride None
	</Directory>
	<Directory /var/www/>
		Options Indexes FollowSymLinks MultiViews
		AllowOverride None
		Order allow,deny
		allow from all
	</Directory>

	ErrorLog ${APACHE_LOG_DIR}/error.log

	CustomLog ${APACHE_LOG_DIR}/access.log combined

	# IMPORTANT: YOU NEED THIS
	ProxyRequests Off
	# Block all forward-proxy requests through the main host
	<Proxy *>
	  Order deny,allow
	  Deny from all
	</Proxy>
</VirtualHost>

# IMPORTANT: YOU NEED THIS (hostnames/addresses are examples)
<VirtualHost *:80>
	ServerName
	RequestHeader unset Accept-Encoding
	ProxyPreserveHost On
	ProxyPass /
	ProxyPassReverse /
	<Proxy *>
	    Order deny,allow
	    Allow from all
	</Proxy>
</VirtualHost>

# IMPORTANT: YOU NEED THIS (hostnames/addresses are examples)
<VirtualHost *:80>
	ServerName
	RequestHeader unset Accept-Encoding
	ProxyPreserveHost On
	ProxyPass /
	ProxyPassReverse /
	<Proxy *>
	    Order deny,allow
	    Allow from all
	</Proxy>
</VirtualHost>
Pay particular attention to the parts that have the comment (# IMPORTANT: YOU NEED THIS)……Guess what…..YOU NEED THIS. The first one loads libxml2, which is needed. The second block makes sure you are in reverse proxy mode (not forward proxy) and that the main apache instance can’t be used for proxying. The third and fourth blocks enable the reverse proxy for a particular virtual host name. Now we need to reload apache on our proxmox host and do some testing. Reload apache with (service apache2 reload) and for sanity’s sake change the index.html file in both containers (/var/www/index.html) to reflect hosta and hostb. I’ve basically just added the words hosta and hostb to the html file. Register and (or whatever names you use) as “A” records in your DNS and point them at the IP address of the proxmox host (aaa.bbb.ccc.ddd).

If everything is working properly you should be able to point your browser at and get the index.html page specific to that container, and the same for At this point you should be more or less good to go. If you need more containers addressable from the Internet, just keep adding this block of code to the proxmox host’s /etc/apache2/sites-available/default, changing the hostname and incrementing the private IP address:

<VirtualHost *:80>
	ServerName
	RequestHeader unset Accept-Encoding
	ProxyPreserveHost On
	ProxyPass /
	ProxyPassReverse /
	<Proxy *>
	    Order deny,allow
	    Allow from all
	</Proxy>
</VirtualHost>
Optionally you can now go back and add a couple more ufw rules to only allow access from a particular IP address space (in my case the university network, aaa.bbb.0.0/16 in these examples):

ufw allow proto tcp from aaa.bbb.0.0/16 to any port 80
ufw deny proto tcp from any to any port 80

Again, with this setup, since we’re preserving the request header and passing it through the proxy in both directions, you can have hostd, hoste, hostf, etc. all point to the same private IP address in the proxy and do name-based virtual hosting on the apache instance inside that container, just like a standard name-based virtual host setup. Hope this helps…..
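As a sketch of that container-side setup (hostnames and paths are made-up examples): point hostd and hoste at aaa.bbb.ccc.ddd in DNS, proxy both names to the same container, and inside the container let apache pick the site off the preserved Host header:

```apache
# Inside the container (apache 2.2 style; needs NameVirtualHost *:80 enabled)
<VirtualHost *:80>
	ServerName
	DocumentRoot /var/www/hostd
</VirtualHost>

<VirtualHost *:80>
	ServerName
	DocumentRoot /var/www/hoste
</VirtualHost>
```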

Proving the Network is Not the Problem With iperf – Packet Life

datePosted on 15:35, November 24th, 2013 by Many Ayromlou

Proving the Network is Not the Problem With iperf – Packet Life: “When an application fails to perform as expected, the network is often the first thing blamed. I suppose this is because end users typically view the network as the sole limiting factor with regard to throughput, unaware of the intricacies of application, database, and storage performance. For some reason, the burden of proof always seems to fall onto networkers to demonstrate that the network is not at fault before troubleshooting can begin elsewhere. This article demonstrates how to simulate user traffic between two given points on a network and measure the achievable throughput.”


Copying large number of files between two Unix/Linux/OSX Servers

datePosted on 14:38, August 15th, 2012 by Many Ayromlou

Here are some quick tips for copying a ton of files between unixy machines really fast. You’re probably thinking “why not use rsync?”…..well, rsync can be miserably slow if your source or destination CPU is underpowered. You can always do an rsync after these commands to make 100% certain that everything checks out, but try using one of these methods for the initial copy:

  • One way of doing it is
    tar -cf - /path/to/dir | ssh user@remote_server 'tar -xpvf - -C /absolute/path/to/remotedir'

    You’ll be prompted for the remote server’s password, or you can point ssh at a private key with the -i switch. This has the side benefit of preserving permissions. An alternate version of this command can also be used to locally move folder structures across mount points while preserving permissions: 

    tar -cf - -C srcdir . | tar -xpf - -C destdir


    cd srcdir ; tar -cf - . | (cd destdir ; tar -xpf -)
  • Another way of doing it with netcat (nc) is
    srv2$ nc -l -p 4321 | tar -xpvf -

    followed by

    srv1$ tar -cvf - * | nc -w1 srv2 4321

    Note that you start the listener on the destination machine (srv2) first, then run the tar command on the source machine (srv1).

  • Yet another way of doing it with socat utility is
    host2$ socat tcp4-listen:portnum stdout | tar -xvpf -

    followed by

    host1$ tar -cvf - * | socat stdin tcp4:host2:portnum

    Note that you start the listener on the destination machine (host2) first, then run the tar command on the source machine (host1).

Once your favourite process (above) is done you can do a quick rsync to tie up any loose ends.

rsync -avW -e ssh /path/to/dir/ remote_server:/path/to/remotedir

Rsync will now fly through the filesystem since, 99.9% of the time, 99.9% of the files on the destination are already good. And as always, make sure you understand the commands before you use them…..and keep backups just in case :-).
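The local tar-pipe trick is easy to sanity-check in a scratch directory before you trust it with real data (paths here are throwaway examples):

```shell
# Build a small source tree with a non-default permission bit
mkdir -p srcdir/sub destdir
echo "hello" > srcdir/file1
echo "world" > srcdir/sub/file2
chmod 640 srcdir/file1

# The local tar-pipe from above: -p preserves permissions on extract
tar -cf - -C srcdir . | tar -xpf - -C destdir

# Compare the two trees; no output from diff means they match
diff -r srcdir destdir
```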

Ubuntu Software Centre "No Install Button" problem…..

datePosted on 23:03, November 26th, 2009 by Many Ayromlou

I recently upgraded my netbook using the distribution upgrade and didn’t like the results, so I reinstalled Ubuntu Netbook Remix 9.10 Karmic Koala. Well, I’m sorry, but I don’t think this Koala was ready for release. First there was the issue of where the heck all the beloved Ubuntu tools went. Gone is the Add/Remove software program (you have to install it manually); now we have Ubuntu Software Centre. Gone is being able to check off multiple packages for batch install; USC installs apps one at a time (which takes two mouse clicks per app).

To top it off (at least in UNR 9.10) there is no install button once you click on the arrow beside the packages. No, it’s not a problem with root/admin; I tried running it as root and got the same thing, NO INSTALL BUTTON on the install screen. Anyways, it turns out that once you get past the gargantuan Windows XP-like update (125 updates) using the following two commands, the Ubuntu Software Centre magically comes back to life and gives you the “oh so important” install button. Come on Ubuntu…..I thought you were friendly. This Koala bites HARD!!! :-). So the magic commands are….yeah, you guessed it:

sudo apt-get update
sudo apt-get upgrade

BTW, if at some point the upgrade asks to replace /etc/default/grub, say “yes” and go with the newer version. It does not harm the system.

Fix Ctrl-Alt-Backspace problem with Ubuntu 9.10+

datePosted on 13:58, October 31st, 2009 by Many Ayromlou

Downloaded and installed 9.10 yesterday and what do you know, someone decided to take away Ctrl-Alt-Backspace — or what I call “Three Finger Salute for Linux”. Whhhaaattt!!!!

How the heck are you supposed to kill and restart X without that…..A coworker suggested Alt-PrintScreen-K, but that just restarts GDM, not really useful when X decides to go south. Dammit!!
The reason given on Ubuntu wiki is that “This is due to the fact that DontZap is no longer an option in the X server and has become an option in XKB instead.”
Well, fear not, whoever disabled it also created an easy way to re-enable it. Here is what you do:

  • In Gnome (Ubuntu):

    * Get to the System->Preferences->Keyboard menu.
    * Select the "Layouts" tab and click on the "Layout Options" button.
    * Then select "Key sequence to kill the X server" and enable "Control + Alt + Backspace".
  • In KDE (Kubuntu):
    * Launch "systemsettings"
    * Select "Regional & Language".
    * Select "Keyboard Layout".
    * Click on "Enable keyboard layouts" (in the Layout tab).
    * Select the "Advanced" tab. Then select "Key sequence to kill the X server" and enable "Control + Alt + Backspace".
  • Using Command-Line:
    You can type the following command to enable Zapping immediately.

    setxkbmap -option terminate:ctrl_alt_bksp

    If you're happy with the new behaviour you can add that command to your ~/.xinitrc in order to make the change permanent.
  • Using HAL:
    You can add the following line in /usr/share/hal/fdi/policy/10osvendor/10-x11-input.fdi (inside the <match key="info.capabilities" contains="input.keys"> section):

    <merge key="input.xkb.options" type="string">terminate:ctrl_alt_bksp</merge>

It’s been a while since I’ve had the pleasure (read: pain) of working with Sloowaris, but now that we have two 48TB Sun X4540 Thumpers in house, I have to…..Uggghhhh :-). Here are some notes:

  • Remember sudo -i does not work. Use “su -” to get the root environment through ssh (login as regular user).
  • The machine has 6 Controllers with 8 Disks each for a total of 48 disks.
  • To find out all the disks that are available on your system and their labels:
    root # format
    Searching for disks...done


  • To see the status of the zpool run:
    root # zpool status
    pool: pool1
    state: ONLINE
    status: The pool is formatted using an older on-disk format. The pool can
    still be used, but some features are unavailable.
    action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
    pool will no longer be accessible on older software versions.
    scrub: none requested

    NAME STATE READ WRITE CKSUM
    pool1 ONLINE 0 0 0
    raidz1 ONLINE 0 0 0
    c0t3d0 ONLINE 0 0 0
    c1t3d0 ONLINE 0 0 0
    c2t3d0 ONLINE 0 0 0
    c3t3d0 ONLINE 0 0 0
    c4t3d0 ONLINE 0 0 0
    raidz1 ONLINE 0 0 0
    c5t3d0 ONLINE 0 0 0
    c0t7d0 ONLINE 0 0 0
    c1t7d0 ONLINE 0 0 0
    c2t7d0 ONLINE 0 0 0
    c3t7d0 ONLINE 0 0 0
    spares
    c4t7d0 AVAIL
    c5t7d0 AVAIL

    errors: No known data errors

  • Our zpool is at version 10 and the latest is version 15, so we upgrade:
    root # zpool upgrade
    This system is currently running ZFS pool version 15.

    The following pools are out of date, and can be upgraded. After being
    upgraded, these pools will no longer be accessible by older software versions.

    VER POOL
    --- ------------
    10 pool1

    Use 'zpool upgrade -v' for a list of available versions and their associated features.
    root # zpool upgrade -v
    This system is currently running ZFS pool version 15.

    The following versions are supported:

    VER DESCRIPTION
    --- --------------------------------------------------------
    1 Initial ZFS version
    2 Ditto blocks (replicated metadata)
    3 Hot spares and double parity RAID-Z
    4 zpool history
    5 Compression using the gzip algorithm
    6 bootfs pool property
    7 Separate intent log devices
    8 Delegated administration
    9 refquota and refreservation properties
    10 Cache devices
    11 Improved scrub performance
    12 Snapshot properties
    13 snapused property
    14 passthrough-x aclinherit
    15 user/group space accounting
    For more information on a particular version, including supported releases, see:

    Where 'N' is the version number.
    root #
    root # zpool upgrade pool1
    This system is currently running ZFS pool version 15.

    Successfully upgraded 'pool1' from version 10 to version 15

  • zpools are like autonomous raid subsystems that will eventually be added into a pool (which is similar to a LV). There are 3 types of pools raidz (raid-5 like), raidz2 (raid-6 like) and mirror.
  • c0t0d0 and c1t0d0 are kinda special and can’t be included in a zpool…..something about SVM metadb…..blahblahblah. Leave them out.
    root # metadb -i
    flags first blk block count
    a m p luo 16 8192 /dev/dsk/c0t0d0s7
    a p luo 8208 8192 /dev/dsk/c0t0d0s7
    a p luo 16400 8192 /dev/dsk/c0t0d0s7
    a p luo 16 8192 /dev/dsk/c1t0d0s7
    a p luo 8208 8192 /dev/dsk/c1t0d0s7
    a p luo 16400 8192 /dev/dsk/c1t0d0s7
    r - replica does not have device relocation information
    o - replica active prior to last mddb configuration change
    u - replica is up to date
    l - locator for this replica was read successfully
    c - replica's location was in /etc/lvm/
    p - replica's location was patched in kernel
    m - replica is master, this is replica selected as input
    W - replica has device write errors
    a - replica is active, commits are occurring to this replica
    M - replica had problem with master blocks
    D - replica had problem with data blocks
    F - replica had format problems
    S - replica is too small to hold current data base
    R - replica had device read errors
  • The following commands created the extra raidz vdevs needed:
    root # zpool add pool1 raidz1 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c0t1d0
    root # zpool add pool1 raidz1 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0
    root # zpool add pool1 raidz1 c0t2d0 c1t2d0 c2t2d0 c3t2d0 c4t2d0
    root # zpool add pool1 raidz1 c1t4d0 c2t4d0 c3t4d0 c4t4d0 c5t4d0
    root # zpool add pool1 raidz1 c0t5d0 c2t5d0 c3t5d0 c4t5d0 c5t5d0
    root # zpool add pool1 raidz1 c0t6d0 c1t6d0 c3t6d0 c4t6d0 c5t6d0
  • This leaves the following 4 disks to be added as spares:
    root # zpool add pool1 spare c5t2d0 c0t4d0 c1t5d0 c2t6d0
  • Now for the fun part…..finding out what the heck all this did to the system:
    root # zpool status
    pool: pool1
    state: ONLINE
    scrub: none requested

    NAME STATE READ WRITE CKSUM
    pool1 ONLINE 0 0 0
    raidz1 ONLINE 0 0 0
    c0t3d0 ONLINE 0 0 0
    c1t3d0 ONLINE 0 0 0
    c2t3d0 ONLINE 0 0 0
    c3t3d0 ONLINE 0 0 0
    c4t3d0 ONLINE 0 0 0
    raidz1 ONLINE 0 0 0
    c5t3d0 ONLINE 0 0 0
    c0t7d0 ONLINE 0 0 0
    c1t7d0 ONLINE 0 0 0
    c2t7d0 ONLINE 0 0 0
    c3t7d0 ONLINE 0 0 0
    raidz1 ONLINE 0 0 0
    c2t0d0 ONLINE 0 0 0
    c3t0d0 ONLINE 0 0 0
    c4t0d0 ONLINE 0 0 0
    c5t0d0 ONLINE 0 0 0
    c0t1d0 ONLINE 0 0 0
    raidz1 ONLINE 0 0 0
    c1t1d0 ONLINE 0 0 0
    c2t1d0 ONLINE 0 0 0
    c3t1d0 ONLINE 0 0 0
    c4t1d0 ONLINE 0 0 0
    c5t1d0 ONLINE 0 0 0
    raidz1 ONLINE 0 0 0
    c0t2d0 ONLINE 0 0 0
    c1t2d0 ONLINE 0 0 0
    c2t2d0 ONLINE 0 0 0
    c3t2d0 ONLINE 0 0 0
    c4t2d0 ONLINE 0 0 0
    raidz1 ONLINE 0 0 0
    c1t4d0 ONLINE 0 0 0
    c2t4d0 ONLINE 0 0 0
    c3t4d0 ONLINE 0 0 0
    c4t4d0 ONLINE 0 0 0
    c5t4d0 ONLINE 0 0 0
    raidz1 ONLINE 0 0 0
    c0t5d0 ONLINE 0 0 0
    c2t5d0 ONLINE 0 0 0
    c3t5d0 ONLINE 0 0 0
    c4t5d0 ONLINE 0 0 0
    c5t5d0 ONLINE 0 0 0
    raidz1 ONLINE 0 0 0
    c0t6d0 ONLINE 0 0 0
    c1t6d0 ONLINE 0 0 0
    c3t6d0 ONLINE 0 0 0
    c4t6d0 ONLINE 0 0 0
    c5t6d0 ONLINE 0 0 0
    spares
    c4t7d0 AVAIL
    c5t7d0 AVAIL
    c5t2d0 AVAIL
    c0t4d0 AVAIL
    c1t5d0 AVAIL
    c2t6d0 AVAIL

    errors: No known data errors
    root # zpool list
    NAME SIZE USED AVAIL CAP HEALTH ALTROOT
    pool1 36.2T 222K 36.2T 0% ONLINE -
    root # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    pool1 161K 28.5T 28.8K /vol1

  • To create a CX special mount point we do:
    root # zfs create pool1/CX
    root # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    pool1 201K 28.5T 28.8K /vol1
    pool1/CX 33.6K 28.5T 33.6K /vol1/CX
  • When compiling iRODS on the X4540 you might/will get an error like “make: Fatal error in reader: config/………Unexpected end of line seen“. This is caused because, by default, the system uses Sun’s make command (/usr/ccs/bin/make) rather than GNU make, which lives in /usr/sfw/bin/gmake under Solaris 10. To fix this, add /usr/sfw/bin to the front of your $PATH variable, export it, and use gmake instead of make (read the INSTALL.txt file that comes with iRODS to find out how to do the steps manually instead of using the irodssetup command).
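For that last bullet, the PATH tweak looks like this in sh/bash syntax:

```shell
# Prefer the GNU toolchain dir (gmake lives in /usr/sfw/bin on Solaris 10)
export PATH=/usr/sfw/bin:$PATH
# ...then build with gmake instead of make
```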

ASCII Art at its finest…..

datePosted on 12:14, October 19th, 2009 by Many Ayromlou

Not sure if this has already been mentioned somewhere…..It’s pretty old, but I happened to come across it today. It’s a great rendition of everyone’s favorite space opera done by Simon Jansen in ASCII. Telenetification (is that even a word?) by Snore, with improvements by Mike Edwards. Anyways, use the following command, sit back and enjoy…..Star Wars in all its ASCII glory :-)

telnet

If you don’t know how to telnet, click here to see it in your browser.

Title says it all…..head over to TechPosters (kinda slow right now) and snag your favourite cheat sheet/reference card. There is also more of this kind of stuff at other sites as well.

So after yesterday’s rant, I went back and figured out how to install the Cacti monitoring software (OSS, Free) onto a Ubuntu 9.04 “Jaunty Jackalope” Desktop installation. This guide uses packages only, no compiling, no Makefiles or anything like that…..You should be able to just follow this and get a fully functioning Cacti installation in about 30 minutes. Here are the steps:

  1. Install Ubuntu 9.04 (“Jaunty Jackalope“) Desktop Edition on your machine
  2. Login, open a Shell window and install the Ubuntu LAMP (Linux/Apache/MySQL/PHP) server stack on your machine
    “sudo tasksel install lamp-server”.
    Note: Make sure you remember the password for “root” user in mysql Database, write it down somewhere, we will need it later on.
  3. Get a superuser shell started since it will make for less typing.
    “sudo -i”
    followed by your password. Be careful from now on: you’re ROOT and can literally destroy your system if you issue the wrong command. Follow along by typing the commands in the rest of this document and answering the prompts where appropriate.
  4. Issue:
    “apt-get install rrdtool snmp php5-snmp php5 php5-gd”
    This will get all of the prereqs installed on your system. Answer “yes” when prompted for additional packages. 
  5. Issue:
    “apt-get install cacti-cactid”
    This will get cacti and cacti server installed. Again answer “yes” when prompted for additional packages.
  6. You’ll be presented with a bunch of ANSI screens that ask for information or give you choices to configure “libphp-adodb” package. Follow as per below:
    • Click “Okay” on php.ini update path (screen 1).
    • Choose “Apache 2” from the pull down on next screen (screen 2).
    • Click “Okay” on cacti and spine configuration screen (screen 3).
    • At this point some config scripts will run and you’ll see a bunch of gibberish on the screen. Let it run, don’t touch anything.
    • Click “yes” on the dbconfig-common screen and provide the password from step 2. (above) for the mysql “root” user (screen 4).
    • Now you’re prompted to choose a password for a new mysql user known as “cacti”. I used the same password as “root” user since my system is single user only. You will need to confirm the password on the next screen (screen 5,6).
    • Almost there……..
  7. Now the hard part is over. Start your browser and point it at http://localhost/cacti — assuming you’re running the browser on the cacti machine — or the appropriate IP address instead of localhost.
  8. Click “Next” on the first screen (might want to read it too).
  9. Select “New Install” on screen 2 and Click “Next”
  10. On the next screen (Path Check screen) make sure everything is found and make 100% sure to select “RRDTool 1.2.x” from the RRDTool utility version pull down. Click “Finish” when you’re done.
  11. You’ll see the login screen. Use Username “admin” and Password “admin” to login. On the next screen you’re forced to change the password for user admin. This is a good thing. Change the password to something complicated and easy to remember (does that exist?). Click “Save”.
  12. Make sure under Configuration Settings/Paths that “Spine Poller file path” is correctly set to “/usr/sbin/spine”, and its found.
  13. Make sure under Configuration Settings/Poller you select “Poller type” and set it to “spine” and Click “Save”. You’re done……Please RTFM for more Cacti info (or come back here and you might potentially find another episode of my ramblings). Have Fun!!

Duplicate your Ubuntu Installation….

datePosted on 13:55, February 18th, 2009 by Many Ayromlou

As good as Ubuntu (and linux) are in general, once in a while you just get to a point where you need a reinstall. That’s when the realization kicks in that you’ve got far too many packages installed since the initial Ubuntu install. It’s okay, there is a way out. Make sure you have a USB key.

On Ubuntu Workstation (with graphical interface):

Run the Synaptic package manager. Once inside Synaptic, go to the File/Save Markings As menu option and choose a filename and location (USB stick). MAKE SURE YOU ALSO CHECK THE BOX “SAVE FULL STATE, NOT ONLY CHANGES”. This will save a text file that contains every single package installed on your system (through the apt system and all its variants….manual compiles/installs are something else). Now you can go ahead and reinstall the machine and configure your repositories. Once the machine is up and running again, load up Synaptic, go to File/Read Markings, point it at the file you saved on the USB stick and press Apply.

This will start a download process that will set the machine up (as far as installed packages are concerned) just like it used to be. Configurations need to be done manually, but at least you get all your packages back.

This is also super handy if you’re duping identical systems. Remember that you can not do this to upgrade from one version to another. This is strictly for “Restoring” installed software packages from the same version of Ubuntu.

On Ubuntu Server (command line):

First we need to create a list of all the installed APT packages and configurations and save them:
sudo dpkg --get-selections > /tmp/dpkglist.txt
sudo debconf-get-selections > /tmp/debconfsel.txt

Copy the files from /tmp to your USB stick or save them somewhere else.

Now reinstall the OS, copy your backed up debconfsel.txt and dpkglist.txt file to your fresh system’s /tmp directory and execute the following:
sudo dselect update
sudo debconf-set-selections < /tmp/debconfsel.txt
sudo dpkg --set-selections < /tmp/dpkglist.txt
sudo apt-get -y update
sudo apt-get dselect-upgrade

Don’t worry! This method only adds and upgrades packages, it will not remove packages that do not exist in the list.

We also covered the aptoncd program, which basically does the same thing (it’s an extra install). Last but not least, to make a custom Ubuntu install CD/DVD you want to check out our entry on Reconstructor.