Archive for ‘Unix’ Category

Tunnel to locally running mysql server using ssh

datePosted on 12:35, June 17th, 2008 by Many Ayromlou

Running and administering mysql can sometimes be a hassle, especially if you’re running a semi-secure environment. This usually means that your mysql server will not accept connections from the outside and only localhost connections are allowed. There is a quick way of getting around this if you’re stuck somewhere and really need to use that graphical admin/browser tool to get to your DB server. All you really need to do is forward port 3306 on your local machine to port 3306 on the DB server through an ssh tunnel. Here is the ssh command you need to issue to start things up:
ssh -L 3306:localhost:3306 user@dbserver.example.com
(replace user@dbserver.example.com with your own login and DB server hostname)
Once you supply the password for the ssh session you’re in business, the encrypted tunnel is up and running. All you need now is to point the Mysql Administrator graphical tool at host 127.0.0.1 and port 3306 like the picture below. The one thing you want to make sure you get right is the Server Hostname: DO NOT use localhost. The tools you’re using automatically assume a local socket connection to the DB when you use “localhost” as the Server Hostname, so use 127.0.0.1 instead. Another thing is that all checks mysql administrator does locally on the server files will not work (ie: the interface will report that the server is down since it can’t find the local socket), but all user/schema manipulation works fine since it is network based.

If you have the mysql daemon installed on your local machine (the machine you initiated ssh from) you need to change the local port to something other than 3306 and the command will look something like this:
ssh -L 7777:localhost:3306 user@dbserver.example.com
In this case I’m using local port 7777 which means I also have to tell mysql administrator to connect through port 7777. You get the idea……
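If you set this tunnel up often, the forward can also live in your ~/.ssh/config instead of on the command line. A sketch of such an entry (the host, user and alias names here are placeholders, not from my actual setup):

```text
# Hypothetical ~/.ssh/config entry -- adjust names to your setup
Host dbtunnel
    HostName dbserver.example.com
    User myuser
    LocalForward 3306 localhost:3306
```

After that, a plain ssh dbtunnel brings the tunnel up for you.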

Twitter from Unix/Linux/OSX command line

datePosted on 11:33, June 12th, 2008 by Many Ayromlou

Yep, you can. Here is the recipe:

1) You need to install “curl” for your OS. OSX comes with it by default which is nice. Most unices out there also have it installed or have it available for download (Ubuntu, Debian users can use “sudo apt-get install curl” to install).
2) Edit a text file using your favourite editor and add the following line in there:
curl --basic --user "youruserid:yourpassword" --data-ascii "status=`echo $@|tr ' ' '+'`" "http://twitter.com/statuses/update.xml" -o /dev/null
3) Make sure you replace youruserid and yourpassword with appropriate strings.
4) Save the file as something like twitter.sh and make it executable by issuing this command:
chmod 700 ./twitter.sh
5) Twitter away by using the following command line:
./twitter.sh "Put your twit in here and press Enter"
6) Done.

Have fun commandline twittering :-).
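The only non-obvious part of that one-liner is the backtick expression, which joins all the script’s arguments and swaps spaces for + signs before curl sends them as the status parameter. Pulled out on its own it behaves like this (the script name is just an example):

```shell
#!/bin/sh
# The same transformation the one-liner's backticks perform:
# glue the script's arguments together and turn spaces into '+'
# signs, which is what the status parameter expects.
status="status=`echo $@ | tr ' ' '+'`"
echo "$status"
```

Running it as ./encode.sh Hello command line would print status=Hello+command+line.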

Some Unix/Linux Coolness…..

datePosted on 17:53, June 11th, 2008 by Many Ayromlou

I think every admin must do something stupid at least once….right? Well my brain fart happened during a system upgrade (another story I’ll be ranting about later). I made backups of all the files I thought were important (/home, /etc, /var/lib/mysql and other userdata we had on the system) and installed Ubuntu 8.04 on the server. Well, of course the second person who walks in to report problems asks me about his personal crontab……DOOOHHHHH!!!! Yeah, I forgot to back that sucker up. Now, the lucky part of all this is that I just deleted the old directories on that partition, I did not format it. So once I realized that, I figured why not just search for it. I mean, I knew something about the file, so why shouldn’t I be able to just search the raw disk for a specific string I knew existed in the crontab file. Well guess what, you can, and it works like a charm….here is how:

grep --binary-files=text -10 "DO NOT EDIT THIS FILE" /dev/sda9 >/tmp/output

This command was issued on an ext3 partition and found the portion of the file I was looking for in about 20 minutes (the partition is about 450GB). The Unix utils are marvelous: a single grep command (above) looks for the string “DO NOT EDIT THIS FILE” (which I knew for a fact was in my deleted file) and outputs 10 lines of text above and below each matching line into a temporary file. Now that’s power kids, don’t try this on your Winblows machine :-).
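You can convince yourself this works without sacrificing a spare partition: any binary file can stand in for the raw device. A minimal sketch (the image file and marker text are invented for the demo):

```shell
#!/bin/sh
# Fake a raw partition: binary junk with one known line buried in it.
img=$(mktemp)
head -c 1024 /dev/urandom > "$img"
printf '\nDO NOT EDIT THIS FILE - edit the master and reinstall.\n' >> "$img"
head -c 1024 /dev/urandom >> "$img"

# Same trick as on the real partition: force grep to treat the
# binary data as text and dump the match plus one line of context
# either side into a file.
grep --binary-files=text -1 "DO NOT EDIT THIS FILE" "$img" > /tmp/output
rm -f "$img"
```

On a real recovery you would point it at /dev/sdaN instead of $img and use a bigger context window like -10.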

Hardy Heron is out…..

datePosted on 14:34, April 24th, 2008 by Many Ayromlou

Heha…..Ubuntu’s newest release 8.04 LTS (aka Hardy Heron) is out and ready for your consumption. This release is major in that it’s LTS. For those of you who don’t know, LTS versions of Ubuntu are supported for 3 years for the desktop version and 5 years for the server version. ALL FREE….so what are you waiting for…..head over to Ubuntu Land for more info or alternatively just go to the download page.

Twitter….the cool way….

datePosted on 18:51, March 7th, 2008 by Many Ayromlou

To be honest, I’ve had a twitter account for a while, but since I need a browser (or phone) to get access to it and twit, I hadn’t used it. But that’s about to change (maybe), since I found out how you can twit from command line. Yep, twit away from any UNIX, Linux, OSX (and Windows) Command prompt. Here is how:

1) First find the program CURL for your intended platform. It comes built into OSX and most Linux distros and there is a port for windows as well (use google).
2) Setup your twitter account.
3) Use this command when you want to twit:
curl -u yourusername:yourpassword -d status="Your Message Here" http://twitter.com/statuses/update.xml
Now one thing to remember is that the username and password get added to your shell history, so if you’re on a public machine (or a friend’s) you might want to clear the history file (ie: use history -c in bash to clear the command history).

Okay so this all started with our users not being able to share files on our webserver. We use SSH only for upload/download and interactive access (ie: no ftp). Through trial and error we found out that the default umask (under OSX Server) for sftp uploaded files is 0033 (ie: rwxr--r--) and for directories is 0022 (ie: rwxr-xr-x). This creates a problem when one user uploads a file and another user downloads/modifies and tries to re-upload it: they simply can’t, because the group permissions are wrong.

If we were using ftp (which we are not), there are some solutions on the net that let you modify the startup parameters for the ftp server so that the default umask for all files is 0013, which would allow a group of people to share/overwrite each other’s files. But we are using ssh only.

So we came up with two other solutions: a shared upload account and/or a cron job that would modify the group permissions on the website directory to allow group sharing. We went with the second solution and that’s where I ran into so many problems that I decided to create this post. You see, normally Unix users know that spaces (and strange characters) in filenames are a no-no. Well, that’s not true for Windows and Mac users; they use spaces and other odd characters in their filenames/folders all the time.

I started writing what I thought was a simple “for loop” script to go through the website folder and change the group permissions. Of course on the first try things didn’t work nicely because of spaces, so I started compensating for that and came up with:
for i in `find /Path/to/www -type d -print0 |xargs -0 -n 1`
This kinda worked, but the for loop would still split the lines when it hit spaces in filenames. I tried to mess around with it and gave up. After RTFMing a bit more I tried:
for i in `find /Path/to/www -type d -exec echo \"{}\" \;`
The thinking behind this was that the exec would echo the filenames quoted and it should work….well it didn’t, the for loop still split the input lines at spaces.

Finally, after a late-night RTFM session (and lots of cursing), I think I’ve found the ultimate file handling loop statement:
find /Path/to/www -type d ! -perm -g=wx -print0 | while IFS= read -rd $'\0' filename
Okay so this version uses “while” rather than “for” but it works like a charm and chews through spaces and all other kinds of weird chars, and creates an output stream that’s ready to be used by your choice of commands (chmod in my case).

After trimming and optimizing the script a bit, here is the final product:
# The following find will search for
# all files under /Path/to/www, that
# are NOT symlinks, and do NOT have
# group write permission. The list is
# "\0" separated and the while portion
# will loop around this character and
# ignore everything else in the path.
find /Path/to/www ! -type l ! -perm -g=w -print0 | while IFS= read -rd $'\0' filename
do
# We've found a directory with no group
# write permission, so fix it.
if [ -d "$filename" ]; then
chmod g+rwx "$filename"
# echo Directory changed
stat -l "$filename"
fi
# We've found a file with no group
# write permission, so fix it.
if [ -f "$filename" ]; then
chmod g+rw "$filename"
# echo File changed
stat -l "$filename"
fi
done

Hopefully you’ll find this code (or portions of it) useful for your own day-to-day hack-and-slash solutions to annoying problems. Let me know if you come up with an even better solution :-)
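For what it’s worth, the same fix can also be sketched without any loop at all, letting find hand the filenames straight to chmod; since find passes each path as its own argument, spaces never get a chance to split anything. A self-contained sketch (the paths are invented; point the finds at your own web root instead):

```shell
#!/bin/sh
# Throwaway test tree with the kind of names that break for-loops.
root=$(mktemp -d)
mkdir "$root/a dir with spaces"
touch "$root/a dir with spaces/a file with spaces"
chmod 755 "$root/a dir with spaces"
chmod 644 "$root/a dir with spaces/a file with spaces"

# No loop: find passes each matching path to chmod as a separate
# argument, so whitespace in names is never an issue.
find "$root" -type d ! -perm -g=w -exec chmod g+rwx {} +
find "$root" -type f ! -perm -g=w -exec chmod g+rw {} +

# This should now print nothing: everything has group write.
find "$root" ! -type l ! -perm -g=w
```

The while/read version above is still handy when you need to run several commands per file; for a single chmod, -exec ... + is hard to beat.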

screen…it’s not just for nerds anymore.

datePosted on 20:26, October 7th, 2007 by Many Ayromlou

So after hearing from people at work how great the “screen” command was (yeah, welcome to gnuland boys and girls), I decided to do a short tutorial on screen. This way I can stop telling them to RTFM and instead tell them to RTFB (Read The Blog). Anyways, what is “screen” first of all? From the pages of Wikipedia:

GNU Screen is a free terminal multiplexer developed by the GNU Project. It allows a user to access multiple separate terminal sessions inside a single terminal window or remote terminal session. It is useful for dealing with multiple programs from the command line, and for separating programs from the shell that started the program. GNU Screen can be thought of as a text version of graphical window managers, or as a way of putting virtual terminals into any login session. It is a wrapper that allows multiple text programs to run at the same time, and provides features that allow the user to use the programs within a single interface productively.

Think of screen as a Virtual Machine (I know it’s not, but bear with me). Once you run the command, the ‘virtual machine’ takes over and allows you to create multiple interactive command line sessions. In each of those sessions you can run commands that are either interactive (menu based) or serialized. Once you’re done you can disconnect the session (keeping in mind that the session is actually alive and running, including all the programs that were spawned inside it), go to another computer and ‘restore’ the session with all the programs still running. By far one of the coolest things about screen is that it effectively nohups your commands for free: just disconnect the session and reconnect to it later. So without any further ado, here is screen:

Obviously you need to run it, so the first step is to type screen at the command line. When you do that you get a new shell window and the adventure starts. Remember that pretty much all screen commands start with Ctrl-a, usually followed by a character (ie: you press the Ctrl and a keys together, let go, and then press the character).

So now you have a new shell, run a command (ie: pine, vi or something). Okay so now we can simulate you leaving your machine and detaching your session.

– To Detach : Ctrl-a d (this will detach the session but your command is still running inside that screen’s shell….you’ll see)
– To Reattach : screen -r (you should get the session back with whatever command you were running in it).

So now you’ve got the very basics of screen. Detaching allows you to run commands, leave them halfway, detach and go somewhere else and use Re-attach to restore the session.

Now, how about multiple sessions. Yeah you can do that too, one screen process with multiple sessions inside it.

– Use screen -r to reattach to your process (If you haven’t done so already). Note that your program is still running (say vi). If you now want to run lynx for example you can use the Ctrl-a c command to create another session (c for create). So now you have two sessions inside your “screen virtual machine”.
– Use Ctrl-a n and Ctrl-a p to flip between sessions (n for next and p for previous). You can also create more screens with Ctrl-a c. Lets create 2-3 more sessions.
– Use Ctrl-a followed by a number between 0-9 to switch directly to that numbered session (sessions are numbered in the order they were created).
– Now use Ctrl-a d to detach from the session, logoff (don’t reboot, that will kill the screen process) and log back in. Now reattach to the process using screen -r. Note that all your sessions are still there (you can check using Ctrl-a n and Ctrl-a p to cycle through the sessions).

One last thing before I take away the training wheels, to kill your screen process (and all sessions running inside it) use Ctrl-a Ctrl-\.

Okay, so here is a small list of the many screen options and commands:

Ctrl-a " : gives you a full screen list of all your sessions and you can scroll down to the one you want to switch to and press Enter (remember, to get " you have to use Shift-', and ESC gets you out of the list).
Ctrl-a A : (that’s a shift-a) allows you to give a meaningful name to your session window.
Ctrl-a k : allows you to kill your current session (not all sessions spawned inside a screen process, just the current session).
Ctrl-a S : will split your current session screen in half. It is easy to confuse Ctrl-a S, which uses a capital ‘S’ with Ctrl-a s, which uses a lower case ‘s’. The upper case command causes screen to be vertically split (that is, with one region on top of the other), while the lower case command causes the parent terminal to freeze (Scroll Lock). To unfreeze the parent terminal, use the Ctrl-a q command.
Ctrl-a Tab : will jump between the regions in a split session. Keep in mind that the new region will have nothing in it until you designate another open session to pop in there using Ctrl-a p and/or Ctrl-a n, which will cycle the next or previous session into the new split region.
Ctrl-a X : (that’s a shift-x) will close a region (ie: split region goes back to full).
Ctrl-a + : will enlarge the current region (and shrink the other).
Ctrl-a – : will shrink the current region (and enlarge the other).
Ctrl-a M : (that’s a shift-m) allows you to monitor the current window for output. I use the MSN command line client pebrot occasionally, and always set its window to notify me when something happens (ie: a join message).
Ctrl-a _ : does the same thing as above, but the opposite way around: it monitors the current window for 15 seconds of silence, which then triggers a notification in xterm’s status area. So when your compile finishes, you will be told so even if you’re in another session.
Ctrl-a [ : will place you in copy mode. Use this when you need to copy some text from one session to another. Do Ctrl-a [ in the source session to enter copy mode (you can exit copy mode using ESC). Move around using cursor keys to the beginning of where you want to start copying and press Spacebar to mark the beginning. Now move to the end and press Spacebar again to mark the end of your copy block. You can now switch to another session, move to where you want to paste the block and press Ctrl-a ] to paste what was put in the buffer.

Here are a couple more useful startup screen commands:

screen -ls : will list all the screen processes running under your userid (yes you can run multiple screen processes with multiple sessions inside each).
screen -r screenname : restores a specific screen process.
screen -R : will try to reattach if there is a detached process, if not it will start a new process.
screen -D -RR : this is the “I want control now” command. It will detach already attached clients and attach to the first session listed.

As usual screen is controlled via .screenrc file for configuration parameters (there is a system wide file in /etc/screenrc and the personal one in your home directory, under ~/.screenrc). You can add the following commands in your personal .screenrc to make life a bit simpler:

#kill startup message
startup_message off
# define a bigger scrollback, default is 100 lines
defscrollback 1024
# An alternative hardstatus to display a bar at the bottom listing the
# windownames and highlighting the current windowname in blue. (This is only
# enabled if there is no hardstatus setting for your terminal)
hardstatus on
hardstatus alwayslastline
#hardstatus string "%{.bW}%-w%{.rW}%n %t%{-}%+w %=%{..G} %H %{..Y} %m/%d %C%a "
#hardstatus string "%{= mK}%-Lw%{= KW}%50>%n%f* %t%{= mK}%+Lw%< %{= kG}%-=%D %d %M %Y %c:%s%{-}"
#hardstatus string "%{= kG}%-Lw%{= kW}%50> %n%f* %t%{= kG}%+Lw%< %{= kG}%-=%c:%s%{-}"
#hardstatus string "%{= kG}[ %{G}%H %{g}][%= %{= kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B} %d/%m %{W}%c %{g}]"
hardstatus string "%{gk}[ %{G}%H %{g}][%= %{wk}%?%-Lw%?%{=b kR}(%{W}%n*%f %t%?(%u)%?%{=b kR})%{= kw}%?%+Lw%?%?%= %{g}][%{Y}%l%{g}]%{=b C}[ %m/%d %c ]%{W}"

As usual there is a lot more to screen, so once you’ve got the basics nailed, take a peek at the man pages for more goodies and don’t forget…..Command line is your friend :-).

QNX RTOS is now Open Source….

datePosted on 12:39, September 16th, 2007 by Many Ayromlou

Here is some fantastic news for all you embedded developers. QNX OS is now open source. For those who don’t know QNX is a commercial POSIX-compliant Unix-like real-time operating system, aimed primarily at the embedded systems market.

As a microkernel-based OS, QNX is based on the idea of running most of the OS in the form of a number of small tasks, known as servers. This differs from the more traditional monolithic kernel, in which the operating system is a single very large program composed of a huge number of “parts” with special abilities. In the case of QNX, the use of a microkernel allows users (developers) to turn off any functionality they do not require without having to change the OS itself; instead, those servers are simply not run.

The system is quite small, fitting in a minimal fashion on a single floppy, and is considered to be both very fast and fairly “complete.”

QNX Neutrino (2001) has been ported to a number of platforms and now runs on practically any modern CPU that is used in the embedded market. This includes the x86 family, MIPS, PowerPC, SH-4 and the closely related family of ARM, StrongARM and xScale CPUs.

And for a bit of history: one of the earliest uses of QNX was on the UNISYS ICON computer (not embedded), and that’s where my exposure to QNX started, when I found one of these machines in the dumpster at Ryerson and rescued it. She’s my baby and I’m still trying to figure out how to boot her up (10 years and going) :-).

Couple of cool remote ssh commands for your UNIX arsenal

datePosted on 18:41, August 4th, 2007 by Many Ayromlou

Here is an easy way to copy an entire directory tree from one Unix machine to another, while retaining the permissions and ownership, using ssh as the middle man. Assuming that you want to copy everything under source_directory to destination_directory on another machine, here is the command you would issue on the source machine (first cd to the directory containing source_directory):
tar -cf - source_directory/ | ssh user@remotehost "cd /somedir/destination_directory ; tar -xvlpf -"
or if you want to copy everything from the remote server’s source directory to the local machine’s destination directory:
ssh user@remotehost "(cd /somedir/source_directory ; tar -cf - .)" | (cd /somedir/destination_directory ; tar -xvlf -)
Here is another similar command that allows you to backup a HD partition to another host via ssh:
dd bs=1M if=/dev/sdb | gzip | ssh user@remotehost "cd /destination/directory ; dd of=sdb.gz"
(In all three commands, user@remotehost is a placeholder for your login on the other machine.)
I have not bothered dissecting these commands, since I assume you are familiar with Unix and Shell commands. Please note that these commands will most likely not work in OSX, unless you’re working on datafiles only (ie: html files, txt files). Program files under OSX could potentially get corrupted if copied via the first command.
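The tar-through-a-pipe pattern is easy to try locally before pointing it at ssh, since ssh is only the transport in the middle. A small self-contained sketch (directory names invented for the example):

```shell
#!/bin/sh
# Local stand-ins for the two machines; over the network you'd put
# "ssh user@remotehost" in the middle of the pipeline instead.
work=$(mktemp -d)
mkdir -p "$work/src/source_directory/subdir" "$work/dst"
echo "hello" > "$work/src/source_directory/subdir/file.txt"
chmod 640 "$work/src/source_directory/subdir/file.txt"

# One tar streams the tree to stdout, the other unpacks it on the
# far side; -p preserves the permission bits on extraction.
(cd "$work/src" && tar -cf - source_directory/) | (cd "$work/dst" && tar -xpf -)

ls -l "$work/dst/source_directory/subdir"
```

The 640 mode on file.txt survives the round trip, which is exactly the property that makes this pattern worth using over plain scp for whole trees.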

One last command which I came across the other day…..If you’re ever in need of a stop watch just use your shell to measure time. Issue the following command and wait a bit. Now interrupt it via Ctrl-C and it will show you how much time has passed. NOTE: Nothing happens when you issue the command, only when you stop it via ctrl-c.

time cat
Have Fun….