Archive for ‘Shell Script’ Category

Synology NAS and those pesky @eaDir folders

Posted on 20:22, March 8th, 2012 by Many Ayromlou

If you’ve enabled MediaServer and/or PhotoStation on your Synology NAS you might have noticed a bunch of “@eaDir” folders inside your data folders. You will not normally see these under samba or appletalk connections; I noticed them because I was trying to rsync from the Synology to an old QNAP NAS I have lying around. Although you can turn these services off from the Control Panel, that does not get rid of these dumb folders. So here is a quick script to clean all the “@eaDir” folders off your Synology disk. NOTE: I’VE INTENTIONALLY NOT USED THE “rm -rf” COMMAND HERE. I DON’T WANT YOU TO DESTROY YOUR NAS SERVER WITH JUST ONE COMMAND. Run the command below and it will “echo” the names of these “@eaDir” folders to the terminal. Once you’re satisfied that it’s working well (no weird filenames/characters/etc.), replace the “echo” with “rm -rf” to actually remove those folders. There is no guarantee that this will work for you. DO NOT USE THIS IF YOU DON’T UNDERSTAND WHAT THE COMMAND DOES. THIS CAN HARM YOUR FILES.

find . -name "@eaDir" -type d -print0 | while IFS= read -rd $'\0' FILENAME; do echo "${FILENAME}"; done

Make sure you login via ssh first and “cd” to where your files are stored. This command starts looking for “@eaDir” folders recursively from the current directory.
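Once the dry run looks right, the destructive version is the same loop with “rm -rf” swapped in for “echo” (again: no guarantees, and there is no undo):

find . -name "@eaDir" -type d -print0 | while IFS= read -rd $'\0' FILENAME; do rm -rf "${FILENAME}"; done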

Basic APT commands

Posted on 16:37, September 22nd, 2008 by Many Ayromlou

Okay, now for a bit of CLI goodness. Here is a quick list of basic apt commands.  Debian and most derivatives (Ubuntu) use these for package maintenance.
# Search package names/descriptions for a keyword
apt-cache search packagename

# Show a package's description, version and dependencies
apt-cache show packagename

# Clean out the local cache of downloaded .deb files
sudo apt-get clean
sudo apt-get autoclean   # only removes obsolete/old packages

# Check for broken dependencies
apt-get check

# Download the source code for a package
apt-get source packagename

# Install the build dependencies for a package
apt-get build-dep packagename

# Update package lists, install/remove packages, upgrade the system
sudo apt-get update
sudo apt-get install packagename
sudo apt-get remove packagename
sudo apt-get upgrade
sudo apt-get dist-upgrade
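For example, a typical find-and-install session (the package name here is just an illustration) looks like this:

apt-cache search vlc          # find the package name
apt-cache show vlc            # read its description
sudo apt-get update           # refresh the package lists first
sudo apt-get install vlc      # then install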

Okay, so this all started with our users not being able to share files on our webserver. We use SSH only for upload/download and interactive access (ie: no ftp). Through trial and error we found out that the default umask (under OSX Server) for sftp-uploaded files is 0033 (ie: rwxr--r--) and for directories is 0022 (ie: rwxr-xr-x). This creates a problem when one user uploads a file and another user downloads/modifies it and tries to re-upload it: they simply can’t, because the group permissions are wrong.

If we were using ftp (which we are not) there are some solutions on the net that let you modify the startup parameters for the ftp server so that the default umask for all files is 0013, which would allow a group of people to share/overwrite each other’s files. But we are using ssh only.
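As a quick illustration of what a 0013 umask buys you (a plain shell session, purely for demonstration):

umask 0013          # mask out group x and other wx at creation time
touch shared.txt    # new files: 0666 & ~0013 = 0664 -> rw-rw-r--
mkdir shared.dir    # new dirs:  0777 & ~0013 = 0764 -> rwxrw-r--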

So we came up with two other solutions: a shared upload account, and/or a cron job that would modify the group permissions on the website directory to allow group sharing. We went with the second solution, and that’s where I ran into so many problems that I decided to create this post. You see, normally Unix users know that spaces (and strange characters) in filenames are a no-no. Well, that’s not true for Windows and Mac users; they use spaces and other odd characters in their filenames/folders all the time.

I started writing what I thought was a simple “for loop” script to go through the website folder and change the group permissions. Of course on the first try things didn’t work nicely because of spaces, so I started compensating for that and came up with:
for i in `find /Path/to/www -type d -print0 |xargs -0 -n 1`
This kinda worked, but the for loop would still split the lines when it hit spaces in filenames. I tried to mess around with it and gave up. After RTFMing a bit more I tried:
for i in `find /Path/to/www -type d -exec echo \"{}\" \;`
The thinking behind this was that the exec would echo the filenames quoted and it should work…. Well, it didn’t; the for loop still split the input lines at spaces.

Finally, after a late-night RTFM session (and lots of cursing), I think I’ve found the ultimate file-handling loop statement:
find /Path/to/www -type d ! -perm -g=wx -print0 | while IFS= read -rd $'\0' filename
Okay, so this version uses “while” rather than “for”, but it works like a charm: it chews through spaces and all other kinds of weird chars and produces an output stream that’s ready to be used by your choice of commands (chmod in my case).

After trimming and optimizing the script a bit, here is the final product:
# The following find will search for
# everything under /Path/to/www that
# is NOT a symlink and does NOT have
# group write permission. The list is
# "\0" separated and the while portion
# will loop around this character and
# ignore everything else in the path.
find /Path/to/www ! -type l ! -perm -g=w -print0 | while IFS= read -rd $'\0' filename
do
  # We've found a directory with no group
  # write permission, so fix it.
  if [ -d "$filename" ]; then
    chmod g+rwx "$filename"
    # echo Directory changed
    stat -l "$filename"
  fi
  # We've found a file with no group
  # write permission, so fix it.
  if [ -f "$filename" ]; then
    chmod g+rw "$filename"
    # echo File changed
    stat -l "$filename"
  fi
done
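Since the whole point was to run this from cron, here is a hypothetical crontab entry (the script path and schedule are placeholders, assuming you’ve saved the loop above as an executable script):

# Fix group permissions on the website directory every 15 minutes
*/15 * * * * /usr/local/bin/fix-group-perms.sh >/dev/null 2>&1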

Hopefully you’ll find this code (or portions of it) useful for your own day-to-day hack-and-slash solutions to annoying problems. Let me know if you come up with an even better solution :-)

Couple of cool remote ssh commands for your UNIX arsenal

Posted on 18:41, August 4th, 2007 by Many Ayromlou

Here is an easy way to copy an entire directory tree from one Unix machine to another, while retaining the permissions and ownership, using ssh as the middle man. Assuming that you want to copy everything under source_directory to destination_directory on another machine, here is the command you would issue on the source machine (first cd to the directory containing source_directory):
tar -cf - source_directory/ | ssh userid@your.destination.machine.com "cd /somedir/destination_directory ; tar -xvlpf -"
or if you want to copy everything from the remote server’s source directory to the local machine’s destination directory:
ssh userid@your.source.machine.com "(cd /somedir/source_directory ; tar -cf - .)" |(cd /somedir/destination_directory ; tar -xvlf -)
Here is another similar command that allows you to backup a HD partition to another host via ssh:
dd bs=1M if=/dev/sdb | gzip | ssh userid@your.destination.machine.com "cd /destination/directory ; dd of=sdb.gz"
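To restore that image later you would reverse the pipeline, something like this (a sketch only; double-check the device name first, since dd will happily overwrite it):

ssh userid@your.destination.machine.com "cat /destination/directory/sdb.gz" | gunzip | dd bs=1M of=/dev/sdb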
I have not bothered dissecting these commands, since I assume you are familiar with Unix and shell commands. Please note that these commands will most likely not work on OSX unless you’re working on datafiles only (ie: html files, txt files). Program files under OSX could potentially get corrupted if copied via the first command, most likely because tar does not preserve their resource forks.

One last command which I came across the other day… If you’re ever in need of a stopwatch, just use your shell to measure time. Issue the following command and wait a bit. Now interrupt it via Ctrl-C and it will show you how much time has passed. NOTE: Nothing happens when you issue the command; the timing only shows up when you stop it via Ctrl-C.

time cat
Have Fun….

Couple of quick shell tips

Posted on 14:54, July 28th, 2007 by Many Ayromlou

Okay, these are bash goodies, so they’ll work in any environment where bash runs. If you’re in a situation where you’re switching between two different directory paths over and over again, here is a quick tip:
cd -
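This flips you back to your previous working directory each time you run it. For example:

cd /etc/apache2
cd /var/log
cd -    # back in /etc/apache2
cd -    # back in /var/log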
Another little annoyance that I’ve gotten around is when you want to edit a system file and you type in the command (ie: vi /etc/this/is/a/really/long/path/config.cfg), only to realize that you forgot to sudo. This used to mean that I would quit vi, curse, recall the command, insert a sudo in front of the vi command and try again… well, here is the quicker way:
sudo !!
This will (re)sudo your last command. And if that’s not enough you can actually narrow the sudo down to the last command that started with a certain string.
sudo !apache
Which will look in the history and (re)sudo the most recent command that starts with the string “apache”.
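For example (assuming apachectl is the command you just tried without enough privileges):

apachectl graceful    # fails: permission denied
sudo !apache          # expands to: sudo apachectl graceful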

Generic Scripts to add Google Analytics code to HTML pages

Posted on 16:42, July 23rd, 2007 by Many Ayromlou

Before I start, this tip is Unix friendly (not just OSX), but requires you to know what shell scripts are and how you create/run them. Additionally you should be familiar with the workings of the “find” command in Unix.

I had lots of trouble getting the Google Analytics code onto my gallery site. The problem is that I use iWeb to create a front end that links to a discrete back end (ie: specific subdirectories generated by Photoshop, iPhoto or Aperture). I found this Automator script earlier, but it seems like every time I run the script on a folder, it only changes .html pages created by iWeb… weird. So after some head scratching and googling, I found the following complementary scripts on the RSVP – Xnews site. There is a certain amount of detail about what the scripts actually do on that site, but I just wanted to extract the meat and add a little garnish (yeah, I made a couple of mistakes that I hope you’ll avoid).

  • First you need a new Analytics account, or if you have one (with an existing profile) you might need to create a new profile for this new site you want to track. The mistake I made was that I had an existing account that tracks nerdlogger.com and I (by mistake) used its analytics ID in the script. Since my gallery site is a different domain, analytics creates a new ID when you add the domain (it actually increments the last digit).
  • Then you need to create a script called insert (or whatever you like) and put this in it (just cut and paste from here):
#!/bin/bash

#INSERT SCRIPT
#Your Google Analytics Code goes below.
googleAnalyticsCode='UA-XXXXXXX-2'
textToInsert="<script src=\"http:\/\/www.google-analytics.com\/urchin.js\" type=\"text\/javascript\"><\/script><script type=\"text\/javascript\">_uacct = \"$googleAnalyticsCode\";urchinTracker();<\/script>"
textToReplace="<\/[Bb][Oo][Dd][Yy]>"
#You need to substitute the path to the top of your webdirectory below.
WebPath='/Volumes/idiskname/Web/Sites'

# this is where the actual work happens
find "$WebPath" -iname '*.html' -exec sed -i .bak -e "/$textToInsert/!s/$textToReplace/$textToInsert&/g" {} \; -print
  • Now create another script called remove (or whatever you like) and put this in it (just cut and paste):
#!/bin/bash

#REMOVE SCRIPT
#Your Google Analytics Code goes below.
googleAnalyticsCode='UA-XXXXXXX-2'
textToRemove="<script src=\"http:\/\/www.google-analytics.com\/urchin.js\" type=\"text\/javascript\"><\/script><script type=\"text\/javascript\">_uacct = \"$googleAnalyticsCode\";urchinTracker();<\/script>"
#You need to substitute the path to the top of your webdirectory below.
WebPath='/Volumes/idiskname/Web/Sites'

# this is where the actual work happens
find "$WebPath" -iname '*.html' -exec sed -i .bak -e "s/$textToRemove//g" {} \; -print
  • Now we’re almost there. Create one last script called delback (or whatever) and put the following in it:
#!/bin/bash

#DELBACK SCRIPT
#You need to substitute the path to the top of your webdirectory below.
find /Volumes/idiskname/Web/Sites -iname '*.bak' -exec rm {} \; -print

At this point you should have three scripts: insert, remove and delback. Use insert to insert the code into all the HTML files under a certain path ($WebPath). This will create .bak files, and once you’ve verified the insert script’s operation you can delete/clean them using the delback script. Use remove to remove the analytics code from your HTML pages (if you decide later that you don’t like Google Analytics or something). Again, this process creates .bak files that can be removed/cleaned using the delback script.
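In other words, the workflow looks something like this (assuming the three scripts are saved in the current directory):

chmod +x insert remove delback
./insert     # inject the tracking snippet, leaves .bak backups behind
./delback    # clean up the backups once you’ve verified the pages
./remove     # later, if you ever want the snippet gone again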

Keep in mind also that if you use iWeb to generate your pages and they are sitting on an OSX server, by default your web addresses get expanded after the browser requests them (ie: my gallery is http://www.rcc.ryerson.ca/~mayromlo but gets expanded and rewritten as http://www.rcc.ryerson.ca:16080/~mayromlo/Site/Welcome.html). So you need to point Google Analytics at the expanded version by editing the profile information after the initial entry and changing the website URL. This last issue is very mac/osx specific.