Archive for ‘Linux’ Category

Build your own smartphone…..from scratch.

datePosted on 23:13, May 23rd, 2009 by Many Ayromlou

Yep, you can do it now…..The open source hacker community GizmoForYou is shipping a Linux hardware/software kit for building a modular touchscreen smartphone. Using the OMAP35x-based Gumstix Overo Earth single-board computer (SBC), the Flow phone offers numerous customization modules including GPS, 3.5G cellular, Bluetooth, WiFi, and a camera. At around $1300 for the complete kitchen-sink version it's not exactly cheap, but since they offer multiple choices for each component, you can pick and choose what you'd like to have inside your smartphone. Really neat stuff.

For those of you who are not tuned into Gumstix, Overo is their new line of Computer-on-Module devices based on TI's OMAP processor. The Overo Earth comes with the following specs:

Processor: OMAP 3503 Application Processor with ARM Cortex-A8 CPU
Clock(MHz): 600 MHz
Performance: Up to 1200 Dhrystone MIPS
Memory: 256MB RAM, 256MB Flash
Features:

  • Pin-out compatible with future OMAP 35x-based Overo motherboards
  • On-board microSD card slot
  • I2C, PWM lines (6), A/D (6), 1-wire, UART, SPI, camera in, extra MMC lines
  • Headset, microphone, backup battery
  • USB OTG signals, USB HS Host

Connections:

  • (2) 70-pin AVX connectors
  • (1) 27-pin flex ribbon connector

Size: 17mm x 58mm x 4.2mm (0.67 in. x 2.28 in. x 0.16 in.)
Expansion: Expansion boards for Overo motherboards, or custom designs from open specifications.

The core of the Flow phone is the Flow motherboard, which is designed to integrate the separately available Overo Earth module. You can also use the more expensive Overo Water, Air or Fire modules. Other modules attach to the motherboard, including a 3.7-inch 640 x 480 Sharp LS037V7DW01 touchscreen LCD and Flow Sharp LCD module.

Connectivity modules include GPS, USB, and a choice between a plain GSM cellular module and an HSDPA-ready 3.5G/GPS/GSM/GPRS module. (WiFi and Bluetooth are already supplied by the Overo SBC.) Additional options include a 1GB MicroSD card, camera, power supply, battery, and enclosure, with various options available on several of the modules. Flow motherboard features include:

  • 2 x 70 pin connectors for the Overo module from Gumstix
  • 80-pin connector for the GSM, GPS, and 3G modules
  • Stereo amplifiers
  • 2 x speakers and GSM audio amplifier for speakers
  • Microphone and GSM preamp for Mic
  • PIC16LF877A UI Unit (with bootloader preloaded)
  • 2 x general-purpose buttons linked to the UI Unit
  • Orientation sensor
  • Light sensor
  • Level translation for GSM serial connections
  • 3G USB HS power supply
  • Luxeon 1W LED for the camera flash features
  • Dual SIM/MicroSD slot (experimental)
  • Camera connector and camera power management
  • Power management circuits fully controllable by the UI unit
  • Additional pins for connecting external power sources
  • Dimensions — 3.0 x 2.6 inches (76 x 65mm)
  • Operating system — Linux

GizmoForYou does not say much about software, but there are a growing number of Linux development platforms supporting the Overo Earth and OMAP35x platforms, and according to a project member, the group is working on an Android implementation.

How to mount your Journalized HFS+ disk in Linux….

datePosted on 20:41, May 23rd, 2009 by Many Ayromlou

This is something that people who deal with OS X and Linux come across every day. Yes, you can format your USB stick or removable HD as FAT32, but FAT32 does not support files larger than 4GB, which can cause problems. So how do you solve this…..Easy. Attach the journaled HFS+ disk to your Mac and start up Disk Utility. Inside Disk Utility find the disk in question and click on the partition(s) while holding down the "ALT" (Option) key. Keep holding the key down, go to the File menu and choose "Disable Journaling" (Command-J). Eject the disk, move it over to your Linux machine and hook it up. Linux can now read and write to the disk. Once you're done, move the disk back to the Apple machine, select it in Disk Utility and click the "Enable Journaling" button. Done.
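
If your Linux box does not mount the disk automatically, a minimal manual mount looks something like this (the hfsplus driver ships with most distros; /dev/sdb1 is just an example, check dmesg or fdisk -l for your device):

sudo mkdir -p /mnt/hfsdisk
sudo mount -t hfsplus /dev/sdb1 /mnt/hfsdisk
# ...read and write your files...
sudo umount /mnt/hfsdisk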

This used to be a pain in the butt: lots of manual apt-get lines and config edits to get it to work. We're talking about installing the LAMP stack onto a pre-existing Ubuntu Desktop Edition installation. I used to do this backwards in the old days by installing the Server edition first (with LAMP) and then getting the graphical desktop goodies installed on top of that. That method still works, but I found out that installing the LAMP stack on a Desktop edition is a simple one-command affair. As of the 7.04 release, the Ubuntu base system includes Tasksel. You can install LAMP using tasksel:
sudo tasksel install lamp-server
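
A couple of optional sanity checks afterwards (tasksel's --list-tasks flag is standard; the netstat/grep check is just my habit, with the default Ubuntu process names):

tasksel --list-tasks                            # lamp-server should show up in this list
sudo netstat -tlnp | grep -E 'apache2|mysqld'   # Apache and MySQL should be listening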

Writing Moblin (and Ubuntu) USB images using dd in OSX

datePosted on 15:52, May 20th, 2009 by Many Ayromlou

I came across this problem this morning while writing the newly downloaded Moblin USB image file. The concept is straightforward: plug a 1GB+ USB stick into a functioning Linux or Windows box, make sure the stick is not mounted, and use dd to write the disk image to the stick. Under OS X, however, the instructions for unmounting are slightly different, so here are the quick steps:

  1. Download the desired .img file
  2. Open a Terminal (under Utilities)
  3. Run diskutil list to get the current list of devices
  4. Insert your flash media
  5. Run diskutil list again and determine the device node assigned to your flash media (e.g. /dev/disk2)
  6. Run diskutil unmountDisk /dev/diskN (replace N with the disk number from the last command; in the previous example, N would be 2)
  7. Execute sudo dd if=/path/to/downloaded.img of=/dev/diskN bs=1m (replace /path/to/downloaded.img with the path to the downloaded image file and N with the disk number from step 6)
  8. Run diskutil eject /dev/diskN and remove your flash media when the command completes
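
Put together, a typical session looks like this (assuming the stick showed up as /dev/disk2 and the image landed in ~/Downloads; both the device and the filename are just examples):

diskutil list
diskutil unmountDisk /dev/disk2
sudo dd if=~/Downloads/moblin-netbook.img of=/dev/disk2 bs=1m
diskutil eject /dev/disk2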

That should do it…..

Moblin OS rocks…..

datePosted on 21:49, May 19th, 2009 by Many Ayromlou

Heard of Moblin yet….Intel’s foray into designing a Linux distro. Moblin is an open source project focused on building a Linux-based platform optimized for the next generation of mobile devices including Netbooks, Mobile Internet Devices, and In-vehicle infotainment systems. I came across the promo video below and I have to say I’m impressed. I’m downloading the beta image file right now to give it a try on my brand new Aspire One D150. More to come soon…..

I ran into this a couple of weeks ago and it's been driving me bonkers. I finally figured out what's wrong. I was just trying to get my feet wet with the Sun Grid Engine and figured I'd follow their instruction page, try out the example shell script and submit it using the "qsub" command. I was doing this on the frontend machine, which had been configured properly as a ROCKS cluster frontend. This was not working and the error I kept getting was "Unable to run job: denied: host "name_of_computer" is no submit host. Exiting."

After googling around for a couple of days I found the answer (at least the answer in my case). Issuing the following command solved my problem:

qconf -as frontend-name

Apparently the SGE roll does not set up the frontend node as a "submit host" during install. After running the above command everything seems to work properly. Now I can do "qstat -f" and "qsub".
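
To double check, something like the following works (qconf/qsub/qstat are standard SGE commands; simple.sh is just the example script from the SGE docs):

qconf -ss        # show the configured submit hosts; the frontend should now be listed
qsub simple.sh   # resubmit the example job
qstat -f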

Okay, so I've been playing around with Openfiler for the past couple of months. We're trying to set up a student home-directory NAS device, with a mirror machine that would take over if our primary dies. Our machines are hand-built 13TB NAS servers using 16 x 1TB Seagate disks and a 16-channel SATA2 RAID controller from 3ware. There are several problems that one needs to overcome in this type of setup, so I will try to cover it bit by bit as I finish confirming it at work. As I said, we're using a Super Micro case and motherboard (dual quad-core Xeon) and we've stuffed a 16-channel 3ware 9650 controller in there. The first issue we had was with hardware: some screwy new firmware on the controller was not working nicely with our 16 x 1TB Seagate drives. We downgraded the firmware and got the machine to POST. Then we created a (roughly) 14TB container in RAID-6 mode (16 drives, less 2). We further divided up the space into a 20GB boot partition (using the 3ware BIOS) and a giant (roughly) 13TB partition that will hold our student data. The 20GB partition will later on hold our swap space and non-essential (frequently updated) folders under /var (lock, log, etc.).

We have two physically separate machines that are exact copies of each other hardware-wise. The plan initially was to use DRBD and the Heartbeat service to create a high-availability NAS cluster, but since we are trying to authenticate (for SMB) against our Windows system, we could not get that configuration working (and frankly I still don't trust DRBD, as good as it is). So we decided to create two USB stick images: one for the master and another for the slave. The master will be a machine enrolled into our Active Directory domain and the slave will be a passive (private) rsync server. The master USB image is configured with all the AD stuff and two interfaces: one interface serves the NAS and the other runs rsync against our slave/rsync server. When/if the master fails (ie: motherboard failure) beyond recognition, we simply plug the master USB stick into our slave machine and reboot it. Since the machines are exact copies of one another, the (old) slave will now be master, and once the (old) master is fixed, it will become the new slave/rsync server. Real simple.
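
The rsync side of this is nothing fancy. A minimal sketch of the master-to-slave copy (the hostname and paths are just placeholders matching our layout; in practice this runs out of cron on the master over the private interface) would be:

rsync -avH --delete /mnt/bigvg/studentvol/ root@slave-nas:/mnt/bigvg/studentvol/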

So here is Chapter one – How do you get Openfiler 2.3 to boot off a USB stick:

Before you start you’ll need the following:

  1. Four 2GB+ USB sticks of the same brand and size.
  2. Openfiler 2.3 install CD
  3. A non-Openfiler rescue disc (I used an Ubuntu LiveCD) to fix (reinstall) GRUB on the USB stick.

Insert your USB stick and boot from the Openfiler 2.3 installation CD. At the boot prompt, type expert (for text mode type expert text; I used graphical mode). Manually configure your partitions. I just had one 2GB partition (ext2) on /. I used ext2 since it has no journal and won't constantly write to the USB stick. No swap partition at this point. After the install I noticed that somewhere between 600 and 700MB was used for the system, so you might be able to use about 200-300MB for swap if really needed (however, I doubt the need for a swap partition, as USB storage is really slow). The installer will breeze through to the end. Note that it is really slow; it took more than an hour on my config. Reboot at the end and get the OF2.3 CD booting again in rescue mode by typing "linux rescue" at the prompt. Once you're at the prompt, mount the USB stick manually (fdisk -l might help, as it will print out info about all the disks). My USB stick was /dev/sdc, hence the commands below:

mount /dev/sdc1 /mnt/source
chroot /mnt/source

Now you've got the partition mounted and your shell chrooted to the root of the USB stick. Next we copy the initrd on the USB stick into a temporary directory (on the stick) and uncompress it so we can modify it. You need to do this so that GRUB can initialize the boot RAM disk off the USB stick (ie: it makes the OF installation bootable from USB).

cp /boot/initrd-2.X.X.img /tmp/initrd.gz
gunzip /tmp/initrd.gz
mkdir /tmp/a
cd /tmp/a
cpio -i < /tmp/initrd

At this point we need to edit the "init" file (a text file containing the kernel module listings that are required during boot). I used vi to do this; I'm not sure if there is another editor available in rescue mode. Find the line containing "insmod /lib/sd_mod.ko" and insert the following snippet under it:

insmod /lib/sr_mod.ko
insmod /lib/ehci-hcd.ko
insmod /lib/uhci-hcd.ko
sleep 5
insmod /lib/usb-storage.ko
sleep 8

Save the file and run the following commands to physically copy the appropriate kernel modules into the temp directory and repack the initrd:

cd /lib/modules/insert-kernel-folder-here-or-just-use-TAB-key/kernel/drivers
cp usb/storage/usb-storage.ko /tmp/a/lib
cp usb/host/ehci-hcd.ko /tmp/a/lib
cp usb/host/uhci-hcd.ko /tmp/a/lib
cp scsi/sr_mod.ko /tmp/a/lib
cd /tmp/a
find . | cpio -c -o | gzip -9 > /boot/usbinitrd.img

IMPORTANT – Now adjust the GRUB config (/boot/grub/grub.conf) to point at the new initrd filename. You should also repeat this whole procedure on kernel upgrades (but then again, never touch a working system ;)).
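
A sketch of what the grub.conf entry ends up looking like (the title, kernel version and root= argument are placeholders; keep whatever your existing entry already has and only point the initrd line at the new image):

title Openfiler 2.3 (USB)
        root (hd0,0)
        kernel /boot/vmlinuz-2.X.X ro root=/dev/sda1
        initrd /boot/usbinitrd.img
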
Reboot.

More than likely it's a no-go, since the installer did not install GRUB properly. Now take your Ubuntu (or other favourite rescue) CD out and boot from it. Don't use the OF2.3 CD in rescue mode…..IT DOES NOT WORK. Once booted, mount the USB stick on the system and use the following commands to re-install GRUB:

mount /dev/sdc1 /mnt/source
grub-install --root-directory=/mnt/source /dev/sdc

Reboot and you should be good to go (you will get a couple of errors during boot about modules already being loaded…..ignore them). At some point you do want to move some of those auxiliary directories (/tmp, /var/log, /var/lock and others) and the swap space off the stick and onto the 20GB portion of the RAID-6 we prepped earlier on. Below is the fdisk -l listing of that "logical disk" (/dev/sdb in our system):

Disk /dev/sdb: 21.4 GB, 21474835968 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         609     4891761   83  Linux
/dev/sdb2             610         621       96390   83  Linux
/dev/sdb3             622         671      401625   83  Linux
/dev/sdb4             672        2610   15575017+    5  Extended
/dev/sdb5             672         673      16033+   83  Linux
/dev/sdb6             674        2610    15558921   82  Linux swap / Solaris

Here is a breakdown of what goes where in /etc/fstab (/dev/sdb6 is obviously swap, which was prepared with the "mkswap" command):

tmpfs /tmp tmpfs defaults,noatime 0 0
tmpfs /var/tmp tmpfs defaults,noatime 0 0
/dev/sdb1 /var/log ext2 defaults 1 1
/dev/sdb2 /var/run ext2 defaults 1 1
/dev/sdb3 /var/cache ext2 defaults 1 1
/dev/sdb5 /var/lock ext2 defaults 1 1
/dev/sdb6 swap swap defaults 0 0

You need to make the above changes to your USB stick's /etc/fstab, but before rebooting you need to use the "cp -a" command to copy all the folders from their current locations on the USB stick to the above partitions (by mounting the partitions temporarily, one at a time), just to make sure no process goes crazy if it can't find the lock directory (or cache, run, etc.).
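
A rough sketch for one of them (repeat for each partition; device names follow the table above and the mount point is just a scratch directory):

mkdir -p /mnt/tmpdisk
mount /dev/sdb1 /mnt/tmpdisk     # /dev/sdb1 will become /var/log
cp -a /var/log/* /mnt/tmpdisk/
umount /mnt/tmpdisk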

Next we want to make four copies of this stick. You can use a Mac or Windows box (using rawrite) or, better yet, Linux. It's important that the stick you're copying is not booted. Use the Ubuntu/whatever CD you used earlier and boot it into rescue mode. Go to the command line and use the "dd" command to create three more copies of the stick you just prepped.
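
Something along these lines, assuming the source stick came up as /dev/sdb and the blank target as /dev/sdc (double-check with fdisk -l first; dd will happily overwrite the wrong disk):

dd if=/dev/sdb of=/dev/sdc bs=1M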

Two copies (one for safe keeping) will become your Master USB sticks to boot the machine in Master mode (as described earlier in this article). The other two copies (one for safe keeping) will become your Slave sticks.

NOTES:

These notes have nothing to do with the installation. I’m just putting them down here for safe keeping. Only use these if you’re in trouble.

– If you want to create a "Home Share" and you don't get the "Make Home Share" button in the interface, something has gone wrong with one of the XML config files. No worries: find and edit the file /opt/openfiler/etc/homespath.xml. Inside, it will look something like this:
<?xml version="1.0"?>
<homespath value="/mnt/bigvg/studentvol/studenthome/"/>

This is where the problem is. The PHP code that drives the sharing interface thinks that there already is a "homes" directory defined, but you know that's not the case. Since only one homes entry is allowed, the web interface will not give you the option to make your new share the "Home Share". To fix this, we need to empty out the value of homespath (remove what's inside the quotes). Once that's done the file will look like this:
<?xml version="1.0"?>
<homespath value=""/>

Save this file, go back to the Shares tab in the web interface and you will now get a "Make Home Share" button again.

– If you have upgraded to a Windows 2008 R2 (Win2k8 r2) AD domain and you’re getting authentication errors when accessing your openfiler shares (although everything was working fine under R1) like the ones below:
/var/log/messages shows:

Nov 16 08:42:02 openfiler winbindd[3316]: [2009/11/16 08:42:02, 0] rpc_client/cli_pipe.c:rpc_api_pipe(789)
Nov 16 08:42:02 openfiler winbindd[3316]: rpc_api_pipe: Remote machine dc.domain.tld pipe \NETLOGON fnum 0x4005 returned critical error. Error was NT_STATUS_PIPE_DISCONNECTED

and
/var/log/samba/winbind.log shows:

[2009/11/16 08:43:12, 1] winbindd/winbindd_util.c:trustdom_recv(269)
Could not receive trustdoms

then your problem (more than likely) is the version of Samba that comes with Openfiler 2.3. You need to upgrade to 3.4.5. Run "conary updateall" or do a "System Update" from the interface, let it update everything and reboot your machine. Once your machine is back up, leave the AD domain and rejoin it, and everything should be fine.
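
From the command line the leave/rejoin boils down to something like this (standard Samba "net ads" commands; use an account that is allowed to join machines to the domain):

net ads leave -U Administrator
net ads join -U Administrator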

– If you're having problems accessing a Samba share you just created on your brand new Openfiler, you might want to check the following. Let's say you have a Volume Group called "bigvg" and a volume inside that called "studentvol" where you have a share called "test". If you're having problems accessing the share by just using something like smb://openfiler-servername/test, you might want to try connecting to the following instead:
smb://openfiler-servername/bigvg.studentvol.test
This is because by default Openfiler tries to be smart and adds the volume group and volume name in front of the sharename you give it. If you have a small installation this can be a pain. The easy way to fix this is to use the "Override SMB/Rsync share name:" field under the "Shares/Edit share" screen. I tend to use the same sharename I initially used (ie: "test" in this case), just to keep it simple. The only thing to remember here is to make sure you don't override with a duplicate name…..that's gonna blow up real good.

– Couple of useful commands for Samba troubleshooting…..
To see a list of shares on your Openfiler server (note that the Unix command will give you those long sharenames):
Unix: smbclient -L OpenfilerServername -U domainloginid
Win: net view \\OpenfilerServername
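
A few more generic Samba/winbind checks that come in handy here (standard Samba tools, nothing Openfiler-specific):

testparm -s      # sanity-check smb.conf
wbinfo -t        # verify the trust secret with the domain controller
wbinfo -u        # list domain users via winbind
wbinfo -g        # list domain groups via winbind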

– There is another issue with this master/slave setup, and that is UID/GID synchronization for Samba. This comes into play since we're rsyncing our files from master to slave, which also transfers their respective UIDs/GIDs to the slave machine. If the master fails, our procedure is to turn it off and reboot the slave using the master's USB stick. This works, but all those rsync'ed UIDs/GIDs will not match when the slave machine is booted using the master's USB stick (the Samba voodoo that translates Windows SIDs to Linux UIDs/GIDs is kinda random)…..UNLESS YOU DO THE FOLLOWING (taken from the Samba How-To):

The idmap_rid facility is a new tool that, unlike native winbind, creates a predictable mapping of MS Windows SIDs to UNIX UIDs and GIDs. The key benefit of this method of implementing the Samba IDMAP facility is that it eliminates the need to store the IDMAP data in a central place. The downside is that it can be used only within a single ADS domain and is not compatible with trusted domain implementations.

This alternate method of SID to UID/GID mapping can be achieved using the idmap_rid plug-in. This plug-in uses the RID of the user SID to derive the UID and GID by adding the RID to a base value specified. This utility requires that the parameter “allow trusted domains = No” be specified, as it is not compatible with multiple domain environments. The idmap uid and idmap gid ranges must be specified.

The idmap_rid facility can be used both for NT4/Samba-style domains and Active Directory. To use this with an NT4 domain, do not include the realm parameter; additionally, the method used to join the domain uses the net rpc join process.

An example smb.conf file for an ADS domain environment is shown below:
# Global parameters
[global]
workgroup = KPAK
netbios name = BIGJOE
realm = CORP.KPAK.COM
server string = Office Server
security = ADS
allow trusted domains = No
idmap backend = idmap_rid:KPAK=500-100000000
idmap uid = 500-100000000
idmap gid = 500-100000000
template shell = /bin/bash
winbind use default domain = Yes
winbind enum users = No
winbind enum groups = No
winbind nested groups = Yes
printer admin = "Domain Admins"
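
As a rough worked example of the mapping (assuming idmap_rid's default base RID of 0): with the range starting at 500 as above, the built-in Administrator account (RID 500) maps to UID 500 + 500 = 1000, and Domain Users (RID 513) maps to GID 513 + 500 = 1013, which is exactly what the getent output further down shows.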

In a large domain with many users it is imperative to disable enumeration of users and groups. For example, at a site that has 22,000 users in Active Directory the winbind-based user and group resolution is unavailable for nearly 12 minutes following first startup of winbind. Disabling enumeration resulted in instantaneous response. The disabling of user and group enumeration means that it will not be possible to list users or groups using the getent passwd and getent group commands. It will be possible to perform the lookup for individual users, as shown in the following procedure.

The use of this tool requires configuration of NSS as per the native use of winbind. Edit the /etc/nsswitch.conf so it has the following parameters:
...
passwd: files winbind
shadow: files winbind
group: files winbind
...
hosts: files wins
...

The following procedure can use the idmap_rid facility:

1. Create or install an smb.conf file with the above configuration.
2. Edit the /etc/nsswitch.conf file as shown above.
3. Execute:
root# net ads join -UAdministrator%password
Using short domain name -- KPAK
Joined 'BIGJOE' to realm 'CORP.KPAK.COM'

An invalid or failed join can be detected by executing:
root# net ads testjoin
BIGJOE$@'s password:
[2004/11/05 16:53:03, 0] utils/net_ads.c:ads_startup(186)
ads_connect: No results returned
Join to domain is not valid

The specific error message may differ from the above because it depends on the type of failure that may have occurred. Increase the log level to 10, repeat the test, and then examine the log files produced to identify the nature of the failure.
4. Start the nmbd, winbind, and smbd daemons in the order shown.
5. Validate the operation of this configuration by executing:
root# getent passwd administrator
administrator:x:1000:1013:Administrator:/home/BE/administrator:/bin/bash

Please note that the updated version of Samba that gets installed after you do "conary updateall" (see above) has an option for this under the "Advanced" tab of the Accounts section.

If you're using Ubuntu and have recently upgraded your iPod Touch or iPhone to the 2.x firmware, you might be interested in this detailed tutorial. It basically outlines how you can set up syncing under Ubuntu with your 2.x device. The guide assumes that you have jailbroken your iPod/iPhone. There is also a nice section for older 1.x devices.

Linux Server-in-a-Plug is here…..only $100

datePosted on 18:40, February 24th, 2009 by Many Ayromlou

Marvell Semiconductor is now shipping their SheevaPlug Linux machines: little tiny Linux boxes the size of a plug-in power adapter. The SheevaPlug draws about 5 watts of power, comes with Linux, and boasts completely open hardware and software designs.

At $100 the platform is available in single quantities, and is priced within reach of students, hobbyists, and tinkerers. This looks like the perfect embedded platform for all sorts of stuff: think home automation, security monitoring, ultra-low-powered file servers, ad-hoc mini clusters, not to mention robots and such…..there is no end to it.

Its hardware design is completely open — everything from schematics to Gerber files will be available on Marvell's website — and ARM ports of several popular Linux distributions are already running, and included. More importantly, Marvell has committed to doing everything it can to ensure the best Linux support for the SheevaPlug going forward.

The $100 SheevaPlug development platform and Plug Computer designs are built around the Marvell 88F6000, or “Kirkwood” SoC, which was introduced last year. The Plug Computer is based on the high-end 88F6281 version of the Kirkwood, with a Sheeva CPU core clocked to 1.2GHz. The Sheeva core combines elements of Marvell’s earlier Feroceon and XScale architectures, both of which implemented ARM Ltd.’s ARMv5 architecture, similar to ARM Ltd.’s own “ARM9” cores.

The SheevaPlug Plug Computer is further equipped with 512MB of DRAM and 512MB of flash. The tiny embedded PC also includes gigabit Ethernet and USB 2.0 ports. One early product based on the design is listed as measuring 4.0 x 2.5 x 2.0 inches. Plugging directly into a standard wall socket, the Plug Computer draws less than five watts under normal operation, compared to 25-100 watts for a PC being used as a home server, claims Marvell.

Early supporters of the SheevaPlug Plug Computer design include the following companies:

  • Cloud Engines Pogoplug — The Pogoplug enables remote viewing of external storage devices via a web browser. The device connects to an external hard drive or memory stick via USB, and to a router via gigabit Ethernet, says Cloud Engines. The 4.0 x 2.5 x 2.0-inch device plugs directly into a wall socket, and enables remote uploading of multimedia, including access from an Apple iPhone. Regularly $100, it is now available for pre-order at a special price of $80, says the company.
  • Ctera Networks CloudPlug — This Plug Computer device converts any USB drive into a NAS device, and provides secure offsite backup, says Ctera. The CloudPlug is aimed primarily at service provider OEMs that want to offer online backup services to consumers and small businesses. Equipped with gigabit Ethernet and USB 2.0 ports, the device offers features including automatic and secure online backup, and data snapshot restore, says the company.
  • Axentra HipServ — Axentra has ported its home media server application to the SheevaPlug platform, providing applications for storing, managing, sharing, viewing, or listening to digital media content remotely over the web or across a home network, says the company. HipServ for SheevaPlug is said to enable connection to third-party services such as online backup and photo print apps, as well as social networking sites like Facebook and Flickr. Recently upgraded to HipServ 2.0, the software is built on Red Hat Linux Enterprise, and is said to support UPnP-AV, DLNA, WMC, and iTunes media standards.
  • Eyecon Technologies Eyecon — This “media companion” application enables remote mobile users, including iPhone users, to discover content from sources including the Internet, DVRs, PCs, and NAS devices. The Eyecon software can then direct the media files to any connected device in the home, says the company.

Fun, fun, fun…..

Sugar interface on a USB stick…..

datePosted on 23:25, February 18th, 2009 by Many Ayromlou

Thanks to Sugar Labs you can now have your Ubuntu 8.10 or Fedora 10 Linux distribution with Sugar on it…..Shweet :-). Yep, you heard right: go here and grab your OLPC XO-inspired 1GB USB stick image and boot all those old PCs into Sugar. According to Walter Bender (the creator of Sugar), a new version dubbed Sucrose 0.84 is on its way soon. The complete article (including an interview with Mr. Bender) is over at XConomy.
