Thursday, August 21, 2008

How to link C programs to PVM Library

All PVM applications must be linked with the libpvm library to communicate with other entities in the PVM system. The base library (libpvm3.a) is written in C and directly supports C and C++ applications; every C program must be linked with at least this base library. Programs that use group functions must also be linked with libgpvm3.a. Compile the program for each architecture in the host pool and place the executable at a location accessible from all the machines. The application is started by manually running an initiating task, or "master", which spawns the other tasks that interact with one another to solve the problem. For compiling the source program, we use the following compilation options:

gcc -o myprog myprog.c -I$PVM_ROOT/include -L$PVM_ROOT/lib/LINUX -lpvm3 -lgpvm3

(where myprog.c stands for your source file)
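The compile and install steps can be wrapped in a small helper script. This is only a sketch: it prints the commands it would run instead of executing them, so you can check the paths first. The source file name and install prefix are assumptions, and $HOME/pvm3/bin/$PVM_ARCH is the default directory where pvm_spawn() looks for executables.

```shell
#!/bin/sh
# Sketch: print the compile and install commands for a PVM program.
# SRC is a placeholder source file; adjust PVM_ROOT/PVM_ARCH to your setup.
PVM_ROOT=${PVM_ROOT:-/usr/share/pvm3}
PVM_ARCH=${PVM_ARCH:-LINUX}
SRC=${SRC:-myprog.c}
BIN=${SRC%.c}
# Link against the base PVM library and the group library.
CMD="gcc -o $BIN $SRC -I$PVM_ROOT/include -L$PVM_ROOT/lib/$PVM_ARCH -lpvm3 -lgpvm3"
echo "$CMD"
# pvm_spawn() searches $HOME/pvm3/bin/$PVM_ARCH by default.
echo "cp $BIN \$HOME/pvm3/bin/$PVM_ARCH/"
```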

Configuring PVM (Parallel Virtual Machine)

This section will help anyone who wants to configure the PVM daemon.

Before building or running PVM, we have to set some environment variables, listed below, in the $HOME/.bash_profile or $HOME/.bashrc file.

# SET PVM ENVIRONMENTAL VARIABLES
# SET PVM_ROOT
PVM_ROOT=/usr/share/pvm3
# SET PVM_ARCH
PVM_ARCH=LINUX
# SET PATH
PATH=$PATH:$PVM_ROOT/lib:$PVM_ROOT/bin/$PVM_ARCH:$PVM_ROOT/lib/$PVM_ARCH
# SET PVM_DPATH: PVM DAEMON PATH
PVM_DPATH=$PVM_ROOT/lib/pvmd
# ADDING MANPATH FOR HELP
MANPATH=$MANPATH:$PVM_ROOT/man
# SET XPVM_ROOT
XPVM_ROOT=$PVM_ROOT/xpvm
# XPVM EXECUTABLE DIRECTORY ADDED TO SHELL PATH
PATH=$PATH:$XPVM_ROOT/src/$PVM_ARCH
# SETTING TCL & TK LIBRARIES
TCL_LIBRARY=/usr/lib
TK_LIBRARY=/usr/lib
export USERNAME BASH_ENV PATH TCL_LIBRARY TK_LIBRARY MANPATH PVM_ROOT PVM_ARCH PVM_DPATH XPVM_ROOT
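Before moving on, it helps to verify that the variables actually point at real directories. A minimal sketch (the paths come from the profile above; adjust them to your own install):

```shell
#!/bin/sh
# Sketch: check that the PVM directories referenced in the profile exist.
check_dir() {
  # Print "ok" or "MISSING" for a single directory.
  if [ -d "$1" ]; then
    echo "ok: $1"
  else
    echo "MISSING: $1"
  fi
}

PVM_ROOT=${PVM_ROOT:-/usr/share/pvm3}
PVM_ARCH=${PVM_ARCH:-LINUX}
for d in "$PVM_ROOT" "$PVM_ROOT/bin/$PVM_ARCH" "$PVM_ROOT/lib/$PVM_ARCH"; do
  check_dir "$d"
done
```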

After setting the environment variables, build PVM for each host architecture (if different architectures are present in the cluster). Create a .rhosts file on each host, listing all the hosts that are to be used.

To start PVM, run $PVM_ROOT/lib/pvm. This starts the console task, which in turn starts a pvmd if one is not already running. More hosts can be added to your "virtual machine" using the console "add" command. To start the pvmd without starting the console, run $PVM_ROOT/lib/pvmd directly.

A number of hosts can all be started at once by supplying the pvmd with a host file, as in:
$PVM_ROOT/lib/pvmd my_hosts
where "my_hosts" contains the names of the hosts you wish to add, one host per line.
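For example, a minimal my_hosts file could look like this (the host names are placeholders for the machines in your own pool):

```
# my_hosts: one host per line
node1
node2
node3
```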

WINE - For running Windows applications on Linux

The goal of the Wine project is to develop a "translation layer" for Linux and other POSIX compatible operating systems that enables users to run native Microsoft Windows applications on those operating systems. This translation layer is a software package that "emulates" the Microsoft Windows API (Application Programming Interface).

Wine provides alternative DLLs (Dynamic Link Libraries) that are needed to run the applications. These are native software components that, depending on their implementation, can be just as efficient or more efficient than their Windows counterparts. That is why some MS Windows applications run faster on Linux than on Windows.

The Wine development team has made significant progress toward its goal of enabling users to run Windows programs on Linux. One way to measure that progress is to count the number of programs that have been tested. The Wine Application Database currently contains more than 8500 entries. Not all of them work perfectly, but most commonly used Windows applications run quite well, such as the following software packages and games: Microsoft Office 97, 2000, 2003, and XP, Microsoft Outlook, Microsoft Internet Explorer, Microsoft Project, Microsoft Visio, Adobe Photoshop, Quicken, QuickTime, iTunes, Windows Media Player 6.4, Lotus Notes 5.0 and 6.5.1, Silkroad Online 1.x, Half-Life 2 Retail, Half-Life Counter-Strike 1.6, and Battlefield 1942 1.6.

After installing Wine, Windows applications can be installed by placing the CD in the CD drive, opening a shell window, navigating to the CD directory containing the installation executable, and entering "wine setup.exe" (if setup.exe is the installation program).
When executing programs in Wine, the user can choose between the "desktop-in-a-box" mode and mixable windows. Wine supports both DirectX and OpenGL games; support for Direct3D is limited. There is also a Wine API that allows programmers to write software that is source- and binary-compatible with Win32 code.
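A typical session then looks like this (the mount point and program names are placeholders; winecfg is Wine's standard configuration tool):

```shell
cd /media/cdrom                 # wherever the installation CD is mounted
wine setup.exe                  # run the Windows installer
winecfg                         # configure Wine, e.g. the windowing mode
wine notepad                    # launch a program through Wine
```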

Changing the GRUB LOADER screen

To change the screen of the Grub loader....

1. Get an image of your choice. The image should be in 640x480 resolution, and only files with the extension '.xpm.gz' can be used. Either open your desired image in GIMP, make the necessary changes, save it with the extension '.xpm', and then compress it using gzip to get 'YOUR_IMAGE.xpm.gz', or download one of the many splash screens at http://www.schultz-net.dk/grub
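If ImageMagick is installed, the GIMP step can also be done from the shell. This is a sketch: the input filename is a placeholder, and the 14-colour reduction matches what GRUB legacy splash screens expect.

```shell
convert input.jpg -resize 640x480! -colors 14 YOUR_IMAGE.xpm   # force 640x480, 14 colours
gzip YOUR_IMAGE.xpm                                            # produces YOUR_IMAGE.xpm.gz
```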

2.Copy YOUR_IMAGE.xpm.gz to /boot/grub/

3.Go to /boot/grub/ and edit menu.lst

add a line

splashimage=(hd0,4)/boot/grub/YOUR_IMAGE.xpm.gz

where (hd0,4) refers to the disk and partition holding the /boot directory (GRUB counts from zero, so (hd0,4) is the fifth partition of the first disk).
You can find out the right drive by looking at the last part of the 'menu.lst' file.

4.Save the file and reboot the system..

If everything turns out well, you get a new screen with your desired image instead of the boring GRUB loader screen.


ENJOY...................

Building Up a Cluster Computer......

Hardware Requirements
------------------------

Intel Pentium IV 3.2GHz with HT technology - 4
Intel 915G Motherboard - 4
512 MB DDR (400 MHz FSB) - 4
Power Supply 300W - 4
Seagate 7200RPM 80 GB HDD - 1
DVD/CD Drive - 1
Ethernet Cards (10/100 ) - 4
8 Port Ethernet Switch - 1


Software Requirements
------------------------

For the operating system we used Debian Etch (kernel version 2.6.18-4-686).
It's your choice; you can even use Ubuntu. The configuration is much the same for both,
but we will be referring to Debian Etch in this article.


Arrangement
--------------

We will have one master node and three slave nodes. The master node is the only one with a hard disk drive and a DVD/CD drive; the slave nodes have only their processors, motherboards, RAM, and power supplies. All installations are made on the master node and exported to the slave nodes via NFS. The nodes are assembled separately and wired to the switch through their Ethernet cards using Cat 6 cables; the physical arrangement is up to you.



1.Installation
-----------------------

The first step is to install the OS on the head node (master node).
Put the Debian Etch CD into the drive and boot into the installation procedure.
The disk was partitioned as follows...

/ - 20 GB
/home - 26 GB
Node partitions - 11 GB x 3 (these are the partitions that will be exported via NFS)
Swap - 1 GB


You can change these according to your wish.


Finish the Installation Procedure for the Master Node.


Now for the slave node installation...
The OS for each slave node is installed on the master node's disk.
Start the installation procedure for the slave nodes by booting from the Debian Etch CD/DVD. This time, the installation is made with one of the 11 GB partitions we created as the root (/) partition.

"DO NOT INSTALL THE BOOT LOADER THIS TIME. VERY IMPORTANT!"

Repeat this installation procedure two more times to install the OS for the other two nodes (again, please do not install the GRUB boot loader).

So you are done installing the OS...



2.CONFIGURATION OF NODES
---------------------------------------------------


a)Mounting the three / partitions

Mount the partitions for the slave nodes to the following directories by editing /etc/fstab.

The partitions are mounted to /nodes/nfs/node1, /nodes/nfs/node2, and /nodes/nfs/node3.

Our /etc/fstab is as follows:


/dev/sda7 /nodes/nfs/node1 ext3 defaults 0 0
/dev/sda8 /nodes/nfs/node2 ext3 defaults 0 0
/dev/sda9 /nodes/nfs/node3 ext3 defaults 0 0


Make sure the /etc/fstab of each node (/nodes/nfs/node'x'/etc/fstab) is also edited accordingly.

IMPORTANT: The options field must be set to 'defaults'.
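The mount points themselves have to exist before 'mount -a' will work. A small sketch (it defaults to a scratch prefix so you can rehearse it safely; run it with PREFIX='' on the real master):

```shell
#!/bin/sh
# Sketch: create the mount points for the three node root partitions.
# PREFIX defaults to a scratch directory; set PREFIX='' on the real master.
PREFIX=${PREFIX-/tmp/cluster-rehearsal}
for n in 1 2 3; do
  mkdir -p "$PREFIX/nodes/nfs/node$n"
done
ls "$PREFIX/nodes/nfs"
# On the real master, follow up with: mount -a
```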


b)Installing NFS Utilities
--------------------------------------

apt-get install nfs-common nfs-kernel-server


Edit /etc/exports and add the following lines:


/nodes/nfs/node1 192.168.0.0/24(rw,no_root_squash,sync,no_subtree_check)
/nodes/nfs/node2 192.168.0.0/24(rw,no_root_squash,sync,no_subtree_check)
/nodes/nfs/node3 192.168.0.0/24(rw,no_root_squash,sync,no_subtree_check)
/nodes 192.168.0.0/24(rw,no_root_squash,sync,no_subtree_check)
/home 192.168.0.0/24(rw,no_root_squash,sync,no_subtree_check)
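After editing /etc/exports, make the NFS server pick up the new entries (standard Debian commands):

```shell
exportfs -ra                            # re-read /etc/exports
/etc/init.d/nfs-kernel-server restart   # restart the NFS kernel server
exportfs -v                             # verify the exported directories
```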


c)DHCP Server Installation
---------------------------------------------

apt-get install dhcp

Configure /etc/dhcpd.conf

authoritative;
option domain-name "project";
option subnet-mask 255.255.255.0;
next-server 192.168.0.20; # TFTP server
filename "/tftpboot/pxelinux.0";

subnet 192.168.0.0 netmask 255.255.255.0 {
range 192.168.0.20 192.168.0.25;
option domain-name-servers 192.168.0.1;
option broadcast-address 192.168.0.255;
}

host node1 {
hardware ethernet MAC_ID_OF_NODE1; # MAC address of node 1's Ethernet card
fixed-address 192.168.0.21;
option root-path "/nodes/nfs/node1";
}
host node2 {
hardware ethernet MAC_ID_OF_NODE2; # MAC address of node 2's Ethernet card
fixed-address 192.168.0.22;
option root-path "/nodes/nfs/node2";
}
host node3 {
hardware ethernet MAC_ID_OF_NODE3; # MAC address of node 3's Ethernet card
fixed-address 192.168.0.23;
option root-path "/nodes/nfs/node3";
}
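Once /etc/dhcpd.conf is in place, restart the daemon and watch the log while a node boots (the init script name here is assumed to match the 'dhcp' package above):

```shell
/etc/init.d/dhcp restart
tail -f /var/log/syslog    # look for DHCPDISCOVER/DHCPOFFER lines from the nodes
```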



d)Installing TFTP
-----------------------------

apt-get install atftpd xinetd

Configure atftpd to run under xinetd: create the file /etc/xinetd.d/tftp and add the following lines.

service tftp
{
disable = no
socket_type = dgram
protocol = udp
wait = yes
user = nobody
server = /usr/sbin/in.tftpd
server_args = --tftpd-timeout 300 --retry-timeout 5 --mcast-port 1758 --mcast-addr 239.239.239.0-255 --mcast-ttl 1 --maxthread 100 --verbose=5 /tftpboot
}
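xinetd has to be restarted to pick up the new tftp service, and you can confirm that something is listening on the TFTP port:

```shell
/etc/init.d/xinetd restart
netstat -lnu | grep ':69 '   # TFTP should now be listening on UDP port 69
```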



e)Create /tftpboot folder

mkdir /tftpboot
chmod 777 /tftpboot


f)Configuring PXE
-------------------------------

A. (i) Download PXELINUX from http://syslinux.zytor.com/pxe.php.
(ii) Extract the file pxelinux.0 from the tarball and copy it to /tftpboot.

B. Generate the initial ramdisk and place it, along with the kernel image, in the /tftpboot directory using the following steps:
(i) Change root to /nodes/nfs/node1: sudo chroot /nodes/nfs/node1 /bin/bash
(ii) Go to /etc/initramfs-tools and edit initramfs.conf; set 'BOOT=nfs' in the file.
(iii) Update the initramfs: sudo update-initramfs -u
(iv) Exit the chroot (type 'exit') and copy the kernel image and initramfs from /nodes/nfs/node1/boot to /tftpboot (KERNEL_VERSION is to be replaced with the version of the kernel, which can be found using 'uname -r'):
sudo cp /nodes/nfs/node1/boot/initrd.img-KERNEL_VERSION /tftpboot/
sudo cp /nodes/nfs/node1/boot/vmlinuz-KERNEL_VERSION /tftpboot/

C. Create the folder /tftpboot/pxelinux.cfg
Create files in this folder named 01-xx-xx-xx-xx-xx-xx, where xx-xx-xx-xx-xx-xx is the MAC address of the node's network card (lowercase hex, separated by dashes).

Edit the contents of the files as follows (KERNEL_VERSION is the version of the kernel, 'x' should be replaced with the node number, and the append line must be a single line):

default linux
label linux
kernel vmlinuz-KERNEL_VERSION
append initrd=initrd.img-KERNEL_VERSION root=/dev/nfs nfsroot=192.168.0.20:/nodes/nfs/nodex ip=dhcp
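With three nodes, the per-MAC files can be generated in one loop. This is a sketch: the MAC addresses below are placeholders, KVER should match your kernel, and the NFS server address is assumed to be the master's IP. It writes to a preview directory by default; point CFGDIR at /tftpboot/pxelinux.cfg on the real master.

```shell
#!/bin/sh
# Sketch: generate pxelinux.cfg/01-<mac> files for the three nodes.
# The MACs are placeholders; KVER should match 'uname -r' on the nodes.
KVER=${KVER:-2.6.18-4-686}
CFGDIR=${CFGDIR:-/tmp/pxelinux.cfg-preview}   # use /tftpboot/pxelinux.cfg for real
NFS_SERVER=${NFS_SERVER:-192.168.0.20}        # assumed master/TFTP server address
mkdir -p "$CFGDIR"
n=1
for mac in 00-11-22-33-44-01 00-11-22-33-44-02 00-11-22-33-44-03; do
  cat > "$CFGDIR/01-$mac" <<EOF
default linux
label linux
kernel vmlinuz-$KVER
append initrd=initrd.img-$KVER root=/dev/nfs nfsroot=$NFS_SERVER:/nodes/nfs/node$n ip=dhcp
EOF
  n=$((n+1))
done
ls "$CFGDIR"
```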


All the configuration is done. Now reboot the slave nodes and change the boot device to network boot in the BIOS.

If all goes right, the cluster should be up and running.

Then you can install the PVM libraries or Open MPI to do the parallel processing.

ROCK ON!!!!!