22.214.171.124/8 Internet Assigned Numbers Authority
126.96.36.199/8 Internet Assigned Numbers Authority
188.8.131.52/8 General Electric Company
184.108.40.206/8 Level 3 Communications, Inc.
220.127.116.11/8 Internet Assigned Numbers Authority
18.104.22.168/8 Headquarters, USAISC
22.214.171.124/8 DoD Network Information Center
126.96.36.199/8 Level 3 Communications, Inc.
10.0.0.0/8 Internet Assigned Numbers Authority (Private Special Use)
188.8.131.52/8 DoD Network Information Center
184.108.40.206/8 AT&T WorldNet Services
220.127.116.11/8 Xerox Corporation
18.104.22.168/8 Internet Assigned Numbers Authority
22.214.171.124/8 Hewlett-Packard Company
126.96.36.199/8 Hewlett-Packard Company
188.8.131.52/8 Apple Computer, Inc.
184.108.40.206/8 Massachusetts Institute of Technology
220.127.116.11/8 Ford Motor Company
18.104.22.168/8 Computer Sciences Corporation
22.214.171.124/8 DoD Network Information Center
126.96.36.199/8 DoD Network Information Center
188.8.131.52/8 Internet Assigned Numbers Authority
184.108.40.206/8 Comcast Cable Communications, Inc.
220.127.116.11/8 Not Found (or info hidden)
18.104.22.168/8 DoD Network Information Center
22.214.171.124/8 Internet Assigned Numbers Authority
126.96.36.199/8 DoD Network Information Center
188.8.131.52/8 DoD Network Information Center
184.108.40.206/8 DoD Network Information Center
220.127.116.11/8 Merit Network Inc.
18.104.22.168/8 Eli Lilly and Company
22.214.171.124/8 Interop Show Network
126.96.36.199/8 AT&T Internet Services
188.8.131.52/8 RIPE Network Coordination Centre
184.108.40.206/8 RIPE Network Coordination Centre
220.127.116.11/8 RIPE Network Coordination Centre
18.104.22.168/8 RIPE Network Coordination Centre
22.214.171.124/8 - 126.96.36.199/8 Internet Assigned Numbers Authority
188.8.131.52/8 - 184.108.40.206/8 Asia Pacific Network Information Centre
What is the Apache Web Server?
- is a powerful, flexible, HTTP/1.1 compliant web server
- implements the latest protocols, including HTTP/1.1 (RFC2616)
- is highly configurable and extensible with third-party modules
- can be customised by writing 'modules' using the Apache module API
- provides full source code and comes with an unrestrictive license
- runs on Windows 2003/XP/2000/NT/9x, NetWare 5.x and above, OS/2, and most versions of Unix, as well as several other operating systems
- is actively being developed
- encourages user feedback through new ideas, bug reports and patches
adapted from www.apache.org
Download the Apache source from http://httpd.apache.org
Current stable version: Apache 2.2.6
Make sure you have at least 50 MB of temporary free disk space available. After
installation Apache occupies approximately 10 MB of disk space. The actual disk space
requirements will vary considerably based on your chosen configuration options and any third-party modules.
ANSI-C Compiler and Build System
Make sure you have an ANSI-C compiler installed. The GNU C compiler (GCC) from
the Free Software Foundation (FSF) is recommended. If you don't have GCC then at least make
sure your vendor's compiler is ANSI compliant. In addition, your PATH must contain basic build
tools such as make.
Configuring & Installing Apache
Download From http://httpd.apache.org/download.cgi
Extract # gzip -d httpd-2.2.6.tar.gz
# tar -xvf httpd-2.2.6.tar
# cd httpd-2.2.6
Configure # ./configure --prefix=PREFIX
Compile # make
Install # make install
Test # PREFIX/bin/apachectl start
PREFIX must be replaced with the filesystem path under which the server should
be installed. If PREFIX is not specified, it defaults to /usr/local/apache2.
PREFIX = /apache is used in this document from here on
Starting & Stopping the Apache Server
1. Starting # /apache/bin/apachectl start
2. Stopping # /apache/bin/apachectl stop
3. Restarting # /apache/bin/apachectl restart
Configuring the Apache Server
Server Side: httpd.conf
This is the primary configuration file. It traditionally contains configuration settings for the HTTP protocol and for the operation of the server, and it is the first file processed when the server starts.
Client Side:
No configuration as such is required on the client side; the only requisite is that the client has a browser such as Mozilla Firefox or Links installed and is able to connect to the network.
Important Directives of httpd.conf
1) ServerRoot
2) Listen
3) ServerAdmin
4) ServerName
5) DocumentRoot
6) User / Group
7) DirectoryIndex
8) Alias / ScriptAlias
9) VirtualHost
Sample Configuration File (httpd.conf) with explanation of each Statement/Directive/Block
Apache version: 2.2.6
Apache installed/configured in: /apache
Location of HTML pages: /home/user/apachedocroot/htdocs
Location of CGI-BIN: /home/user/apachedocroot/cgi-bin
IP address: 192.168.10.3
# Sample configuration file httpd.conf begins

# ServerRoot is the top of the directory tree under which configuration, error and log files are kept.
ServerRoot "/apache"

# Listen allows Apache to bind to a specific IP address and/or port.
Listen 192.168.10.3:80

# ServerAdmin is the e-mail address of the server administrator; this address is used in some of the server-generated pages, such as error documents.
ServerAdmin admin@server.com

# ServerName is the name and port that the server uses to identify itself. It is usually the registered DNS name of the host; if there is no registered name, specify the IP address.
ServerName 192.168.10.3:80

# It is usually good practice to create a dedicated user and group for running httpd; httpd is first run as the root user and then automatically switches to this user.
User apache
Group apache

# DocumentRoot is the directory which contains the documents (HTML files etc.); by default all requests are served from this directory.
DocumentRoot "/home/user/apachedocroot/htdocs"
# Each directory to which Apache has access can be configured individually; at first we should configure the default to be very restrictive.
<Directory />
    Options FollowSymLinks
    AllowOverride None
    Order allow,deny
    Deny from all
</Directory>

# Under this block we are configuring the directory which contains the main documents.
# Options vary per directory depending on its contents and can be any combination of
# "None", "All", Indexes, FollowSymLinks, SymLinksIfOwnerMatch, ExecCGI.
# AllowOverride controls what directives may be placed in .htaccess files; it can be
# "None", "All", FileInfo, AuthConfig, Limit.
# Order & Allow control who gets access to the files in a particular directory.
<Directory "/home/user/apachedocroot/htdocs">
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

# DirectoryIndex tells Apache which file to look for when a directory is accessed.
DirectoryIndex index.html

# ErrorLog specifies the location of the error log file.
ErrorLog logs/error_log
# ScriptAlias controls which directory contains the server scripts: when www.server.com/cgi-bin/ is requested, the server looks for scripts in /home/user/apachedocroot/cgi-bin.
ScriptAlias /cgi-bin/ "/home/user/apachedocroot/cgi-bin"

# Virtual Host section.
# Virtual hosts can be of 2 types: 1) name-based 2) IP-based (server using more than one NIC).
# In the VirtualHost declaration the IP address is usually used in place of a name.
# Declaration of a name-based virtual host: when www.example.org is requested from the server, it serves the content of /www/example2. The end user never knows whether the content came from the same server or from a different one; this is only possible when a working DNS entry exists to resolve the name to the server's IP address.
NameVirtualHost 192.168.10.3:80
<VirtualHost 192.168.10.3:80>
    ServerName www.example.org
    DocumentRoot /www/example2
</VirtualHost>

# Declaration of an IP-based virtual host: Apache can also listen on more than one IP at a time and differentiate between the requests coming to each address, serving different content for different IPs. (The second address and document root below are assumed for illustration.)
<VirtualHost 192.168.10.4:80>
    DocumentRoot /www/example3
</VirtualHost>
# UserDir module: specifies the directory in each user's home which serves as that user's document root. In the configuration below, the public_html directory in each user's home is declared as his document root: when www.server.com/~username is requested, the server serves it from that user's public_html directory.
UserDir disabled root
UserDir public_html
<Directory /home/*/public_html>
    AllowOverride FileInfo AuthConfig Limit Indexes
    Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
    Order allow,deny
    Allow from all
</Directory>

# Sample configuration file httpd.conf ends
The firewall should be configured so that clients can reach port 80 on the server.
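The ScriptAlias directory configured above is meant to hold CGI programs. As a quick sanity check of the CGI setup, a minimal script can be dropped into the cgi-bin directory; this is only a sketch, and it assumes Python is installed on the server, that the file (e.g. hello.py, a name chosen here for illustration) is made executable with chmod +x, and that CGI execution is permitted for that directory:

```python
#!/usr/bin/env python
# Minimal CGI program for /home/user/apachedocroot/cgi-bin (the path
# used in the sample configuration above). A CGI program must write
# the HTTP headers, a blank line, and then the response body.

def response():
    body = "<html><body>Hello from the Apache CGI directory</body></html>"
    return "Content-Type: text/html\n\n" + body

if __name__ == "__main__":
    print(response())
```

Requesting http://192.168.10.3/cgi-bin/hello.py from a browser should then return the HTML body.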
Network File System
Share files in a Linux network
Operating system: Linux
Packages required: nfs server and nfs client
Versions used: nfs client 1.1.0-8-i586, nfs server 1.3.2-7-i586
Download source for the nfs server package: http://pkgsrc.se/wip/linux-nfs-utils
Clients: 192.168.0.2 & 192.168.0.3
CONFIGURING NFS SERVER AND CLIENT:
There are three main configuration files you will need to edit to set up an NFS server: /etc/exports, /etc/hosts.allow, and /etc/hosts.deny .
Strictly speaking, you only need to edit /etc/exports to get NFS to work, but this would lead to an extremely insecure setup.
The exports file contains a list of entries describing what is to be shared and how it is to be shared. For an NFS setup this is the most important file.
Open the file as the root user, e.g.:
vi /etc/exports
Make the following entry:
/home 192.168.0.2(rw) 192.168.0.3(ro)
Then save and exit the file.
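The format of an exports entry (a path followed by client(option) pairs) can be illustrated with a toy parser. This is only a sketch of the syntax shown above, not how nfs-utils actually reads the file (exportfs also handles wildcards, netgroups and many more options):

```python
import re

def parse_exports_line(line):
    """Split one /etc/exports entry into (path, [(client, [options])])."""
    parts = line.split()
    path, clients = parts[0], []
    for spec in parts[1:]:
        # A client spec is a host, optionally followed by (opt1,opt2,...)
        m = re.match(r'([^(]+)(?:\(([^)]*)\))?$', spec)
        host = m.group(1)
        opts = m.group(2).split(',') if m.group(2) else []
        clients.append((host, opts))
    return path, clients

# The entry used in this document:
print(parse_exports_line('/home 192.168.0.2(rw) 192.168.0.3(ro)'))
```

Note that the option names are lowercase and there is no space between the client address and the opening parenthesis.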
Start the NFS service on the server machine using the following command as root:
service nfs start
If it is already running, then:
service nfs restart
Check that the following daemons are running:
portmapper: tells requesting clients how to find all NFS services on the server.
mountd: handles the mounting functionality.
nfs: the network file sharing daemon.
Use the command rpcinfo -p
Ensure that firewalls are not running, as they may prevent the clients from accessing the server.
On the client machine:
Step 1: Start the NFS service by using the following command:
service nfs start
Step 2: Check that the required daemons are running. At least the portmapper should be running in order for NFS to work.
Use the command rpcinfo -p
Step 3: Create a mount point on the client where the NFS directory from the server will be mounted.
e.g. mkdir /nfs
Check for shared files using the following command:
showmount -e <server-ip>
e.g. showmount -e 192.168.0.1
This will show the list of directories or files that are being shared over NFS.
Step 4: Finally, mount the shared directory on the client machine by using the following command:
mount <ip-address-of-server>:/<shared-directory> <mount-point-on-client>
e.g. mount 192.168.0.1:/home /nfs
Once mounted, all contents of the shared directory will be accessible to the client.
TESTING THE SETUP:
1. Run the rpcinfo -p command on both server and client to check whether all required services for NFS are running.
2. Once the setup is done, run the showmount -e command from the client side to verify which NFS files/directories are shared.
ADDING SECURITY TO NFS:
The basic setup of NFS does not add any kind of security to the files being shared over the network, so these files can be accessed by unwanted persons. In order to add security to the above NFS setup, two other files need to be configured:
/etc/hosts.allow and /etc/hosts.deny
These two files specify which computers on the network can use services on your machine. Each line of the file contains a single entry
listing a service and a set of machines. When the server gets a request from a machine, it does the following:
1. It first checks hosts.allow to see if the machine matches a rule listed here. If it does, then the machine is allowed access.
2. If the machine does not match an entry in hosts.allow the server then checks hosts.deny to see if the client matches a rule listed
there. If it does then the machine is denied access.
3. If the client matches no listings in either file, then it is allowed access.
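The three-step decision above can be sketched as a small function. This is a simplified model of the hosts_access(5) behaviour: real tcp_wrappers rules also support domain suffixes, netgroups and EXCEPT clauses, and only the ALL wildcard is modeled here:

```python
def host_allowed(service, client, allow_rules, deny_rules):
    """Apply the hosts.allow / hosts.deny check described in steps 1-3.

    Rules are dicts mapping a service name to a set of client addresses,
    with 'ALL' usable as a wildcard for either side.
    """
    def matches(rules):
        clients = rules.get(service, set()) | rules.get('ALL', set())
        return 'ALL' in clients or client in clients

    if matches(allow_rules):   # step 1: a hosts.allow match grants access
        return True
    if matches(deny_rules):    # step 2: a hosts.deny match refuses access
        return False
    return True                # step 3: no match in either file -> allowed

# The setup recommended below: deny portmap to everyone except the two clients.
allow = {'portmap': {'192.168.0.2', '192.168.0.3'}}
deny = {'portmap': {'ALL'}}
print(host_allowed('portmap', '192.168.0.2', allow, deny))  # True
print(host_allowed('portmap', '192.168.0.9', allow, deny))  # False
```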
The first step in doing this is to add the following entry to /etc/hosts.deny:
portmap: ALL
By adding the above entry we ensure that the portmapper daemon cannot be accessed by any client other than those specified in hosts.allow. Alternatively, we can specify the IP addresses or hostnames of the clients whose access needs to be restricted.
Next, we need to add an entry to hosts.allow to give access to the hosts that should have it. (If we just leave the above line in hosts.deny, then nobody will have access to NFS.) The format is:
service: hostname(s)
e.g. portmap: 192.168.0.2, 192.168.0.3
Website up and running
for more info visit www.gnunify.in
GNUnify, an annual technical extravaganza, is organized by the students of the Symbiosis Institute of Computer Studies and Research (SICSR). The name symbolizes GNU/Linux and the philosophy behind Free/Open Source: unifying and strengthening the free/open source movement, and sharing and spreading knowledge with the help of IT. Initiated in the year 2003, GNUnify is an international convergence of open minds who aspire to unfold their knowledge for the benefit of widespread IT, providing a platform for students and IT professionals from all over the world of free/open software. It is an effort to explore the abundant information of a domain which believes in free/open source software and has no bounds. Techie talks, a GNU/Linux install fest, workshops, boot-up sessions and a Q&A forum: this festival has it all.
GNUnify is scheduled for February 2008
Symbiosis Institute of Computer Studies and Research,
7th Floor, Atur Centre, Gokhale Cross Road,
Model Colony, Pune -16.
Computer Science is at the intellectual forefront of the Digital Revolution that will define the 21st century. That revolution is in its infancy but is visible all around us. New scientific, economic and social paradigms are arising from computing science and being felt across all sectors of the economy and society at large. The Symbiosis Institute of Computer Studies and Research (SICSR) is a recognized leader in the creation of scientific knowledge and practical technologies that are defining this historic transformation. Our mission is to facilitate ideas that will shape this new frontier. Innovation requires dedication to learning: in the classroom, in the research laboratory, and throughout one's professional career. At SICSR, we offer a unique educational opportunity for students to achieve excellence in both, through rigorous classes and participation in cutting-edge research.
Configure and Change Management
Middleware and platforms
Quality Assurance/Testing tools
Kernel Development/Embedded Systems:
Writing applications with KDE4
Porting the FreeBSD Kernel
Pushing drivers to user-space
Sahi for web testing/ Selenium
Eclipse Plug-in Development, and the Eclipse eco-system
Storage / Back up
Please feel free to discuss on these topics at http://groups.google.com/group/gnunify08
You can also add your own topic here
Process: A process is a sequential program in execution. The components of a process are the following:
The Object Program (or code) to be executed.
The data on which the program will execute
Resources required by the program
The status of Program execution
CPU Burst (Usage Time): The amount of time a process needs to be in the running state before it is completed
Turnaround Time: The amount of time between the moment the process first enters the ready state and the moment the process exits the running state for the last time.
Waiting Time : The time the process spends waiting in the ready state before its first transition to the running state.
Time Quantum (time slice): The amount of time between timer interrupts. (Used when the process manager uses an interval timer.)
The scheduling policy determines when it is time for a process to be removed from the CPU and which ready process should be allocated the CPU next.
Preemptive: A strategy for time-multiplexing the processor whereby a running process is removed from the processor whenever a higher-priority process becomes ready to execute.
Non Preemptive: A strategy for time-multiplexing the processor whereby a process does not release the processor until it has completed its work.
First Come First Served: FCFS, also known as First-In-First-Out (FIFO), is the simplest scheduling policy. Arriving jobs are inserted into the tail (rear) of the ready queue and the process to be executed next is removed from the head (front) of the queue.
Shortest Job First: The SJF policy selects the job with the shortest (expected) processing time first. Shorter jobs are always executed before long jobs; as a result, long-running jobs may starve if the CPU has a steady supply of short jobs.
Round Robin: RR reduces the penalty that short jobs suffer with FCFS by preempting running jobs periodically. The CPU suspends the current job when the reserved quantum (time-slice) is exhausted. The job is then put at the end of the ready queue if not yet completed.
Priority : Each process is assigned a priority. The ready list contains an entry for each process ordered by its priority. The process at the beginning of the list (highest priority) is picked first. A variation of this scheme allows preemption of the current process when a higher priority process arrives.
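The policies above can be compared with a short simulation. The burst times below are invented for illustration, and every job is assumed to arrive at time 0, so turnaround time equals completion time and waiting time = turnaround - burst:

```python
from collections import deque

# Processes as (name, cpu_burst), all arriving at time 0.
PROCS = [("P1", 7), ("P2", 4), ("P3", 1), ("P4", 4)]

def fcfs(procs):
    """First Come First Served: run each job to completion in queue order."""
    t, out = 0, {}
    for name, burst in procs:
        t += burst
        out[name] = (t, t - burst)            # (turnaround, waiting)
    return out

def sjf(procs):
    """Shortest Job First: FCFS over the jobs sorted by burst length."""
    return fcfs(sorted(procs, key=lambda p: p[1]))

def round_robin(procs, quantum):
    """Round Robin: preempt the running job when its quantum expires."""
    t, done = 0, {}
    queue = deque(procs)
    bursts = dict(procs)
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)
        t += run
        if left > run:
            queue.append((name, left - run))  # back to the tail of the ready queue
        else:
            done[name] = (t, t - bursts[name])
    return done

for label, result in [("FCFS", fcfs(PROCS)),
                      ("SJF", sjf(PROCS)),
                      ("RR q=2", round_robin(PROCS, 2))]:
    avg_wait = sum(w for _, w in result.values()) / len(result)
    print(label, result, "average waiting time:", avg_wait)
```

Running it shows the trade-offs discussed above: SJF gives the lowest average waiting time by running the one-unit job P3 first, while RR bounds how long any short job waits at the cost of a longer turnaround for the long job P1.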