What is the Apache Web Server?
adapted from www.apache.org
Download the Apache source from http://httpd.apache.org
Current stable version available: Apache 2.2.6
Make sure you have at least 50 MB of temporary free disk space available. After
installation Apache occupies approximately 10 MB of disk space. The actual disk-space
requirement will vary considerably depending on your chosen configuration options and any third-party modules.
ANSI-C Compiler and Build System
Make sure you have an ANSI-C compiler installed. The GNU C compiler (GCC) from
the Free Software Foundation (FSF) is recommended. If you don't have GCC then at least make
sure your vendor's compiler is ANSI compliant. In addition, your PATH must contain basic build
tools such as make.
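These prerequisites can be checked from a shell before starting the build; a minimal sketch (the tool names checked are the common defaults):

```shell
# Report whether a C compiler and make are available on the PATH
# before attempting to build Apache from source.
for tool in gcc cc make; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found at $(command -v "$tool")"
  else
    echo "$tool: not found"
  fi
done
```

If gcc is missing, your vendor's cc may still be ANSI compliant; the loop simply reports each tool's location or absence.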
Configuring & Installing Apache
Download From http://httpd.apache.org/download.cgi
Extract # gzip -d httpd-2.2.6.tar.gz
# tar -xvf httpd-2.2.6.tar
# cd httpd-2.2.6
Configure # ./configure --prefix=PREFIX
Compile # make
Install # make install
Test # PREFIX/bin/apachectl start
PREFIX must be replaced with the filesystem path under which the server should
be installed. If PREFIX is not specified, it defaults to /usr/local/apache2.
PREFIX = /apache is used in this document from here on
Starting & Stopping the Apache Server
1. Starting # /apache/bin/apachectl start
2. Stopping # /apache/bin/apachectl stop
3. Restarting # /apache/bin/apachectl restart
Configuring the Apache Server
Server Side: httpd.conf
This is the primary configuration file. It traditionally contains configuration settings for
the HTTP protocol and for the operation of the server. This file is processed first when the server starts.
Client Side:
No configuration as such is required on the client side; the only requirement is that the
client have a browser such as Mozilla Firefox or Links installed and be able to
connect to the network.
Important Directives of the httpd.conf
1) ServerRoot
2) Listen
3) ServerAdmin
4) ServerName
5) DocumentRoot
6) User / Group
7) DirectoryIndex
8) ErrorLog
9) Alias / ScriptAlias
10) VirtualHost
Sample Configuration File (httpd.conf) with explanation of each Statement/Directive/Block
Apache version: Apache 2.2.6
Apache is installed/configured in: /apache
Location of HTML pages: /home/user/apachedocroot/htdocs
Location of cgi-bin: /home/user/apachedocroot/cgi-bin
IP address: 192.168.10.3
# Sample configuration file httpd.conf begins

# ServerRoot is the top of the directory tree under which configuration, error,
# and log files are kept.
ServerRoot "/apache"

# Listen allows Apache to bind to a specific IP address and/or port.
Listen 192.168.10.3:80

# ServerAdmin is the mail address of the server administrator. This address is
# used in some server-generated pages, such as error documents.
# (The address below is a placeholder.)
ServerAdmin admin@example.com

# ServerName is the name and port that the server uses to identify itself. It is
# usually the registered DNS name of the host; if there is no registered name,
# specify the IP address.
ServerName 192.168.10.3:80

# It is usually good practice to create a dedicated user and group for running
# httpd. httpd is first run as the root user and then automatically switches to
# this unprivileged user.
User apache
Group apache

# DocumentRoot is the directory that contains the documents (HTML files etc.).
# By default, all requests are served from this directory.
DocumentRoot "/home/user/apachedocroot/htdocs"
# Each directory to which Apache has access can be configured individually.
# First, the default should be made very restrictive:
<Directory />
    Options None
    AllowOverride None
    Order allow,deny
    Deny from all
</Directory>
# This block configures the directory that contains the main documents.
# Options varies per directory depending on its contents and can be any
# combination of None, All, Indexes, FollowSymLinks, SymLinksIfOwnerMatch,
# and ExecCGI.
# AllowOverride controls which directives may be placed in .htaccess files;
# it can be None, All, FileInfo, AuthConfig, or Limit.
# Order, Allow, and Deny control who gets access to the files in a directory.
<Directory "/home/user/apachedocroot/htdocs">
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

# DirectoryIndex tells Apache which file to look for when a directory is requested.
DirectoryIndex index.html

# ErrorLog specifies the location of the error log file.
ErrorLog logs/error_log
# ScriptAlias controls which directory contains the CGI scripts. When
# www.server.com/cgi-bin/ is requested, the server looks for the scripts in
# /home/user/apachedocroot/cgi-bin.
ScriptAlias /cgi-bin/ "/home/user/apachedocroot/cgi-bin"
# Virtual host section.
# Virtual hosts can be of two types: 1) name-based and 2) IP-based (the server
# uses more than one IP address, e.g. more than one NIC).

# Declaration of a name-based virtual host: when www.example.org is requested,
# the server serves the content of /www/example2. The end user never knows
# whether the content came from the same server or from a different one. This
# is only possible when the server can rely on DNS to resolve names to IP
# addresses.
NameVirtualHost 192.168.10.3:80
<VirtualHost 192.168.10.3:80>
    ServerName www.example.org
    DocumentRoot /www/example2
</VirtualHost>

# Declaration of an IP-based virtual host. Apache can also listen on more than
# one IP address at a time and differentiate between the requests it receives;
# with a configuration like the one below it serves different content on each
# address. (The second address, name, and path are placeholders.)
<VirtualHost 192.168.10.4:80>
    ServerName www.example.net
    DocumentRoot /www/example3
</VirtualHost>
# mod_userdir: specifies the per-user directory that acts as that user's
# document root. In the configuration below the public_html directory in each
# user's home is that user's document root: when a client requests
# www.server.com/~username, the server serves the request from that user's
# public_html directory.
UserDir disabled root
UserDir public_html
<Directory /home/*/public_html>
    AllowOverride FileInfo AuthConfig Limit Indexes
    Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
    Order allow,deny
    Allow from all
</Directory>

# Sample configuration file httpd.conf ends
The firewall should be configured so that clients can access port 80 on the server.
Network File System
Share files in a linux network
Operating System: LINUX
Packages required: nfs server and nfs client
Version used : nfs client: 1.1.0-8-i586
               nfs server: 1.3.2-7-i586
Download source for nfs server package: http://pkgsrc.se/wip/linux-nfs-utils
Clients : 192.168.0.2 & 192.168.0.3
CONFIGURING NFS SERVER AND CLIENT:
There are three main configuration files you will need to edit to set up an NFS server: /etc/exports, /etc/hosts.allow, and /etc/hosts.deny .
Strictly speaking, you only need to edit /etc/exports to get NFS to work, but this would lead to an extremely insecure setup.
The exports file contains a list of entries specifying what is to be shared and how. For an NFS setup,
this is the most important file.
Open the file as the root user with the following command:
# vi /etc/exports
Make the following entry:
/home 192.168.0.2(rw) 192.168.0.3(ro)
Then save and exit the file.
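Beyond the basic rw/ro flags, /etc/exports accepts further per-host options; a sketch with commonly used options (the second export line is purely illustrative):

```text
# /etc/exports -- options appear in parentheses after each host
# sync        : reply only after changes are committed to stable storage (safer)
# root_squash : map requests from remote root to an anonymous user
/home   192.168.0.2(rw,sync)   192.168.0.3(ro,sync)
/pub    *(ro,sync,root_squash)
```

After editing, running exportfs -ra re-exports the entries without restarting the service.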
To start the NFS service on the server machine, use the following command as root:
# service nfs start
If it is already running, restart it instead:
# service nfs restart
Check that the following daemons are running:
portmapper: tells requesting clients how to find all NFS services on the server.
mountd: handles the mounting functionality.
nfsd: the network file sharing daemon.
Use the command:
# rpcinfo -p
Ensure that no firewall is blocking the clients from reaching the server.
On the client machine:
Step 1: Start the NFS service using the following command:
# service nfs start
Step 2: Check that the required daemons are running. At least the portmapper must be running for NFS to work. Use the command:
# rpcinfo -p
Step 3: Create a mount point on the client where the shared directory will be mounted, e.g.:
# mkdir /nfs
Check for shared files using the following command:
# showmount -e <server-ip>
e.g. # showmount -e 192.168.0.1
This shows the list of directories or files being shared over NFS.
Step 4: Finally, mount the shared directory on the client machine using the following command:
# mount <server-ip>:<shared directory> <mount point on client>
e.g. # mount 192.168.0.1:/home /nfs
Once mounted, all contents of the shared directory are accessible to the client.
TESTING THE SETUP:
1. Run rpcinfo -p on both the server and the client to check whether all services required for NFS are running.
2. Once the setup is done, run showmount -e from the client side to verify which NFS directories are exported.
ADDING SECURITY TO NFS:
The basic NFS setup does not add any security to the files shared over the network, so these
files can be accessed by unwanted parties. To add security to the above setup, two other files need to be configured:
/etc/hosts.allow and /etc/hosts.deny
These two files specify which computers on the network can use services on your machine. Each line of the file contains a single entry
listing a service and a set of machines. When the server gets a request from a machine, it does the following:
1. It first checks hosts.allow to see if the machine matches a rule listed here. If it does, then the machine is allowed access.
2. If the machine does not match an entry in hosts.allow the server then checks hosts.deny to see if the client matches a rule listed
there. If it does then the machine is denied access.
3. If the client matches no listings in either file, then it is allowed access.
The first step is to add the following entry to /etc/hosts.deny:
portmap: ALL
By adding this entry we ensure that the portmapper daemon cannot be accessed by any client other than those specified in /etc/hosts.allow. Alternatively, we can list only the IP addresses or hostnames of the clients whose access needs to be restricted.
Next, we need to add an entry to hosts.allow to grant access to the hosts that should have it. (If we just leave the above lines in hosts.deny, then nobody will have access to NFS.) The entry format is service: hostname, e.g.:
portmap: 192.168.0.2 , 192.168.0.3
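Putting the two files together, a commonly used pattern is to deny the NFS-related daemons to everyone in hosts.deny and then allow only the trusted clients in hosts.allow (the exact daemon names may vary between distributions):

```text
# /etc/hosts.deny -- deny the NFS-related daemons to everyone by default
portmap: ALL
lockd:   ALL
mountd:  ALL
rquotad: ALL
statd:   ALL

# /etc/hosts.allow -- then explicitly allow the trusted clients
portmap: 192.168.0.2 , 192.168.0.3
lockd:   192.168.0.2 , 192.168.0.3
mountd:  192.168.0.2 , 192.168.0.3
rquotad: 192.168.0.2 , 192.168.0.3
statd:   192.168.0.2 , 192.168.0.3
```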
Website up and running
for more info visit www.gnunify.in
Process: A process is a sequential program in execution. The components of a process are the following:
The Object Program (or code) to be executed.
The data on which the program will execute
Resources required by the program
The status of Program execution
CPU Burst (usage time): The amount of time a process needs to be in the running state before it is completed.
Turnaround Time: The amount of time between the moment a process first enters the ready state and the moment it exits the running state for the last time.
Waiting Time: The time the process spends waiting in the ready state before its first transition to the running state.
Time Quantum (time slice): The amount of time between timer interrupts (used when the process manager uses an interval timer).
The scheduling policy determines when it is time for a process to be removed from the CPU and which ready process should be allocated the CPU next.
Preemptive: A strategy for time-multiplexing the processor whereby a running process is removed from the processor whenever a higher-priority process becomes ready to execute.
Non Preemptive: A strategy for time-multiplexing the processor whereby a process does not release the processor until it has completed its work.
First Come First Served: FCFS, also known as First-In-First-Out (FIFO), is the simplest scheduling policy. Arriving jobs are inserted into the tail (rear) of the ready queue and the process to be executed next is removed from the head (front) of the queue.
Shortest Job First: The SJF policy selects the job with the shortest (expected) processing time first. Shorter jobs are always executed before long jobs; long-running jobs may starve when the CPU has a steady supply of short jobs.
Round Robin: RR reduces the penalty that short jobs suffer with FCFS by preempting running jobs periodically. The CPU suspends the current job when the reserved quantum (time-slice) is exhausted. The job is then put at the end of the ready queue if not yet completed.
Priority : Each process is assigned a priority. The ready list contains an entry for each process ordered by its priority. The process at the beginning of the list (highest priority) is picked first. A variation of this scheme allows preemption of the current process when a higher priority process arrives.
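The effect of the scheduling policy on waiting time can be made concrete with a small calculation. The sketch below (the burst values are illustrative, not from this document) computes the average waiting time for the same three CPU bursts under an FCFS order and an SJF order:

```shell
# Average waiting time for a given execution order of CPU bursts.
# Each job waits for the sum of the bursts that run before it.
avg_wait() {
  t=0; total=0; n=0
  for b in "$@"; do
    total=$((total + t))   # this job waited t units before starting
    t=$((t + b))           # clock advances by its burst length
    n=$((n + 1))
  done
  echo $((total / n))
}

echo "FCFS order 24 3 3 -> avg wait $(avg_wait 24 3 3)"   # (0+24+27)/3 = 17
echo "SJF  order 3 3 24 -> avg wait $(avg_wait 3 3 24)"   # (0+3+6)/3  = 3
```

With the short jobs first, the average waiting time drops from 17 to 3 time units, which is why SJF minimizes average waiting time but can starve long jobs.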
AJAX (Asynchronous JavaScript and XML) combines:
1) XHTML - a stricter, cleaner rendering of HTML into XML
2) CSS for marking up and adding styles
3) JavaScript and the Document Object Model (DOM) for dynamically displaying and interacting with the information presented
4) The XMLHttpRequest object, which exchanges data asynchronously with the Web server, reducing the need to continually fetch resources from the server
Since data can be sent and retrieved without requiring the user to reload an entire Web page, small amounts of data can be transferred as and when required. Moreover, page elements can be dynamically refreshed at any level of granularity to reflect this. An AJAX application performs in a similar way to local applications residing on a user's machine, resulting in a user experience that may differ from traditional Web browsing.