My First Outdoor Workshop

Eagerly waiting for my first outdoor workshop.
Mr. Gaurav Pant and I will be conducting a workshop in Kolhapur (Maharashtra) this Wednesday. We have not yet decided everything we will be covering, but at the top of the list are PHP, Symfony and at least one Content Management System.
Hoping for a good time.

Configuring The Apache Web Server

What is the Apache Web Server?

  • is a powerful, flexible, HTTP/1.1 compliant web server
  • implements the latest protocols, including HTTP/1.1 (RFC2616)
  • is highly configurable and extensible with third-party modules
  • can be customised by writing 'modules' using the Apache module API
  • provides full source code and comes with an unrestrictive license
  • runs on Windows 2003/XP/2000/NT/9x, Netware 5.x and above, OS/2, and most versions of Unix, as well as several other operating systems
  • is actively being developed
  • encourages user feedback through new ideas, bug reports and patches

adapted from www.apache.org

Download the Apache source from httpd.apache.org

Current stable version available: Apache 2.2.6

Requirements

Disk Space:
Make sure you have at least 50 MB of temporary free disk space available. After
installation Apache occupies approximately 10 MB of disk space. The actual disk space
requirements will vary considerably based on your chosen configuration options and any third-party modules.

ANSI-C Compiler and Build System

Make sure you have an ANSI-C compiler installed. The GNU C compiler (GCC) from
the Free Software Foundation (FSF) is recommended. If you don't have GCC then at least make
sure your vendor's compiler is ANSI compliant. In addition, your PATH must contain basic build
tools such as make.


Configuring & Installing Apache
Download From http://httpd.apache.org/download.cgi
Extract # gzip -d httpd-2.2.6.tar.gz
# tar -xvf httpd-2.2.6.tar
# cd httpd-2.2.6

Configure # ./configure --prefix=PREFIX
Compile # make
Install # make install
Test # PREFIX/bin/apachectl start

PREFIX must be replaced with the filesystem path under which the server should
be installed. If PREFIX is not specified, it defaults to /usr/local/apache2.
PREFIX = /apache is used in this document from here on.

Starting & Stopping the Apache Server

1. Starting # /apache/bin/apachectl start
2. Stopping # /apache/bin/apachectl stop
3. Restarting # /apache/bin/apachectl restart

Configuring the Apache Server
Server Side: httpd.conf

This is the primary configuration file. It traditionally contains configuration settings for
the HTTP protocol and for the operation of the server. This file is processed first when the server starts.

Client Side:

No configuration as such is required on the client side. The only requisites are that the
client should have a browser such as Mozilla Firefox or Links installed and should be able to
connect to the network.

Important Directives of the httpd.conf
1) ServerName
2) ServerRoot
3) ServerAdmin
4) DocumentRoot
5) Listen
6) User / Group
7) Include
8) Directory
9) DirectoryIndex
10) Alias / ScriptAlias
11) Virtual Hosts
12) UserDir

Sample Configuration File (httpd.conf) with explanation of each Statement/Directive/Block
Apache version: 2.2.6
Apache installed/configured in: /apache
Location of HTML pages: /home/user/apachedocroot/htdocs
Location of CGI-BIN: /home/user/apachedocroot/cgi-bin
Port: 80
IP address: 192.168.10.3

# Sample Configuration file httpd begins
ServerRoot "/apache"
# ServerRoot is the top of the directory tree under which configuration, error and log files are kept.

Listen 80
# Listen tells Apache to bind to a specific IP address and/or port.

ServerAdmin admin@server.com
# ServerAdmin is the e-mail address of the server administrator. This address is used in some server-generated pages, such as error documents.

ServerName www.server.com:80

# ServerName is the name and port that the server uses to identify itself.
# It is usually the registered DNS name of the host; if there is no registered name, specify the IP address.


User apache
Group www

# It is usually good practice to create a dedicated user and group for running httpd.
# httpd is first started as the root user and then automatically switches to this user.

DocumentRoot "/home/user/apachedocroot/htdocs"

# This is the directory which contains the documents (HTML files etc.). By default all requests are served from this directory.


<Directory />
    Options FollowSymLinks
    AllowOverride None
    Order allow,deny
    Deny from all
</Directory>

# Each directory to which Apache has access can be configured individually.
# At first we configure the default (the root directory) to be very restrictive.



<Directory "/home/user/apachedocroot/htdocs">
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

# Under this block we configure the directory which contains the main documents.
# Options varies per directory depending on its contents and can be any combination of
# None, All, Indexes, FollowSymLinks, SymLinksIfOwnerMatch and ExecCGI.
# AllowOverride controls which directives may be placed in .htaccess files; it can be
# None, All, FileInfo, AuthConfig or Limit.
# Order and Allow control who gets access to files in a particular directory.


DirectoryIndex index.html

# This tells Apache which file to look for when a directory is accessed.


ErrorLog logs/error_log

# Specifies the location of the error log file.


ScriptAlias /cgi-bin/ "/home/user/apachedocroot/cgi-bin/"

# ScriptAlias controls which directory contains the scripts. When www.server.com/cgi-bin/ is requested, the server looks for scripts in /home/user/apachedocroot/cgi-bin.

# Virtual Host section
# Virtual hosts can be of two types: 1) name-based 2) IP-based (server using more than one NIC)

NameVirtualHost 192.168.10.3

# The IP address is usually used in place of a name when declaring name-based virtual hosts.


<VirtualHost 192.168.10.3>
    DocumentRoot /www/example2
    ServerName www.example.org
</VirtualHost>

# Declaration of a name-based virtual host. When www.example.org is requested from the server, it serves the content of /www/example2; the end user never knows whether the content came from the same server or a different one. This is only possible when a working DNS setup exists to resolve names to IP addresses.
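To make the mechanism concrete, the Host-header lookup behind name-based virtual hosting can be sketched in Python. The host names and paths reuse the examples from this document; the function itself is an illustration of the idea, not part of Apache:

```python
# Name-based virtual hosting: every vhost shares one IP, so the server
# picks a document root by inspecting the HTTP Host header.
VHOSTS = {
    "www.example.org": "/www/example2",                    # vhost above
    "www.server.com": "/home/user/apachedocroot/htdocs",   # main server
}
DEFAULT_ROOT = "/home/user/apachedocroot/htdocs"

def document_root_for(host_header: str) -> str:
    """Strip any :port suffix and look up the matching document root."""
    name = host_header.split(":")[0].lower()
    return VHOSTS.get(name, DEFAULT_ROOT)

print(document_root_for("www.example.org:80"))  # /www/example2
```

A request whose Host header matches no declared vhost falls back to the main server's document root, which is also how Apache behaves.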

# IP-based


<VirtualHost 192.168.10.4>
    DocumentRoot /www/example4
    ServerName www.example4.edu
</VirtualHost>

# Declaration of an IP-based virtual host (192.168.10.4 stands in for a second IP address on the server). The Apache web server can listen on more than one IP at a time and is able to differentiate between the requests coming to it; with the above configuration it serves different content on different IPs.


UserDir public_html
UserDir disabled root

<Directory /home/*/public_html>
    AllowOverride FileInfo AuthConfig Limit Indexes
    Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
    Order allow,deny
    Allow from all
</Directory>

# User directory module. Used to specify a directory in each user's home that serves as that user's document root. In the above configuration the public_html directory in each user's home is declared as the user's document root: when www.server.com/~username is requested, the server serves the request from that user's public_html directory.
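The /~username translation described above can be sketched as a small function. This is a simplified model of what mod_userdir does for "UserDir public_html", not its actual implementation:

```python
# UserDir translation: a URL of the form /~username/... maps into the
# user's public_html directory under their home.
def userdir_path(url_path: str, userdir: str = "public_html") -> str:
    assert url_path.startswith("/~"), "only /~username URLs are handled"
    rest = url_path[2:]                    # drop the leading "/~"
    user, _, tail = rest.partition("/")    # split username from the rest
    return f"/home/{user}/{userdir}/{tail}"

print(userdir_path("/~alice/index.html"))  # /home/alice/public_html/index.html
```

The username "alice" is a hypothetical example; the mapping assumes home directories live under /home, as on a typical Linux system.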

Please Note:
The firewall should be configured so that clients can access port 80 of the server.

Network File System


Share files in a Linux network

Software Details:

Operating System: LINUX

Packages required: nfs server and nfs client

Versions used: nfs client 1.1.0-8.i586, nfs server 1.3.2-7.i586

Download source for nfs server package: http://pkgsrc.se/wip/linux-nfs-utils

Scenario :

Server 192.168.0.1

Clients : 192.168.0.2 & 192.168.0.3

CONFIGURING NFS SERVER AND CLIENT:

There are three main configuration files you will need to edit to set up an NFS server: /etc/exports, /etc/hosts.allow, and /etc/hosts.deny.

Strictly speaking, you only need to edit /etc/exports to get NFS to work, but this would lead to an extremely insecure setup.

/etc/exports

The exports file contains a list of entries specifying what is to be shared and how it is to be shared. For an NFS setup this is the most important file.

SERVER SIDE:

Step 1:

Open the file as the root user using the following command:

vi /etc/exports

Make the following entry:

/home 192.168.0.2(rw) 192.168.0.3(ro)

Then save and exit the file.
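The format of an exports entry, a path followed by client(option) pairs, can be illustrated with a small parser (note that NFS option names such as rw and ro are lowercase). This is a sketch for understanding the format, not a replacement for exportfs:

```python
# An /etc/exports line has the form: <path> <client1>(opts) <client2>(opts) ...
# Parse one line into (path, {client: [options]}).
def parse_exports_line(line: str):
    path, *clients = line.split()
    table = {}
    for entry in clients:
        host, _, opts = entry.partition("(")        # "client(rw)" -> host, opts
        table[host] = opts.rstrip(")").split(",") if opts else []
    return path, table

path, table = parse_exports_line("/home 192.168.0.2(rw) 192.168.0.3(ro)")
print(path, table)  # /home {'192.168.0.2': ['rw'], '192.168.0.3': ['ro']}
```

With the entry used in this document, /home is exported read-write to 192.168.0.2 and read-only to 192.168.0.3.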

Step 2:

Start the NFS service on the server machine using the following command as root:

service nfs start

If it is already running, then:

service nfs restart

Step 3:

Check whether the following daemons are running:

portmapper: tells requesting clients how to find all NFS services on the server.

mountd: handles mounting functionality.

nfs: the network file sharing daemon.

Use the command rpcinfo -p

Step 4:

Ensure that firewalls are not running, as this may prevent the clients from accessing the server.

CLIENT SIDE:

Step 1: Start the NFS service using the following command:

service nfs start

Step 2: Check whether the following daemons are running:

portmapper

nfs

At least the portmapper must be running in order for NFS to work.

Use command rpcinfo -p

Step 3: Create a mount point on the client where the NFS directory will be mounted from the server.

e.g. mkdir /nfs

Check for shared files using the following command:

showmount -e serverip

e.g. showmount -e 192.168.0.1

This will show a list of directories or files that are being shared over NFS.

Step 4: Finally, we need to mount the shared directory on the client machine using the following command:

mount <server IP>:<shared directory> <mount point on client>

e.g. mount 192.168.0.1:/home /nfs

Once mounted, all contents of the shared directory will be accessible to the client.

TESTING THE SETUP:

1. Run the rpcinfo -p command on both server and client to check whether all required services for NFS are running.

2. Once setup is done, run the showmount -e command from the client side to see which NFS files/directories are shared.

ADDING SECURITY TO NFS:

The basic setup of NFS does not add any kind of security to the files being shared over the network; thus these files can be accessed by an unwanted person. In order to add security to the above NFS setup, there are two other files that need to be configured:

/etc/hosts.allow and /etc/hosts.deny

These two files specify which computers on the network can use services on your machine. Each line of the file contains a single entry

listing a service and a set of machines. When the server gets a request from a machine, it does the following:

1. It first checks hosts.allow to see if the machine matches a rule listed here. If it does, then the machine is allowed access.

2. If the machine does not match an entry in hosts.allow the server then checks hosts.deny to see if the client matches a rule listed

there. If it does then the machine is denied access.

3. If the client matches no listings in either file, then it is allowed access.
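The three-step decision above can be sketched in Python. The rule representation here is invented for illustration; real tcp_wrappers matching supports richer patterns:

```python
# The hosts.allow / hosts.deny decision, as described in the three steps
# above. Rules are (service, clients) pairs; "ALL" matches any client.
def access_allowed(service, client, allow_rules, deny_rules):
    def matches(rules):
        return any(s == service and (c == "ALL" or client in c)
                   for s, c in rules)
    if matches(allow_rules):   # 1. hosts.allow match -> allowed
        return True
    if matches(deny_rules):    # 2. hosts.deny match -> denied
        return False
    return True                # 3. no match in either file -> allowed

deny = [("portmap", "ALL")]
allow = [("portmap", ["192.168.0.2", "192.168.0.3"])]
print(access_allowed("portmap", "192.168.0.2", allow, deny))  # True
print(access_allowed("portmap", "192.168.0.9", allow, deny))  # False
```

With the deny-all/allow-specific rules shown, only the two listed clients from this document's scenario can reach the portmapper; everyone else falls through to the hosts.deny rule.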

The first step in doing this is to add the following entry to /etc/hosts.deny:

portmap: ALL

By adding the above entry we ensure that the portmapper daemon cannot be accessed by any client other than those specified in /etc/hosts.allow.

Alternatively, we can specify only the IP addresses or hostnames of the clients whose access needs to be restricted.

Next, we need to add an entry to /etc/hosts.allow to grant access to the hosts that should have access. (If we just leave the above line in hosts.deny, then nobody will have access to NFS.) Entries take the form:

service: hostname

e.g. portmap: 192.168.0.2, 192.168.0.3

GNUnify 08

What it is?
GNUnify, an annual technical extravaganza, is organized by the students of the Symbiosis Institute of Computer Studies and Research (SICSR). The name symbolizes GNU/Linux and the philosophy behind Free/Open Source: unifying and strengthening the free/open source movement, and sharing and spreading knowledge with the help of IT. GNUnify, initiated in the year 2003, is an international convergence of open minds who aspire to unfold their knowledge for the benefit of widespread IT, providing a platform for students and IT professionals of free/open software from all over the world. It is an effort to explore the abundant information of a domain which believes in free/open source software and has no bounds. Techie talks, a GNU/Linux install fest, workshops, boot up and a Q&A forum: this festival has it all.
When is it?
GNUnify is scheduled for February 2008
Where is it?

Symbiosis Institute of Computer Studies and Research,
7th Floor, Atur Centre, Gokhale Cross Road,
Model Colony, Pune -16.
About SICSR

Computer Science is at the intellectual forefront of the digital revolution that will define the 21st century. That revolution is in its infancy but is visible all around us. New scientific, economic and social paradigms are arising from computing science and being felt across all sectors of the economy and society at large. The Symbiosis Institute of Computer Studies and Research (SICSR) is a recognized leader in the creation of scientific knowledge and practical technologies that are defining this historic transformation. Our mission is to facilitate ideas that will shape this new frontier. Innovation requires dedication to learning: in the classroom, in the research laboratory, and throughout one's professional career. At SICSR, we offer a unique educational opportunity for students to achieve excellence in both, through rigorous classes and participation in cutting-edge research.
Some of the topics of last year's GNUnify were as follows:
Software Development:
Build Management
Configure and Change Management
Developer tools
Middleware and platforms
Quality Assurance/Testing tools
Web Frameworks
Languages/database

Kernel Development/Embedded Systems:
Writing applications with KDE4
Porting the FreeBSD Kernel
ext4 Development
iSCSI
Puppet
Scalability
Pushing drivers to user-space

Workshops:
Rails
Sahi for web testing/ Selenium
Watir
Eclipse Plug-in Development, and the Eclipse eco-system

Sys Admin
Network security
Network monitoring
Load balancing
Storage / Back up
DHCP
DNS
MAIL
SAMBA


Please feel free to discuss these topics at http://groups.google.com/group/gnunify08
You can also add your own topics there

The Origins of AJAX

Recent examples of AJAX usage include Gmail and Flickr. It is largely due to these and other prominent sites that AJAX has become popular only relatively recently; the technology itself has been available for some time. One precursor was dynamic HTML (DHTML), which twinned HTML with CSS and JavaScript but suffered from cross-browser compatibility issues. The major technical barrier was the lack of a common method for asynchronous data exchange; many variations are possible, such as the use of an "iframe" for data storage or JavaScript Object Notation (JSON) for data transmission, but the wide availability of the XMLHttpRequest object has made it a popular solution. AJAX is not a technology; rather, the term refers to a proposed set of methods using a number of existing technologies. As yet, there is no firm AJAX standard, although the recent establishment of the Open AJAX group, supported by major industry figures such as IBM and Google, suggests that one will become available soon.

 

AJAX applications can benefit both the user and the developer. Web applications can respond much more quickly to many types of user interaction and avoid repeatedly sending unchanged information across the network. Also, because AJAX technologies are open, they are supported in all JavaScript-enabled browsers, regardless of operating system. However, implementation differences of XMLHttpRequest between browsers cause some issues, with some browsers using an ActiveX object and others providing a native implementation. The upcoming W3C 'Document Object Model (DOM) Level 3 Load and Save Specification' provides a standardised solution, but the current solution has become a de facto standard and is therefore likely to be supported in future browsers.

 

Although the techniques within AJAX are relatively mature, the overall approach is still fairly new and there has been criticism of the usability of its applications. One of the major causes for concern is that JavaScript needs to be enabled in the browser for AJAX applications to work. This setting is out of the developer's control, and statistics show that currently 10% of browsers have JavaScript turned off. This is often for accessibility reasons or to avoid scripted viruses.

Process Scheduling

    In multiprogramming systems, when there is more than one ready process, the operating system must decide which one to activate.
    The decision is made by the part of the operating system called the scheduler, using a scheduling algorithm.
    The scheduler is concerned with deciding policy, not with providing a mechanism.
 
The Keywords:

Process: A process is a sequential program in execution. The components of a process are the following:

  1. The Object Program (or code) to be executed.

  2. The data on which the program will execute

  3. Resources required by the program

  4. The status of Program execution


CPU Burst (Usage Time): The amount of time a process needs to be in the running state before it is completed

Turnaround Time: The amount of time between the moment a process first enters the ready state and the moment the process exits the running state for the last time.

Waiting Time: The time the process spends waiting in the ready state before its first transition to the running state.

Time Quantum (time slice): The amount of time between timer interrupts. (Used when the process manager uses an interval timer.)
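The definitions above can be made concrete with a small worked example (the burst times are invented for illustration). Two processes arrive at time 0 and run back to back without preemption:

```python
# Two processes, both entering the ready state at t=0, run to
# completion one after the other (no preemption).
bursts = {"P1": 5, "P2": 3}          # CPU bursts (usage times), hypothetical

start_p2 = bursts["P1"]              # P2 waits in the ready state while P1 runs
waiting = {"P1": 0, "P2": start_p2}  # time spent waiting before first running
turnaround = {p: waiting[p] + bursts[p] for p in bursts}  # wait + burst

print(waiting)     # {'P1': 0, 'P2': 5}
print(turnaround)  # {'P1': 5, 'P2': 8}
```

P2's turnaround time (8) is its waiting time (5, spent ready while P1 runs) plus its own CPU burst (3), exactly as the definitions state.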

Scheduling Policies


The scheduling policy determines when it is time for a process to be removed from the CPU and which ready process should be allocated the CPU next.


Preemptive: A strategy for time-multiplexing the processor whereby a running process is removed from the processor whenever a higher-priority process becomes ready to execute.


Non Preemptive: A strategy for time-multiplexing the processor whereby a process does not release the processor until it has completed its work.


Scheduling Algorithms


First Come First Served: FCFS, also known as First-In-First-Out (FIFO), is the simplest scheduling policy. Arriving jobs are inserted into the tail (rear) of the ready queue and the process to be executed next is removed from the head (front) of the queue.

Shortest Job First: The SJF policy selects the job with the shortest (expected) processing time first. Shorter jobs are always executed before long jobs; long-running jobs may starve, because the CPU has a steady supply of short jobs.

Round Robin: RR reduces the penalty that short jobs suffer with FCFS by preempting running jobs periodically. The CPU suspends the current job when the reserved quantum (time-slice) is exhausted. The job is then put at the end of the ready queue if not yet completed.

Priority: Each process is assigned a priority. The ready list contains an entry for each process, ordered by its priority. The process at the beginning of the list (highest priority) is picked first. A variation of this scheme allows preemption of the current process when a higher-priority process arrives.
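For non-preemptive policies such as FCFS and SJF, the only difference is the order of the ready queue. A minimal sketch (the burst times are invented) shows why SJF yields a lower average waiting time:

```python
# Non-preemptive scheduling: run jobs in queue order and accumulate
# each job's waiting time (time spent ready before it first runs).
def average_waiting_time(bursts):
    clock, total_wait = 0, 0
    for burst in bursts:
        total_wait += clock   # this job waited while earlier jobs ran
        clock += burst        # then it occupies the CPU for its burst
    return total_wait / len(bursts)

jobs = [6, 8, 7, 3]                       # arrival order, all ready at t=0
fcfs = average_waiting_time(jobs)         # FCFS: run in arrival order
sjf = average_waiting_time(sorted(jobs))  # SJF: shortest burst first

print(fcfs)  # (0 + 6 + 14 + 21) / 4 = 10.25
print(sjf)   # (0 + 3 + 9 + 16) / 4 = 7.0
```

Moving the short job (3) to the front means the three longer jobs each wait a little less in total than the one short job would have waited behind them, which is the intuition behind SJF's optimality for average waiting time.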

What is AJAX?

AJAX (Asynchronous JavaScript and XML) is an umbrella term for a collection of Web development technologies used to create interactive Web applications, mostly W3C standards (the XMLHttpRequest specification is developed by WHATWG):

1) XHTML - a stricter, cleaner rendering of HTML into XML

2) CSS for adding styles to the presentation.

3) The Document Object Model (DOM), accessed with JavaScript, which allows the content, structure and style of a document to be dynamically accessed and updated.

4) The XMLHttpRequest object, which exchanges data asynchronously with the Web server, reducing the need to continually fetch resources from the server.

Since data can be sent and retrieved without requiring the user to reload an entire Web page, small amounts of data can be transferred as and when required. Moreover, page elements can be dynamically refreshed at any level of granularity to reflect this. An AJAX application performs in a similar way to local applications residing on a user's machine, resulting in a user experience that may differ from traditional Web browsing.
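Outside the browser, the asynchronous pattern that XMLHttpRequest enables (fire a request in the background, then update the page from a callback) can be sketched in Python. This models only the pattern; real AJAX code uses JavaScript and XMLHttpRequest in the browser, and the URL below is a hypothetical example:

```python
import threading

# Asynchronous request pattern: run the request on a background thread
# and update the "page" from a callback, so the caller never blocks.
# This mirrors what XMLHttpRequest's onreadystatechange handler does.
def fetch_async(url, on_ready):
    def worker():
        data = f"response from {url}"   # stand-in for the network round trip
        on_ready(data)                  # deliver the result to the callback
    t = threading.Thread(target=worker)
    t.start()
    return t

page = {}                               # stands in for the DOM
t = fetch_async("/partial/update", lambda d: page.update(fragment=d))
t.join()                                # only for the demo; real code keeps going
print(page)  # {'fragment': 'response from /partial/update'}
```

The key property is that only the changed fragment is fetched and patched in, rather than reloading the whole page, which is exactly the user-experience gain described above.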