Sunday, November 29, 2009

Conclusion (Summary)

Cluster computing is an emerging answer to several technological dilemmas. The competitive
nature of the business world has established a need for scalable, flexible, and reliable computing
systems. Advanced applications now require computing power not available in a standalone
PC. Due to the rapid influx of new PCs on the market and ever-increasing obsolescence
rates, businesses are left with excess equipment gathering dust or filling landfills. Cluster computing
adds value to businesses while helping to alleviate these problems. The future of high-performance
computing will be found in cluster computing as a reasonably priced substitute for traditional
supercomputers. Preparing students for this ever-changing environment is therefore an important
issue. Today's students will face cluster-computing applications tomorrow;
they need to be prepared for their careers after graduation. Cluster computing has already
begun appearing in university curricula in the areas of computer science and
engineering. The challenge remains to introduce cluster computing into MIS programs in business
schools. Business technology students need to be prepared with hands-on experience when
they enter the job market, and cluster computing is a great opportunity to provide them with a
unique learning experience.
As an experimental design, the authors developed a cluster-computing project at their institution
to expose business students to the process of building and operating a cluster computer for use in
supercomputing applications. The project helped students learn the technologies of distributed
networking, securing networked environments, and designing applications for distributed computing.
A cluster is a resource for teaching various IS concepts through hands-on experience, application
development, and research, but it is not a resource limited to computer science or
business school students. Anyone who needs serious computing power to conduct
research or solve complex problems can use the services of a cluster, including, but not
limited to, chemists, physicists, biomedical scientists, architects, and engineers.

PROJECT MEMBERSHIP AND ROLES

In conclusion of our project, we would like to thank our lecturer Tim for his support and guidance during this project, as well as our fellow students and everybody who helped us during the preparation of the project.

Below is a breakdown of the membership, roles, and hours worked.


WINSTON played the role of general secretary and researcher. His main role was researching hardware requirements for the project. He contributed 54 hours to the project.

HIEU took the role of network designer, but due to health issues he was unable to continue with the rest of the programme. His total hours worked added up to about 18 hours.

RAVI handled server installation and design. His total hours worked added up to 54 hours.

PATRICK was our web designer and also handled web server installation, among other roles. His hours worked added up to about 54 hours.

MICAH joined about a month after the project started, due to family responsibilities, and took on several roles including software and hardware research. His hours added up to about 40 hours.

CHRIS handled software and hardware research. His total hours contributed to the project added up to about 51 hours.


Thanks once again to Tim, our supervisor.

Exchange Server 2007

Microsoft Exchange Server 2007
Price: $1,125.95
Link: http://www.buypcsoft.com/product.asp?ProductID=2288&gclid=CM_phpSg3pwCFc0vpAodYXZcKQ




Exchange Server 2007 Requirements:

Processor:
- x64 architecture-based computer with an Intel processor that supports Intel 64 architecture (formerly known as Intel EM64T), or an AMD processor that supports the AMD64 platform
- Intel Itanium IA64 processors are not supported
- Intel Pentium or compatible 800-megahertz (MHz) or faster 32-bit processor (for testing and training purposes only; not supported in production)

RAM:
- Minimum: 2 gigabytes (GB) of RAM
- Recommended: 2 GB of RAM per server plus 5 megabytes (MB) of RAM per mailbox
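To make the sizing rule concrete, here is a minimal sketch in C that applies the stated formula (2 GB base plus 5 MB per mailbox); the mailbox count is a hypothetical example value:

/* apply the sizing rule above: 2 GB base + 5 MB per mailbox */
#include <stdio.h>

int main(void)
{
    int mailboxes = 1000;                     /* hypothetical mailbox count */
    double ram_mb = 2048.0 + 5.0 * mailboxes; /* rule from the requirements above */
    printf("%d mailboxes -> about %.1f GB of RAM recommended\n",
           mailboxes, ram_mb / 1024.0);
    return 0;
}

For 1,000 mailboxes this works out to roughly 6.9 GB of RAM.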

Domain:
Make sure the domain and DNS are running properly before installing Exchange Server 2007.
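As a quick sanity check that name resolution works, a hedged sketch (the host name is a hypothetical domain controller; run it from a Unix-like workstation on the same network):

/* check that the domain controller's name resolves before proceeding */
#include <stdio.h>
#include <netdb.h>

int main(void)
{
    const char *host = "dc01.example.local"; /* hypothetical DC host name */
    struct hostent *he = gethostbyname(host);
    printf("%s: %s\n", host, he ? "resolves OK" : "DNS lookup failed");
    return 0;
}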




Design Exchange 2007:
http://ptgmedia.pearsoncmg.com/images/9780672329203/samplechapter/0672329204_Chapter_03.pdf

Configure Exchange 2007:
http://www.msexchange.org/tutorials/Installing-Configuring-Testing-Exchange-2007-Cluster-Continuous-Replication-Based-Mailbox-Server-Part1.html

SOFTWARE MAINTENANCE

Students gained good experience with basic server configuration in system administration
classes, but a Beowulf cluster illustrated the demands of managing a LAN. Software configurations
change over time as packages are upgraded, new ones are introduced, old ones are removed,
and existing ones are tweaked. Every machine in a cluster must be able to work with
the other machines, so maintaining the software on a cluster amounts to administrative work multiplied
by 'n' nodes, each of which is potentially dependent on other nodes. Students learned the
skills necessary to identify the needs of the users of the cluster (and the needs of the cluster itself)
and successfully fulfilled those needs.
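As a minimal sketch of what "administrative work multiplied by n nodes" looks like in practice, the loop below pushes one query to every node over ssh; the node host names are hypothetical, and passwordless ssh between nodes is assumed:

/* run the same maintenance query on every node over ssh */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *nodes[] = { "node01", "node02", "node03" }; /* hypothetical host names */
    int n = sizeof nodes / sizeof nodes[0];
    char cmd[256];

    for (int i = 0; i < n; i++) {
        /* ask each node which kernel it runs; version drift means work to do */
        snprintf(cmd, sizeof cmd, "ssh %s uname -r", nodes[i]);
        printf("== %s ==\n", nodes[i]);
        system(cmd);
    }
    return 0;
}

In practice the same loop ends up wrapped around package upgrades, config pushes, and consistency checks, which is why a coherent maintenance plan matters more as the node count grows.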

DHPC CLUSTER PROJECTS

Beowulf clusters are high-performance computers built from off-the-shelf commodity components. They usually consist of a cluster of PCs running Linux and connected by a Fast Ethernet network; however, some clusters use high-end Unix workstations (such as Compaq Alpha or Sun UltraSPARC machines) and/or high-end gigabit networks (such as Myrinet, ServerNet or Giganet).

Beowulfs have become very popular over the past couple of years, due to the rapid improvements in the performance of commodity processors and networking infrastructure, and the development of Linux, a free Unix-like operating system for PCs. For most applications, clusters offer much better price/performance than standard supercomputers such as vector or shared memory machines.

We have been building and experimenting with Beowulf systems for two reasons: to construct cost-effective high-performance computers for use by computational scientists; and to explore some of the many research issues in parallel and cluster computing with Beowulf systems.

The DHPC group has designed and installed two large clusters which are among the most powerful supercomputers in Australia.

DEVELOPING A CLUSTER SERVER

Given the experience of developing this cluster-computing course along with the advantage of
hindsight, the authors propose a blueprint for developing a cluster computer to be used for MIS
instruction and faculty research. Developing a working cluster requires time: time to gather
equipment donations and support, and to develop a working hands-on knowledge of cluster computing
as the project evolves. Rather than suggest that a faculty member gain a working hands-on
knowledge before attempting to teach cluster computing for the first time, the authors suggest a
developmental course of action. By following a developmental blueprint, faculty can begin by
teaching project management or systems analysis and design at first, slowly converting course
curriculum to clustering topics as the project and the cluster are developed over several semesters.
The authors’ recommendation is to plan on spending two semesters developing a stable cluster
computer as a student team project. Then, use the developed cluster to teach cluster computing
beginning in the third semester. This gives students experience in project management and systems
analysis and design, while allowing them to develop a sense of ownership in the project. This plan
also removes the pressure from the faculty members who might otherwise feel compelled to
quickly learn the material and develop a cluster on their own time, before attempting to teach a
course.

SMALL(MINI) CLUSTER

Early supercomputers used parallel processing and distributed computing to link processors together in a single machine. Using freely available tools, it is possible to do the same today using inexpensive PCs - a cluster. Glen Gardner liked the idea, so he built himself a massively parallel Mini-ITX cluster using 12 x 800 MHz nodes.

The machine runs FreeBSD 4.8 and MPICH 1.2.5.2. After working with his machine and running some basic tests, Glen's cluster looks to be equivalent to at least 4 (maybe 6) 2.4 GHz Pentium IV boxes in parallel on a similar network - achieving a performance of around 3.6 GFLOPS. With the exception of the metalwork, power wiring, and power/reset switching, everything is off the shelf. Rather impressive we'd say - though he *is* root on a 1.1 TFLOPS 528-CPU monster, the 106th fastest computer in the world...

The Mini-Cluster

I built a Mini-ITX based massively parallel cluster named PROTEUS. I have 12 nodes using VIA EPIA V8000 800 MHz motherboards. The little machine is running FreeBSD 4.8 and MPICH 1.2.5.2. Troubles installing and configuring FreeBSD and MPICH were few; in fact, there were no major issues with either.
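As a minimal illustration of the kind of program MPICH runs across such a cluster (a generic sketch, not Glen's own code), the classic MPI "hello world" below has each process report its rank:

/* MPI hello world: each process reports its rank in the cluster */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* join the MPI job */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's number */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total processes in the job */
    printf("hello from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with something like mpirun -np 12 a.out, MPICH starts one process per node and each prints its rank.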

The construction is simple and inexpensive. The motherboards were stacked using threaded aluminum standoffs and then mounted on aluminum plates. Two stacks of three motherboards were assembled into each rack. Diagonal stiffeners were fabricated from aluminum angle stock to reduce flexing of the rack assembly.

The controlling node has a 160 GB ATA-133 HDD, and the computational nodes use 340 MB IBM Microdrives in CompactFlash-to-IDE adapters. For file I/O, the computational nodes mount a partition on the controlling node's hard drive by means of a network file system mount point.

Each motherboard is powered by a Morex DC-DC converter, and the entire cluster is powered by a rather large 12V DC switching power supply.
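As a rough sizing check with assumed numbers (the per-board wattage is hypothetical, not from the build): if each EPIA board draws on the order of 15 W, twelve boards need about 12 x 15 = 180 W, or roughly 15 A at 12 V, which explains the need for a rather large 12 V supply.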

With the exception of the metalwork, power wiring, and power/reset switching, everything is off the shelf.

ALL ABOUT CLUSTERING

A computer cluster is a group of linked computers, working together closely so that in many respects they form a single computer. The components of a cluster are commonly, but not always, connected to each other through fast local area networks. Clusters are usually deployed to improve performance and/or availability over that provided by a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.

High-availability (HA) clusters
High-availability clusters (also known as Failover Clusters) are implemented primarily for the purpose of improving the availability of services which the cluster provides. They operate by having redundant nodes, which are then used to provide service when system components fail. The most common size for an HA cluster is two nodes, which is the minimum requirement to provide redundancy. HA cluster implementations attempt to use redundancy of cluster components to eliminate single points of failure.
There are many commercial implementations of high-availability clusters for many operating systems. The Linux-HA project is one commonly used free software HA package for the Linux operating system.
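To make the failover idea concrete, here is a minimal sketch (simulated heartbeats, not a real HA package): the standby watches for heartbeats from the primary and promotes itself after too many consecutive misses:

/* simulate the standby node's heartbeat-timeout failover decision */
#include <stdio.h>

int main(void)
{
    /* hypothetical trace: 1 = heartbeat received this interval, 0 = missed */
    int heartbeat[] = { 1, 1, 1, 0, 0, 0, 1 };
    int intervals = sizeof heartbeat / sizeof heartbeat[0];
    int missed = 0, threshold = 3; /* promote after 3 consecutive misses */

    for (int t = 0; t < intervals; t++) {
        missed = heartbeat[t] ? 0 : missed + 1;
        if (missed >= threshold) {
            printf("t=%d: primary silent for %d intervals -> standby takes over\n",
                   t, missed);
            return 0;
        }
        printf("t=%d: primary %s\n", t, heartbeat[t] ? "alive" : "missed a heartbeat");
    }
    return 0;
}

Real packages such as Linux-HA add fencing and resource takeover on top of this basic detection loop.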
Load-balancing clusters
Load-balancing clusters link multiple computers together to share computational workload or function as a single virtual computer. Although they are physically multiple machines, from the user's side they function as a single virtual machine. Requests initiated by users are managed by, and distributed among, all the standalone computers that form the cluster. The result is balanced computational work among the different machines, improving the performance of the cluster system.
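A minimal sketch of the simplest distribution policy, round-robin (the node names are hypothetical; real load balancers also weigh node load and health):

/* round-robin dispatch: request i goes to node i mod n */
#include <stdio.h>

int main(void)
{
    const char *nodes[] = { "web01", "web02", "web03" }; /* hypothetical nodes */
    int n = sizeof nodes / sizeof nodes[0];

    for (int request = 0; request < 7; request++)
        printf("request %d -> %s\n", request, nodes[request % n]);
    return 0;
}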

Compute clusters
Often clusters are used primarily for computational purposes, rather than handling IO-oriented operations such as web service or databases. For instance, a cluster might support computational simulations of weather or vehicle crashes. The primary distinction within compute clusters is how tightly coupled the individual nodes are. For instance, a single compute job may require frequent communication among nodes - this implies that the cluster shares a dedicated network, is densely located, and probably has homogeneous nodes. This cluster design is usually referred to as a Beowulf cluster. The other extreme is where a compute job uses one or a few nodes and needs little or no inter-node communication. This latter category is sometimes called "grid" computing. Tightly coupled compute clusters are designed for work that might traditionally have been called "supercomputing". Middleware such as MPI (Message Passing Interface) or PVM (Parallel Virtual Machine) permits compute-clustering programs to be portable to a wide variety of clusters.
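To show what MPI-style tightly coupled work looks like, here is a short sketch that estimates pi by splitting a numerical integral across all processes and combining the partial sums with MPI_Reduce:

/* estimate pi = integral of 4/(1+x^2) on [0,1], split across MPI processes */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    long n = 1000000;              /* number of midpoint-rule intervals */
    double h, local = 0.0, pi = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    h = 1.0 / n;
    for (long i = rank; i < n; i += size) {   /* each process takes every size-th slice */
        double x = (i + 0.5) * h;
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    /* combine the partial sums from every node on rank 0 */
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("pi is approximately %.10f\n", pi);

    MPI_Finalize();
    return 0;
}

The single MPI_Reduce call is the only inter-node communication here; real tightly coupled jobs exchange data far more often, which is why network design matters so much in Beowulf clusters.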

MANAGING YOUR PROJECT

1 Define scope of work to achieve individual and group goals
2 Identify stakeholders and decision makers
3 Identify escalation procedures
4 Develop work breakdown structures
5 Evaluate project requirements
6 Identify required resources and budget
7 Estimate time requirements
8 Develop initial project management flow chart
9 Identify interdependencies
10 Identify and track critical milestones
11 Evaluate risks and prepare contingency plan
12 Participate in project phase review and report project status
13 Identify project management software
14 Develop method of evaluation
15 Formulate a task strategy
16 Prioritize tasks according to customer needs
17 Devise plan of action

APPLICATION OF HARDWARE ISSUES

Students were asked not only to take care of system administration but also to manage hardware.
The jobs that students handled were: hardware acquisition and replacement; space, heat, and
power management; and most importantly, network management.
Dealing with this process helped future system administrators develop system design skills. Students
also managed the location of shelving, machines, wires, and peripherals, all within the constraints
of the room available. Because of space constraints, cluster nodes were packed tightly
together, and students even went as far as developing a plan for proper cooling of all of the machines.
Networking is a unique problem with Beowulf clusters. It can be divided into two major, related
areas: cabling and topology. Cabling is a problem because the large number of wires and their
length constrains the space in which the cluster can be placed and limits signal quality. Topology
is a consideration because of its effect on performance. Each computer will use a power, mouse,
keyboard, video and potentially multiple network cables. It is difficult to service a cluster when
there is a cascading mess of wires covering everything on the cluster's backside. The cables
quickly become unmanageable if a coherent labeling and bundling plan is not thought out ahead of
time. Another factor is cable cost. It is cheap and educational for students to create custom cables
from a box of cable parts and crimping tools.

NET DESIGN CONCEPTS

1. Identify customer/organisational requirements
2. Conduct needs analysis
3. Identify power consumption requirements
4. Identify the importance of UPS
5. Identify physical requirements for network implementation
6. Identify system requirements for network implementation, including specialised servers
7. Identify application requirements for network implementation

Windows Cluster Information

Demand for cost-effective, high-performance computing is increasingly being addressed by clustering solutions.

The most cost-effective manufacturer today is Dell, which delivers affordable cluster solutions.