Blog Archive

Thursday, December 20, 2012

The Truth About How Liu Yiting Got Into Harvard



http://edu.sina.com.cn/l/2004-11-12/ba91110.shtml

  Precisely because American universities have so many loopholes, some students who could not even get into an ordinary university at home, and who possess nothing like "unusual qualities and comprehensive abilities," have nonetheless marched grandly into elite American schools. All they had to do was package themselves as applicants with "unusual qualities and comprehensive abilities." This is essentially no different from the packaging strategies used for singers, ghostwriters and actors. One could even say that if you are not good at packaging, then even if you genuinely possess "unusual qualities and comprehensive abilities," you may still fail to get into an American university.


The secret of success, as summarized by many who have successfully studied abroad, is that applicants must do everything they can to exploit the loopholes, politely called the "rules of the game," of American universities, and package themselves as applicants with "unusual qualities and comprehensive abilities."
  Now imagine that the parents of such students all imitated Liu Yiting's parents and wrote their own "Harvard Boy," "MIT Girl," "Yale Boy" or "Princeton Girl" books. I believe they would be even more popular, because these students may well have been especially playful, not particularly fond of studying, with unimpressive grades, which makes them look even more like products of "quality education." One day our bookstores may carry a batch of so-called quality-education chronicles with titles like My Little Rascal Goes to Harvard, My Little Troublemaker Goes to Yale, or You Don't Have to Be Smart to Get Into Harvard.
  What stands opposite the fame and fortune of the people who concoct these "quality-education chronicles" are the parents who are fooled without knowing it, and the innocent children who are misled and harmed. That is why I have always believed that helping these parents and students understand the truth about how these boys and girls get into elite American schools is a worthwhile undertaking.
  Since some Chinese students can exploit the loopholes in the American admissions system and stride into elite American schools, do we not have reason to suspect that the key to Liu Yiting's admission to Harvard was not the "unusual qualities and comprehensive abilities" her parents advertised, but rather her seizing on the loopholes in American universities' admissions in China?
  In what follows, I will use the hard and soft requirements of elite American admissions as a guide to examine what actually got Liu Yiting into Harvard.
  SAT
  The United States has no nationwide unified college entrance examination. The Scholastic Assessment Test (SAT), administered by ETS, is used by most universities as a standard for comparing students across different regions, high schools and grading systems. Since the SAT is the only yardstick available to American universities for comparing the grades of students from different regions and schools, it carries enormous weight in admissions decisions.
  Liu Yiting did not take this important test. This was arguably a major weakness in her application.
  TOEFL
  A TOEFL score does not count for much with American universities; it merely dispels the school's doubts about your ability to communicate in English. You simply need to clear a certain bar. Liu Yiting's 640, for example, is such a bar.
  School grades (GPA)
  As for GPA, it is of limited use for high-school applicants, mainly because grading standards vary from school to school, making it a poor basis for comparing students side by side. The physics department at Princeton, for example, has stated explicitly that GPA is unimportant and that motivation and interest in physics matter most.
  Extracurricular activities
  For American universities this item matters a great deal, because it reveals many facets of a student. American high-school students have no fewer than forty or fifty kinds of extracurricular activities, which can be grouped into academic, recreational, athletic and community activities. They are varied and rich indeed.
  Academic extracurriculars include student societies for the natural sciences, mathematics, computers, writing, editing and debate; recreational ones include drama clubs, choirs, bands, dance troupes, photography clubs, bridge clubs, Future Farmers chapters and young-entrepreneur clubs; athletic ones include the various varsity teams, gymnastics squads, cheerleading squads and so on. Every activity has a faculty adviser.
  American schools treat extracurricular activities as an important means of helping students grow their talents and adapt to society and life, and assess them regularly, believing that such activities reveal a student's competitiveness, sense of responsibility, leadership and interpersonal skills. According to researchers, students who excel in extracurricular activities are likely to become outstanding figures in academia or politics. British universities now also count extracurricular performance among their admissions criteria; some elite universities even weight it at 25% of the total evaluation. Universities compete to admit students with strong grades and outstanding extracurricular records.
  (Quoted from "American High School Students' Wide Variety of Extracurricular Activities," by Chen Yue, on the China Primary and Secondary School Information Technology Education website)
  So how did Liu Yiting fare in extracurricular activities? She wrote in her diary:
  I have calculated that we spend more than nine hours a day studying in the classroom, and sometimes we also study during meal breaks, between classes, even in the dormitory. More than half of our time each day is buried in books. The teachers may shake their heads and say I am exaggerating, but I am not. Each teacher assigns only a moderate amount of homework, and indeed no single subject is too much, but seven subjects together can crush the breath out of you. Besides the daily homework there are weekly journals, workbooks, problem sets, summaries and more. Even on a day with little assigned homework, we dare not indulge in play, because a pile of routine assignments hangs over us like an invisible chain. I once tried setting aside half an hour a day to play table tennis, and suddenly there was not enough time; I often had to work until the end of evening study hall, with no time at all to review or prepare.
  (Quoted from Liu Yiting's Study Methods and Upbringing Details, p. 283)
  Can we expect a student who cannot even spare time for table tennis to have any outstanding record in extracurricular activities?
  Competitions
  In the view of some study-abroad experts, competition results are essentially useless unless they come from international competitions. China produces dozens of international-competition medalists every year, and on this front Liu Yiting's competitiveness clearly fell short.
  Artistic talents
  An applicant's artistic work, whether visual art, dance, drama or music, can be evaluated by the university's faculty, who then decide whether the talent merits special consideration in the admissions review. Liu Yiting was plainly lacking here; I trust she would not flaunt her stint as an extra in the TV series 《苍天在上》 as an artistic talent.
  Personal statement (PS)
  I have read Liu Yiting's PS carefully. It is rather hollow, and its reasoning contains fairly obvious errors. I will discuss this in detail in a later chapter.
  Recommendation letters
  Some study-abroad consultants also hold that a recommendation from a famous person connected to the applicant is enormously effective, sometimes even decisive. This was arguably the part of Liu Yiting's application most likely to impress the admissions committee.
  Liu Yiting's recommender was Larry Simms (拉瑞·席慕思). In Harvard Girl Liu Yiting, the authors introduce this "legendary figure" in tones of extraordinary admiration:
  Larry graduated from the law school of Dartmouth College, one of America's famous Ivy League schools. With exceptional talent and hard work, he quickly made a name for himself in American legal circles. From 1974 to 1975 he served as an assistant to a Justice of the U.S. Supreme Court. From 1976 to 1985 he held the important post of assistant to the Attorney General in the U.S. Department of Justice. After 1985, Larry left public office to practice law full-time, with great success. He is not only an outstanding lawyer but also chairman of the China Law Committee of the American Bar Association, as well as a senior partner (one of the owners) of "格信," the world's sixth-largest law firm.
  (Quoted from Harvard Girl Liu Yiting, expanded edition, p. 322)
  I imagine that after reading Larry's résumé, any Chinese applicant would envy Liu Yiting's luck: with a man of such standing as her recommender, she could plausibly be admitted even with otherwise mediocre credentials. What is more, Larry had connections at certain elite American schools; before recommending Liu Yiting he had already recommended two Chinese students, and both applications succeeded.
  One detail in Harvard Girl Liu Yiting caught my attention.
  In June 1998, while Yiting was busy with her high-school graduation exams, she received an e-mail from Larry. In his usual concise way, he came straight to the point:
  "Amy, I have good news: I've learned that Columbia and Wellesley both offer full scholarships set aside for Chinese students, though of course they take only the very best. I wonder if you're willing to take up the challenge: apply directly to an American university for undergraduate study?"
  (Quoted from Harvard Girl Liu Yiting, expanded edition, p. 359)
  Columbia University and Wellesley College, the two schools Larry mentions in this e-mail, are precisely the ones Liu Yiting visited on her middle-school trip to America; evidently the two schools very likely maintained good relations with Larry. The eventual results seem to bear this out. We know that Liu Yiting applied to eleven schools in all and was admitted by only four, and among those four were Columbia University and Wellesley College. The weight of Larry's recommendation is plain to see.
  The interview
  The interview matters; at least that is what Harvard says publicly. In Harvard Girl Liu Yiting, the authors likewise spare no ink laying out its importance:
  Many American universities stress repeatedly in their admissions guides: "interview and campus visit recommended," "interview strongly recommended," and so on. To an experienced admissions officer, a half-hour interview can sometimes reveal more than dozens of pages of documents.
  The interviewer is the person who speaks directly with the applicant at each interview, and who draws conclusions face to face from the living, breathing applicant. Interviewers are the extended eyes and ears of the admissions committee, and their influence on whether you are admitted cannot be ignored. So if you are granted an interview when applying to study abroad, take it seriously!
  Who conducts these interviews? If the applicant visits the campus in person, the interviewer is naturally an officer of the school's admissions office. But in many cases the interview does not take place on campus, or even on American soil. If applicants are concentrated in one area, an American university may send a few staff members out on a circuit; but when applicants are scattered across the globe, sending staff becomes impractical. Many American universities have therefore developed a tradition of using their own alumni as interviewers. These graduates know the school's admissions requirements and feel deep affection for their alma mater, so they generally carry out its mission faithfully and are indeed well suited to the role. It goes without saying that objectivity and fairness are an indispensable precondition.
  (Quoted from Harvard Girl Liu Yiting, p. 380)
  Yet this all-important interview was conducted by Harvard's admissions committee in the most slapdash fashion. One can say that the interview of Liu Yiting grossly violated the principles of objectivity and fairness that Harvard's admissions committee has always professed.
  And what Harvard's admissions committee never anticipated was that the people who exposed the affair were the beneficiaries themselves: Liu Yiting's family. Harvard Girl Liu Yiting lays out the whole sequence of events:
  One day in early February, an e-mail arrived out of the blue from the Harvard admissions office. It apologetically informed Yiting that they could not find a Harvard graduate in Chengdu to conduct the interview, asked whether she could travel to Shanghai or Beijing for it, and requested a supplementary essay that would let the admissions committee gauge her academic level.
  The e-mail left us surprised and delighted. Evidently Yiting had already caught the interest of the Harvard admissions office in the initial screening!
  We immediately forwarded the message to Larry, whose reaction was even more excited than ours. He sprang into action, asking the Americans he knew in Beijing and Chengdu to help Yiting track down a Harvard graduate working in southwestern China.
  We, too, called on friends and relatives everywhere, hoping to find a Harvard graduate in a nearby city to serve as interviewer.
  Just as our long-distance calls were beginning to show promise, an e-mail arrived from Larry with thrilling news.
  "I've found a Harvard graduate…"
  We raced eagerly through the rest: ha! This Harvard graduate was right in Chengdu, an American working there, found by Larry's good friend, the warm-hearted Bob. What a coincidence, how wonderful!
  The Harvard graduate Larry found was right there in Chengdu: Joe, who worked in news and culture.
  (Quoted from Harvard Girl Liu Yiting, expanded edition, p. 383)
  An interviewer is, in a sense, an examiner, and a key requirement of an examiner is objectivity and fairness. Note, readers: the key figure who secured this interviewer was not the Harvard admissions committee but Liu Yiting's recommender, Larry.
  Common sense alone tells us that a recommender hunting down the examiner for his own candidate is procedurally illegitimate.
  We then see that Larry not only ran around to find this examiner for Liu Yiting but personally sounded him out, and promptly sent his particulars and contact address to the Harvard admissions office, which in turn mailed him all the materials needed for the interview with the utmost speed. The Harvard admissions committee certainly gave Larry plenty of face.
  Watching Larry bustle about like this, do we not have grounds to suspect that the interview Joe, this Harvard graduate, conducted with Liu Yiting would have no objectivity whatsoever?
  Choosing schools
  Liu Yiting chose eleven universities, which naturally spanned first-rate and second-rate schools, since one must spread one's bets. According to later reports, going by the admission letters, four universities ultimately admitted Liu Yiting; in other words, seven rejected her.
  Some readers may wonder: why would Liu Yiting be admitted by top-ranked Harvard yet rejected by lower-ranked second- or even third-tier schools?
  In my view, the decisive factor in Liu Yiting's admission to Harvard was her recommender, Larry. As noted above, the examiner who interviewed Liu Yiting in Chengdu was settled on after Larry's contacts with Harvard. So we may venture a bold conjecture: Larry has good connections at Harvard, or, put differently, high standing with Harvard's admissions committee.
  The second- and third-tier American universities, on the other hand, may not have known Larry at all, which is to say his recommendation counted for nothing with them; and since Liu Yiting's record was otherwise unremarkable, she simply failed to arouse their interest.
  As mentioned earlier, four universities ultimately admitted Liu Yiting; besides Harvard, they included Columbia University and Wellesley College. And we know that Larry enjoyed a measure of standing at those two schools as well.
  By now we can see that, through the efforts of her recommender Larry, Liu Yiting was admitted to Harvard and other universities despite having no artistic talent, no competition medals, no SAT score, scant extracurricular activities, no remarkable experiences, and little else to show. We have reason to believe that the biggest factor in her walk into Harvard was her acquaintance with Larry, a recommender of such clout and such eager helpfulness.
  Perhaps the passage above will alert readers to this: even if you really do clone a second or third Liu Yiting by the "Liu Yiting training model," getting her into Harvard still requires two conditions.
  First, no stronger rival may be competing with you for those one or two slots, just as in Liu Yiting's day. But that era is gone for good: with Harvard Girl Liu Yiting as a primer, competition among Chinese high-school students for American undergraduate places has grown ever fiercer.
  Second, you must be lucky enough to know Larry, or a recommender with even more clout.

Saturday, December 1, 2012

Building Your Own Super Computer

http://www.webstreet.com/super_computer.htm

  
Building Your Own Super Computer

Why pay $10 million for a supercomputer when this article can show you how to build your own supercomputer cluster from just a handful of Windows/Linux PCs...

The special effects crew for James Cameron's Titanic couldn't afford a supercomputer for the critical rendering work, and anything less would have taken far too long.
Like all high-end animation and special effects houses, the Titanic team had a slew of SGI Indigo workstations (as well as a pile of new Windows NT workstations for the low-end jobs), but Titanic's romance and tragedy were far more demanding than most projects.
A much greater degree of realism was required than for the typical science-fiction epic, and realism is expensive. Rendering the water scenes was obviously a job for a supercomputer, but with Titanic already far over budget, a $10,000,000 computer wasn't realistic.
The performance problem was solved by assembling DEC Alpha based computers into a Linux cluster, an instant supercomputer at a small fraction of the cost, which produced a large number of extraordinarily challenging visual effects for this demanding film.
In this article, although a bit off topic, I will discuss how to build a generic Linux or Windows supercomputer using the clustered computing concept, and you will see just how easy it is. We will limit our discussion to building the clusters themselves; solving computationally intensive algorithmic problems, and coding such algorithms for a cluster architecture, are beyond the scope of this article.
 
 
  
Building Your Own Super Computer - Definitions and Benefits of Clustering

Greg Pfister, in his wonderful book In Search of Clusters, defines a cluster as "a type of parallel or distributed system that: consists of a collection of interconnected whole computers, and is used as a single, unified computing resource".
A cluster, then, is a group of computers bound together into a common resource pool. A given task can be executed on all computers or on any specific computer in the cluster. Let's look at the benefits of clustering:
  • Scientific applications: Enterprises running scientific applications on supercomputers can benefit from migrating to a more cost-effective Linux cluster.
  • Large ISPs and e-commerce enterprises: Internet service providers and e-commerce web sites with large databases that require high availability, load balancing and scalability.
  • Graphics rendering and animation: A Linux cluster has become important in the film industry for rendering quality graphics. In the movie Titanic, a Linux cluster was used to render the background in the ocean scenes. The same concept was used in the movies True Lies and Interview with the Vampire.
We can also characterize clusters by their function:
  • Distributed processing cluster: Tasks (small pieces of executable code) are broken down and worked on by many small systems rather than one large system, often deployed for tasks previously handled by supercomputers. This type of cluster is well suited to scientific or financial analysis.
  • Fail-over clusters: Clusters are used to increase the availability and serviceability of network services. When an application or server fails, its services are migrated to another system, along with the identity of the failed system. Fail-over servers are used for database servers, mail servers or file servers.
  • High availability load balancing clusters: A given application can run on all computers, and a given computer can host multiple applications. The "outside world" interacts with the cluster while individual computers are "hidden". This model supports large cluster pools, and applications do not need to be specialized. High availability clustering works best with stateless applications and those that can run concurrently.

Building Your Own Super Computer - Building Windows Clusters

Hardware
Before starting, you should have the following hardware and software:

  • At least two computers running Windows XP, Windows NT 4.0 (SP6) or Windows 2000, networked with some sort of LAN equipment (hub, switch, etc.).
  • Ensure during the Windows setup phase that TCP/IP and NetBEUI are installed, and that the network is started with all network cards detected and the correct drivers installed.
We will call these two computers a Windows cluster. You now need some sort of software that will help you develop, deploy and execute applications over this cluster. This software is the core of what makes a Windows cluster possible.
Software
The Message Passing Interface (MPI) is an evolving de facto standard for supporting clustered computing based on message passing. There are several implementations of this standard.

In this article, we will use mpich2 for Windows clustering; it is freely available, along with its documentation, from the MPICH project site. Please read the bundled PDF documentation before starting the following steps.
Step 1: Download and unzip mpich2 into any folder and share this folder with write permission.
Step 2: Copy all files with the .dll extension from C:\MPICH2\lib to the C:\Windows\system32 folder.
Step 3: Install the Cluster Manager Service on each host you want to use for remote execution of MPI processes. For installation, start rcluma-install.bat (located in the C:\MPICH2\bin directory) by double-clicking from the local or network-drive. You must have administrator rights on the hosts to install this service.
Step 4: Follow steps 1 and 2 on each node in the cluster (each computer in the cluster is called a node).
Step 5: Now start RexecShell (in the C:\MPICH2\bin folder) by double-clicking it:


Open the configuration dialog by pressing F2. The distribution contains a precompiled example MPI program named cpi.exe (located in MPICH2/bin). Choose it as the program to run, and make sure that each host can reach cpi.exe at the specified path.
Choose ch_wsock as the active plug-in and select the hosts to compute on. On the 'Account' tab, enter your username, domain and password, which must be valid on each chosen host. Press OK to confirm your selections. The Start button (in the RexecShell window) is now enabled and can be pressed to start cpi.exe on all chosen hosts. The output will be displayed in separate windows.
Congratulations -- your supercomputer (Windows cluster) is ready to run MPI programs!
 
Building Your Own Super Computer - Building a Linux Cluster

Linux clusters are generally more common, robust, efficient and cost-effective than Windows clusters. We will now look at the steps involved in building a Linux cluster.
Step 1
Install a Linux distribution (I am using Red Hat 7.1 and working with two Linux boxes) on each computer in your cluster. During installation, assign each node a hostname and, of course, a unique IP address.

Usually, one node is designated as the master node (where you'll control the cluster, write and run programs, etc.), with all the other nodes used as computational slaves. We name one of our nodes Master and the other Slave.
Our cluster is private, so theoretically we could assign any valid IP address to our nodes as long as each is unique. I have used 192.168.0.190 for the master node and 192.168.0.191 for the slave node.
If you already have Linux installed on each node in your cluster, you don't have to change your IP addresses or hostnames unless you want to. Any changes can be made with your network configuration program (Linuxconf on Red Hat).
Finally, create identical user accounts on each node. In our case, we create the user DevArticle on each node in our cluster. You can either create the identical accounts during installation, or use the adduser command as root.
Step 2
We now need to configure rsh on each node in our cluster. Create .rhosts files in the home directories of both the user and root. Our .rhosts files for the DevArticle users are as follows:

Master DevArticle
Slave DevArticle

Moreover, the .rhosts files for root users are as follows:
Master root
Slave root

Next, we create a hosts file in the /etc directory. Below is our hosts file for Master (the master node):
192.168.0.190   Master.home.net Master
127.0.0.1       localhost
192.168.0.191   Slave

Step 3
Do not remove the 127.0.0.1 localhost line from the hosts file above. Next, modify the hosts.allow file on each node so that ALL+ is the only line in the file. This allows any node to connect to any other node in our private cluster. To allow root users to use rsh, I also had to add the following lines to the /etc/securetty file:

rsh
rlogin
rexec
pts/0
pts/1

Also, I modified the /etc/pam.d/rsh file:
#%PAM-1.0
# For root login to succeed here with pam_securetty, "rsh" must be
# listed in /etc/securetty.
auth       sufficient   /lib/security/pam_nologin.so
auth       optional     /lib/security/pam_securetty.so
auth       sufficient   /lib/security/pam_env.so
auth       sufficient   /lib/security/pam_rhosts_auth.so
account  sufficient   /lib/security/pam_stack.so service=system-auth
session   sufficient   /lib/security/pam_stack.so service=system-auth

Step 4
Rsh, rlogin, telnet and rexec are disabled in Red Hat 7.1 by default. To change this, navigate to the /etc/xinetd.d directory and modify each of the service files (rsh, rlogin, telnet and rexec), changing the disable = yes line to disable = no.

Once the changes were made to each file (and saved), I closed the editor and restarted the xinetd service (/etc/rc.d/init.d/xinetd restart) to enable rsh, rlogin, etc.
Step 5
Next, download the latest version of MPICH (the "UNIX all flavors" package) to the master node from the MPICH web site. Untar the file in either the common user's home directory (that of the identical user, DevArticle, established on all nodes of our cluster) or in the root directory (if you want to run the cluster as root).
Issue the following command:

tar zxfv mpich.tar.gz
Change into the newly created mpich-1.2.2.3 directory. Type ./configure, and when the configuration is complete and you have a command prompt, type make.
The make may take a few minutes, depending on the speed of your master computer. Once make has finished, add the mpich-1.2.2.3/bin and mpich-1.2.2.3/util directories to your PATH in .bash_profile or however you set your path environment statement.
The full root paths for the MPICH bin and util directories on our master node are /root/mpich-1.2.2.3/util and /root/mpich-1.2.2.3/bin. For the DevArticle user on our cluster, /root is replaced with /home/DevArticle in the path statements. Log out and then log in to enable the modified PATH containing your MPICH directories.
Step 6
Next, make all of the example files and the MPE graphic files. First, navigate to the mpich-1.2.2.3/examples/basic directory and type make to make all the basic example files.

When this process has finished, you might as well change to the mpich-1.2.2.3/mpe/contrib directory and make some additional MPE example files, especially if you want to view graphics.
Within the mpe/contrib directory, you should see several subdirectories. The one we will be interested in is the mandel directory. Change into the mandel directory and type make to create the pmandel exec file. You are now ready to test your cluster.
 
Building Your Own Super Computer - Testing Your Linux Cluster

The first program we will run is cpilog. From within the mpich-1.2.2.3/examples/basic directory, copy the cpilog executable (if it isn't present, run make again) to your top-level directory. On our cluster, this is either /root (if we are logged in as root) or /home/DevArticle (if we are logged in as DevArticle; we have installed MPICH in both places).
Next, from your top directory, rcp the cpilog file to each node in your cluster, placing it in the corresponding directory on each node. For example, if I am logged in as DevArticle on the master node, I'll issue rcp cpilog Slave:/home/DevArticle to copy cpilog to the DevArticle home directory on Slave. Do the same for each node (if there are more than two). To run a program as root instead, copy the cpilog file to the root directories of all nodes in the cluster.
Congratulations -- your supercomputer (Linux cluster) is now ready to run MPI programs. Let's verify that.
Once the files have been copied, I'll type the following from the top directory of my master node to test my cluster:
mpirun -np 1 cpilog
This runs the cpilog program on the master node to check that the program works correctly. Some MPI programs require at least two processors (-np 2), but cpilog will work with only one. The output looks like this:
pi is approximately 3.1415926535899406,
Error is 0.0000000000001474
Process 0 is running on Master.home.net
wall clock time = 0.360909

Now try both nodes (or however many you want) by typing mpirun -np 2 cpilog, and you'll see something like this:
pi is approximately 3.1415926535899406,
Error is 0.0000000000001474
Process 0 is running on Master.home.net
Process 1 is running on Slave.home.net
wall clock time = 0.0611228

The number following the -np parameter is the number of processors (nodes) to use when running your program. It may not exceed the number of machines listed in your machines.LINUX file plus one, since the master node is not itself listed in the machines.LINUX file.
To see some graphics, we must run the pmandel program. Copy the pmandel exec file (from the mpich-1.2.2.3/mpe/contrib/mandel directory) to your top-level directory and then to each node (as you did for cpilog). Then, if X isn't already running, issue a startx command. From a command console, type xhost + to allow any node to use your X display, and then set your DISPLAY variable as follows:
DISPLAY=Server:0 (be sure to replace Server with the hostname of your master node). Setting the DISPLAY variable directs all graphics output to your master node. Run pmandel by typing: mpirun -np 2 pmandel
The pmandel program requires at least two processors to run correctly. You should see the Mandelbrot set rendered on your master node.

Adding more processors (mpirun -np 10 pmandel) should increase the rendering speed dramatically. The Mandelbrot set graphic is partitioned into small rectangles that are rendered by the individual nodes. You can actually see the nodes working as the rectangles fill in; if one node is a bit slow, its rectangles will be the last to finish. It is quite fascinating to watch.

This article was not written by Web Street. One of our customers found it in a news room. We tested it and found it credible. We now wish to share it with you. We take no responsibility, credit, fee or referral from this article.