Saturday, March 29, 2008
Guitar Training, Session 3-01, 2008-03-29
1. Metronome tempo: 192, 108, 224; repetitions: 2
II. Carcassi, 25 Etudes, No. 3
1. Metronome tempo: 160, 176, 192; repetitions: 2
III. Carcassi, 25 Etudes, No. 7
1. Metronome tempo: 132, 144; repetitions: 3
IV. Carcassi, 25 Etudes, No. 19
1. Metronome tempo: 104, 112; repetitions: 3
2. a finger free stroke (tirando), p finger rest stroke (apoyando)
V. Carcassi, 25 Etudes, No. 13
1. Metronome tempo: 66, 72; repetitions: 3
VI. Guitar etude: Fernando Sor, Op. 31 No. 20, first 4 phrases (measures 1-16.5); metronome tempo: 96, 104; repetitions: 3
VII. Basic exercises: slurs, practiced on strings 5, 4, 3, and 2
1. Four-finger combination exercises (hammer-on then pull-off, pull-off then hammer-on), with metronome at 120, 132, 144, 160, 176, 192
VIII. Finger exercises for both hands
Curl the fingers against slight resistance, then lift them; make sure the motion comes from the large knuckle joints.
1. Independent single-finger exercises for each hand
2. Two-finger coordination exercises; each movement pattern: 123, 321
Tuesday, March 11, 2008
Giuliani: Guitar Concertos Nos. 1 & 3
Artist: Pepe Romero
Release date:
Genre: classical guitar
Label: Philips
Album notes: Mauro Giuliani (1781-1827) has been hailed as the greatest guitar musician who ever lived. In the early nineteenth century, after the six-string guitar appeared, the new instrument set off a powerful craze. At the center of this wave, a generation of new masters emerged in Spain and Italy; among these earliest professional guitar performers and composers, Sor, Aguado, Carulli, and Carcassi were all standouts, but the name Giuliani shone brightest. He received his guitar training in Italy in his early years, but because all of Italy was then consumed by a craze for opera, in 1806 Giuliani set out for Vienna with his ambitions in hand. Vienna was then in its late-Classical period and was, in the minds of Western musicians, a glittering musical paradise: Haydn was still alive, and Beethoven had already written the Eroica Symphony and the Appassionata Sonata.
In Vienna, Giuliani's astonishing talent was fully unleashed. It ignited an enthusiasm for the guitar and caused a sensation throughout the musical world, making Giuliani a pivotal figure in the history of the instrument. In his golden years, he meant to guitar music what Liszt or Chopin would mean to the piano. In the midsummer of 1808, Giuliani's Grand Concerto (Op. 30) premiered in Vienna; from then on his name was etched into Viennese memory, and even the harsh critics of the day had to concede that guitar music could take richer forms than mere accompaniment. The premiere was a genuinely epoch-making event in music history: it was the world's first guitar concerto, and its influence was incalculable. The guitar subsequently swept Vienna and Giuliani's reputation grew by the day; his influence on the classical guitar can still be seen everywhere today. He went on to write two more guitar concertos (Op. 36 and Op. 70), along with a large body of chamber and solo works for guitar. As the greatest living guitar musician, he soon enjoyed worldwide fame. Even after he gave up his comfortable life in Vienna and returned to Italy, royal and noble patrons supported him in both Rome and Naples. Ten years after his death, a group of admirers in London even named a guitar journal after him in his memory.
The first movement of the Guitar Concerto No. 1 in A major (Op. 30) opens in the characteristic style of the Classical concerto: the orchestra first states the main theme, a passage inspired by Italian opera, vocal in character and full of passion. In the guitar solo that follows, abundant florid passages and richly expressive devices elaborate this theme. A short, concise minor-key passage carries the movement into sonatina form, and a terse restatement of the theme rounds the movement out. The second movement, an Andantino, adopts the siciliana, and the distinctive lyrical temperament of the Italians is on full display. In this slightly melancholy movement the guitar presents the theme in alternating aria-like statements, but Giuliani does not aim for ambiguity: he keeps the whole movement in a warm, dawn-like tone, returning to the main theme of the introduction as the movement closes. The finale is a rondo in the style of a polonaise, with the lively rhythmic character typical of that dance, so popular in the 18th and 19th centuries; here one can see Giuliani persistently searching for rhythmic material suited to the guitar. A brilliant, almost virtuosic cadenza leads into the two returns of the theme, and the concerto reaches its close in a triumphant dialogue between guitar and orchestra.
The Guitar Concerto No. 3 in F major (Op. 70) was written in 1816 or slightly earlier. Giuliani composed it for an instrument called the terz guitar, pitched a third higher than the ordinary classical guitar; to recreate the original timbre, modern guitarists use a capo when performing it. This third concerto shows a notable advance over his first effort. The maturity is easy to hear in the first movement: a light, varied main theme, a well-judged rapport with the orchestra, more complex harmony, and a confident command of form. Although the first movement contains many solo passages for the guitar, there are not many cadenzas, a choice the composer made for the sake of continuity and a more unified tutti. The second movement contains three very beautiful siciliana variations: in the transition from the theme to the first variation, the guitar solo contrasts sharply with the full orchestra; the second variation turns to the minor, the gentle, unhurried guitar movingly set against the orchestra; and a fluid, lively rhythm characterizes the final variation, the movement ending beautifully in a spirited dialogue between soloist and orchestra. As in the first concerto, the finale is in the style of a polonaise. The guitar solo and the orchestra present the main theme in turn, and the transitions between orchestral tuttis and the master's solo episodes are seamless. The whole concerto comes to a close with a quickening tempo and a stirring coda. (Translated from the material included with the record; for reference only.)
Pepe Romero is the most distinguished member of the Romero guitar dynasty. Both the effortless technique he displays across the guitar's expressive range and his compellingly persuasive interpretations leave listeners in awe. In the world of classical guitar, his name stands for a level of achievement and an era. His contributions to the classical guitar have inspired several famous composers to write for him, and many of the world's leading conductors and orchestras have worked with him, including the conductor on this record, Sir Neville Marriner, and the Academy of St Martin in the Fields. This record has never left the shelves: the sparks struck between the guitar, Pepe, and Giuliani, together with its Penguin Guide three-stars-with-rosette rating, seem ample proof that the affection for it is well founded.

01.Guitar Concerto No.1 in A,Op.30- Allegro maestoso
02.Guitar Concerto No.1 in A,Op.30- Andantino(Siciliano)
03.Guitar Concerto No.1 in A,Op.30- Polonaise(Allegretto)
04.Guitar Concerto No.3 in F,Op.70- Allegro moderato
05.Guitar Concerto No.3 in F,Op.70- Andantino alla Siciliana con variazioni
06.Guitar Concerto No.3 in F,Op.70- Polonaise(Allegretto)
Wednesday, March 5, 2008
Benchmarking Research Performance in Department of Computer Science,
School of Computing, National University of Singapore
Philip M. Long Tan Kian Lee Joxan Jaffar
April 12, 1999
In April 1999, the Department of Computer Science at the National University of Singapore conducted a study to benchmark its research performance. The study shows, using publication counts alone, that NUS would rank between 21st and 28th among a list of the top 70 CS departments in the US. In this article, we present the methodology adopted and report our findings.
1. Background
As part of its self-assessment effort, the Department of Computer Science at the National University of Singapore conducted a study to benchmark its research performance. The study used publication statistics to estimate where it would have been placed in an authoritative ranking of CS departments.
We chose to use statistics of conference publications instead of journal publications because, in computer science, conferences are the primary means of communicating research results; they are refereed, and some are very selective. We used papers published from 1995-1997; we stopped at 1997 so that the proceedings from the most recent year would be likely to be available in the library. Prior to this exercise, our department had divided conferences into three categories based on their prestige level: rank 1 (the most prestigious conferences), rank 2, and rank 3. Since we felt publications in rank 1 and rank 2 conferences are far more relevant to the standing of a department, and to save on data collection costs, we omitted conferences of rank 3 from consideration. In fact, a few of the proceedings were not available in the library: in our study we used the 109 conferences of rank 2 and above whose proceedings were available. We divided the rank 2 conferences into two groups, picking out a small collection of the better rank 2 conferences, which we will refer to as rank 2A conferences, and the remainder as rank 2B conferences. This was done by consulting faculty in different areas and asking their opinions: they could support their case for a conference using the usual arguments, such as a small acceptance ratio, publication of prominent results, or participation by famous researchers in the conference or on the program committee.
As our "authoritative ranking" of CS departments, we used the ranking published by the National Research Council [1]. To save on data collection costs, we used only the top 70 universities in that ranking (see Table 1). We note that our estimate is obtained only from publication statistics, whereas the original ranking done by the NRC took into account other factors.
| 1 Stanford University | 26 Purdue University | 51 University of Illinois at Chicago |
| 2 Massachusetts Inst of Technology | 27 Rutgers State Univ-New Brunswick | 52 Washington University |
| 3 University of California-Berkeley | 28 Duke University | 53 Michigan State University |
| 4 Carnegie Mellon University | 29 U of North Carolina-Chapel Hill | 54 CUNY - Grad Sch & Univ Center |
| 5 Cornell University | 30 University of Rochester | 55 Pennsylvania State University |
| 6 Princeton University | 31 State U of New York-Stony Brook | 56 Dartmouth College |
| 7 University of Texas at Austin | 32 Georgia Institute of Technology | 57 State Univ of New York-Buffalo |
| 8 U of Illinois at Urbana-Champaign | 33 University of Arizona | 58 University of California-Davis |
| 9 University of Washington | 34 University of California-Irvine | 59 Boston University |
| 10 University of Wisconsin-Madison | 35 University of Virginia | 60 North Carolina State University |
| 11 Harvard University | 36 Indiana University | 61 Arizona State University |
| 12 California Institute Technology | 37 Johns Hopkins University | 62 University of Iowa |
| 13 Brown University | 38 Northwestern University | 63 Texas A&M University |
| 14 Yale University | 39 Ohio State University | 64 University of Oregon |
| 15 Univ of California-Los Angeles | 40 University of Utah | 65 University of Kentucky |
| 16 University of Maryland College Park | 41 University of Colorado | 66 Virginia Polytech Inst & State U |
| 17 New York University | 42 Oregon Graduate Inst Sci & Tech | 67 George Washington University |
| 18 U of Massachusetts at Amherst | 43 University of Pittsburgh | 68 Case Western Reserve Univ |
| 19 Rice University | 44 Syracuse University | 69 University of South Florida |
| 20 University of Southern California | 45 University of Pennsylvania a | 70 Oregon State University |
| 21 University of Michigan | 46 University of Florida | |
| 22 Univ of California-San Diego | 47 University of Minnesota | |
| 23 Columbia University | 48 Univ of California-Santa Barbara | |
| 24 University of Pennsylvania b | 49 Rensselaer Polytechnic Inst | |
| 25 University of Chicago | 50 Univ of California-Santa Cruz | |
2. Basic Method and Result
Once we had divided the conferences into three categories (rank 1 only, ranks 1+2A, and ranks 1+2A+2B), we counted the number of papers published in the selected conferences by NUS and the 70 US computer science departments, and checked how well counting the number of papers published in conferences of some rank or higher agreed with the ranking published by the NRC. To measure the degree of disagreement, we counted the number of pairs of universities that had the property that University A was ranked above University B, but University B had a higher paper count (considering conferences at some rank or higher). We took the prestige threshold with the fewest disagreements (this turned out to be rank 1 conferences only), and looked at its ranking. Using this method, NUS's estimated ranking among US universities was 26th. Counting rank 1 conference papers agreed with 80% of the relative rankings of the NRC study. Using the other prestige thresholds yielded similar results, with slightly higher rankings for NUS, and slightly more disagreements with the NRC ranking.
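The disagreement measure described above is just a count of discordant pairs between the NRC order and a paper count. A minimal sketch, with invented department names and paper counts as placeholders:

```python
# Count pairwise disagreements between a reference ranking and a paper count.
# The department names and counts below are made-up placeholders.

def count_disagreements(nrc_order, paper_counts):
    """nrc_order: list of departments, best first.
    paper_counts: dict mapping department -> number of papers in the
    selected conferences. Returns the number of pairs (A, B) where A is
    ranked above B but B has a strictly higher paper count."""
    disagreements = 0
    for i, a in enumerate(nrc_order):
        for b in nrc_order[i + 1:]:          # a is ranked above b
            if paper_counts[b] > paper_counts[a]:
                disagreements += 1
    return disagreements

if __name__ == "__main__":
    nrc_order = ["Dept W", "Dept X", "Dept Y", "Dept Z"]
    counts = {"Dept W": 40, "Dept X": 15, "Dept Y": 22, "Dept Z": 5}
    # Only the pair (X, Y) disagrees: X outranks Y but has fewer papers.
    print(count_disagreements(nrc_order, counts))  # -> 1
```

The "80% agreement" figures reported below correspond to one minus this count divided by the total number of pairs.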
3. Other Methods and Results
Despite the best intentions of the members of the computer science department, it is natural to suspect that some bias might creep into our departmental rankings. To address this potential problem, we tried a variety of different methods, which balanced our prior knowledge about the prestige of conferences with information obtained by looking at where members of well-respected universities published.
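One family of such methods searches over subsets of conferences, as described in section 3.1 below. Here is a minimal simulated-annealing sketch of that search; the conference names, paper counts, step budget, and cooling schedule are all invented placeholders, not the study's actual data or parameters:

```python
# Simulated annealing over conference subsets: toggle one conference at a
# time, always accept improvements, and accept worse moves with a
# probability that shrinks as the temperature cools.
import math
import random

def disagreements(subset, papers, ranked_depts):
    """Number of pairs where the lower-ranked department out-publishes the
    higher-ranked one, counting only conferences in `subset`."""
    def score(d):
        return sum(papers[d][c] for c in subset)
    bad = 0
    for i, a in enumerate(ranked_depts):
        for b in ranked_depts[i + 1:]:       # a is ranked above b
            if score(b) > score(a):
                bad += 1
    return bad

def anneal(confs, papers, ranked_depts, steps=2000, temp=2.0, cooling=0.999):
    random.seed(0)                           # deterministic for this sketch
    subset = set(confs)                      # start from all conferences
    cost = disagreements(subset, papers, ranked_depts)
    best, best_cost = set(subset), cost
    for _ in range(steps):
        candidate = subset ^ {random.choice(confs)}   # toggle one conference
        if not candidate:                    # never try the empty subset
            continue
        new_cost = disagreements(candidate, papers, ranked_depts)
        if new_cost < best_cost:             # remember the best subset seen
            best, best_cost = set(candidate), new_cost
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
            subset, cost = candidate, new_cost
        temp *= cooling
    return best, best_cost
```

Tracking the best subset seen, rather than just the final state, is a common safeguard since a late accepted "worse" move would otherwise be returned.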
3.1 Choosing conferences using the NRC ranking only
In this method, we used the NRC ranking to choose the subset of conferences to use for paper counting. For this, we used "simulated annealing" [3], a standard optimization technique, to choose a subset of conferences that approximately minimized the number of disagreements with the NRC ranking. There was no prior bias toward any particular subset, and all of the conferences for which we collected data were considered for membership in the final subset. Using this method, NUS's estimated ranking among US universities was 21st. Counting publications in the chosen subset of conferences agreed with 86% of the relative rankings of the NRC study.
3.2 Weighting the conferences
Instead of choosing a subset of conferences and simply counting publications in that subset, a more refined measure of the overall research output of a department can be obtained by assigning each conference a weight that reflects its prestige, and then calculating the total weight of the department's publications. How can the NRC ranking be used to estimate appropriate weights? One reasonable goal is to find a weighting such that the relative "total publication weight" of departments agrees with their relative NRC rankings to the greatest extent possible. An iterative algorithm, the perceptron algorithm, can be proved to find a weighting that agrees with the ranking exactly whenever one exists (see [2]). We applied the perceptron algorithm using all of the conferences. It found a weighting that agreed with all of the relative rankings of the NRC study. Under this weighting, NUS's rank was 28th. We were concerned that the number of parameters being adjusted by this method might be too large relative to the amount of data used to set them, so we ran the experiment again using only the rank 1 conferences. Under this weighting, NUS's rank was 22nd, and the agreement with the NRC ranking was 91%.
4. Validating the Methods
To assess the quality of the different methods we considered, we used a variant of a standard technique called "cross-validation". Note that each method can be viewed as using the NRC ranking to estimate a weighting on conferences, and then using that weighting to rank the departments again (the methods that choose a subset of conferences can be viewed as assigning weight 0 to conferences that are not chosen and weight 1 to those that are). We performed the following experiment to estimate the quality of the different weighting methods. First, we applied each method to choose a weighting of the conferences using only the departments with odd-numbered ranks in the NRC ranking. Then we took the resulting weights and counted the number of disagreements they had with pairs of departments with even-numbered ranks. We found that the weighting based on our departmental conference rankings agreed with 80% of the pairs of even-numbered-ranked universities. The simulated-annealing method, applied to the odd-numbered-ranked universities, found a subset of conferences whose paper count also agreed with 80% of the even-numbered pairs. The perceptron algorithm, applied to all the conferences and the odd-numbered-ranked universities, yielded a weighting that agreed with 67% of the even-numbered pairs. However, when the perceptron algorithm was restricted to rank 1 conferences, its performance improved to 90%.
5. Discussion
Looking at the details of the weightings computed by these methods, and at where the disagreements occurred, uncovered some possible biases to keep in mind when interpreting these results. First, the methods that chose weightings purely on the basis of the NRC ranking appeared to be strongly biased in favor of conferences held outside the US. For example, the simulated-annealing method left out the two best conferences in theoretical computer science (STOC and FOCS), but included theory conferences held outside the US (ICALP and ASIAN) that are widely regarded as much less prestigious. One possible explanation is that members of universities near the top of the NRC ranking are more likely to have research funding sufficient to travel overseas to attend conferences (note that the NRC ranking includes only US universities). Since members of NUS are more likely than members of US universities to publish in conferences held in Asia, this bias favors NUS. Second, counting total research output introduced a bias toward large departments, which was evident when we looked at where disagreements with the NRC ranking occurred. Since our department is large, this bias also favors NUS.
6. Conclusion
Using a variety of different methods, we have estimated that the Computer Science Department of NUS would be ranked somewhere in the 20s had it been included in the US NRC ranking of CS departments. Table 2 summarizes our findings under the different methods.
| Method | Estimated NUS rank | Agreement with NRC ranking |
| Rank 1 conference paper count | 26 | 80% |
| Simulated-annealing conference subset | 21 | 86% |
| Perceptron weighting, all conferences | 28 | 100% |
| Perceptron weighting, rank 1 conferences only | 22 | 91% |
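The perceptron-style weighting described in section 3.2 can be sketched as follows. The departments, per-conference counts, and epoch limit are invented placeholders; only the update rule (add the difference vector of any misordered pair) is the classic perceptron step:

```python
# Perceptron-style learning of one weight per conference so that weighted
# paper counts order the departments like the reference ranking.
# All data shapes below are made-up placeholders.

def perceptron_weights(vectors, ranked_depts, max_epochs=100):
    """vectors[dept] = list of per-conference paper counts.
    Repeatedly fix violated pairs (A ranked above B but A's weighted score
    is not higher) by adding the difference vector to the weights."""
    n = len(next(iter(vectors.values())))
    w = [0.0] * n
    for _ in range(max_epochs):
        mistakes = 0
        for i, a in enumerate(ranked_depts):
            for b in ranked_depts[i + 1:]:   # a is ranked above b
                xa, xb = vectors[a], vectors[b]
                if sum(wi * (pa - pb) for wi, pa, pb in zip(w, xa, xb)) <= 0:
                    w = [wi + (pa - pb) for wi, pa, pb in zip(w, xa, xb)]
                    mistakes += 1
        if mistakes == 0:                    # every pair ordered correctly
            break
    return w

if __name__ == "__main__":
    # Three departments, two conferences; the top-ranked department
    # publishes mostly at conference 0, which the weighting should favor.
    vectors = {"A": [4, 1], "B": [2, 2], "C": [1, 5]}
    w = perceptron_weights(vectors, ["A", "B", "C"])
    scores = {d: sum(wi * x for wi, x in zip(w, v)) for d, v in vectors.items()}
    assert scores["A"] > scores["B"] > scores["C"]
```

As the article notes, such a weighting exists exactly when the ranking is linearly realizable from the counts; when it is, the perceptron is guaranteed to find one.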
References
[1] National Research Council. Research-Doctorate Programs in the United States: Continuity and Change. Computer Science rankings and other information available on the web at http://www.cra.org/statistics/nrcstudy2/home.html.
[2] J. A. Hertz, A. Krogh, and R. Palmer. Introduction to the Theory of Neural Computation. Addison-Wesley, 1991.
[3] D. Karaboga and D. T. Pham. Intelligent Optimization Techniques: Genetic Algorithms, Tabu Search, Simulated Annealing and Neural Networks. Springer Verlag, 1999.
Saturday, March 1, 2008
Guitar Training, Session 2-12, 2008-03-01
1. Metronome tempo: 160, 176, 192; repetitions: 1, 2, 2
II. Carcassi, 25 Etudes, No. 3
1. Metronome tempo: 132, 144, 160; repetitions: 1, 2, 2
III. Carcassi, 25 Etudes, No. 7
1. Metronome tempo: 104, 112; repetitions: 3
IV. Carcassi, 25 Etudes, No. 19
1. Metronome tempo: 72, 80; repetitions: 3
2. a finger free stroke (tirando), p finger rest stroke (apoyando)
V. Guitar etude: Fernando Sor, Op. 31 No. 20, first 4 phrases (measures 1-16.5); metronome tempo: 88, 96; repetitions: 3
VI. Basic exercises: slurs, practiced on strings 5, 4, 3, and 2
1. Four-finger combination exercises (hammer-on then pull-off, pull-off then hammer-on), with metronome at 104, 112, 120, 132, 144, 160
2. Remember to give extra practice to hammer-ons
