Project 65: High Performance Computing

Soederkoeping 1983:
===================

Large scale scientific computing
--------------------------------
Rice said that the report of the Lax committee (Dec. 1982) called for a national program to develop supercomputers and make them available to all the major scientific centers in the United States. Various government agencies are now putting substantial funds into this area, and the level of effort is expected to increase soon. The main impact on current group activities relates to defining language features appropriate for the parallel algorithms that will be needed.

Pasadena 1984:
==============

Large scale computing (Rice)
----------------------------
A brief review is given of the current favorable political climate in the USA for the development of large scale scientific computing. There is a growing number of high performance machines evolving from CRAY, CDC, HEP, FPS, etc., designs. Other manufacturers are expected to introduce such machines, and many interesting prototypes are under serious development. A review is given of the responsibilities of the various USA funding agencies for supporting large scale scientific computing. Similar reviews are given for the UK (Ford), Canada (Gentleman) and Sweden (Einarsson).

Conference at JPL, Tuesday June 26
----------------------------------
Prof. Rice: Current trends in Scientific Computing, particularly the impact of supercomputers (see IFIP/WG 2.5 (Pasadena-25)1125). Rice discusses three issues:

1. supercomputers
2. better models
3. adaptability versus regularity

As of today, real time speech understanding requires 1 or 2 units of Cyber 205, but automatic search for viral vaccines requires 100 to 100,000 units. The advance in problem solving power from faster computers (from 1945 to 1980) is less than the advance from better methods, when one solves a general linear elliptic problem on a general domain in three dimensions. Adaptation leads to "irregular" models, data structures and computations, which negates much (most?) of the supercomputer's power.

Open discussion
Q: Is the ratio of better models versus computer time an increasing function of the number of dimensions?
A: Yes, up to 10 to 12 orders of magnitude between 1950 and 1980 in 3D.
Q: Need systolic array machines be dedicated to a specific problem?
A: A hard-wired machine for a particular computation can be superfast.
Comment: the area of optimization is an example of the strength of better methods.
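To give a rough sense of scale for the "better methods versus faster machines" point above, the following Python sketch compares textbook operation-count estimates for a model 3D elliptic problem on an N x N x N grid: banded Gaussian elimination (roughly N**7 flops) against an optimal multilevel solver (roughly a constant times N**3 flops). The complexity formulas, the constant, and the grid sizes are illustrative assumptions, not figures from the minutes.

    # Illustrative only: textbook complexity estimates for a linear elliptic
    # PDE on an N x N x N grid (N**3 unknowns).  The estimates and the
    # constant below are assumptions, not data from the meeting.

    def banded_elimination_flops(N):
        """Banded LU with bandwidth ~ N**2 on N**3 unknowns: roughly N**7 flops."""
        return float(N) ** 7

    def multigrid_flops(N, c=100.0):
        """An O(N**3) multilevel solver; c is an assumed cost per unknown."""
        return c * float(N) ** 3

    for N in (16, 64, 256):
        ratio = banded_elimination_flops(N) / multigrid_flops(N)
        print(f"N = {N:4d}: algorithmic speedup ~ {ratio:.1e}x")
    # At N = 256 the algorithmic gain alone is about 4e7 (7-8 orders of
    # magnitude), comparable to decades of hardware speedup.

The 10 to 12 orders of magnitude cited in the discussion presumably combine such gains over several generations of methods with growing problem sizes, which is why the effect is strongest in three dimensions.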
Stanford 1988:
==============

Third International Conference on Supercomputers, Greece, 1989
--------------------------------------------------------------
G. Paul reported on the preparations for the Third International Conference on Supercomputers. The conference is sponsored by ACM, and WG 2.5 members are involved in its organization. It was noted that the WG 2.5 chairman, L. Fosdick, needs to contact the TC 2 chairman, Mason, to officially establish co-operation of WG 2.5 in the organization of the above conference. L. Fosdick will send a letter to the TC 2 chairman regarding co-operation of WG 2.5 in the organization of the Third International Conference on Supercomputers.

Beijing 1989:
=============

International Conferences on Supercomputers
-------------------------------------------
WG 2.5 discussed a possible long term affiliation with the Supercomputer Conference (Paul, Houstis, Fosdick, Einarsson). The Chairman of WG 2.5 will request permission from TC 2 for WG 2.5 to be an affiliate of the Annual International Conference on Supercomputers (M: Paul, S: Einarsson; Y: Unanimous).

Jerusalem 1990:
===============
Fosdick reported on work of interest to the project, such as the NSF educational initiative in scientific computation, problem solving on high performance computers (by seniors at CU Boulder), performance evaluation and modeling, visualization issues, etc.

Oxford 1996:
============
This project will remain active, with Vouk, Pool, and Fosdick preparing the descriptive paragraph. Add Pool, Hammarling, Paul, Gladwell and Wright to the list of people.

Patras 1998:
============
Ford pointed out that several WG members were actively involved in related activities, but there were often restrictions on what they could report.

Purdue 1999:
============
Ford presented an overview of recent progress in the development of parallel libraries for shared memory and distributed memory machines. He noted that the emerging standard for compiler directives is based on OpenMP.

Supercomputing Research Activities in Japan
-------------------------------------------
Shimasaki summarised the current status of the NEC, Fujitsu and Hitachi supercomputers. He discussed, in particular, the impact on supercomputing in Japan of the major national program to develop an "Earth Simulator".

Ottawa 2000:
============

Parallel Software Contest in Japan
----------------------------------
Shimasaki gave an overview and reviewed recent results of a competition in Japan. There was wide participation, and several lessons were learned. It is expected that this competition will continue, as it exposes young researchers to state-of-the-art parallel computers and their use in solving difficult problems.

Amsterdam 2001:
===============
A discussion of the next WoCo took place. Possible locations and themes were identified and briefly discussed, among them a WoCo on High Performance Computing; Pool would be willing to host such a meeting. Although there was no formal report on this project, there was a discussion of its importance and the recognition that progress was being made in several areas that we should be aware of. In particular, the use of `grids' and `clusters' is becoming more common, and the implications for numerical software have to be better understood.

Portland 2002:
==============
Although there was no formal report on this project, there was a discussion of its importance and the recognition that progress was being made in several areas that we should be aware of. In particular, the use of `grids' and `clusters' is becoming more common, and the implications for numerical software have to be better understood. This is a natural topic for a future WoCo. Shimasaki gave a presentation "Fast linear equation solvers in high performance electromagnetic field analysis". Mu gave a presentation "PDE Mart: A networked PDE solving environment".

The NSF `TeraGrid' project
--------------------------
Pool presented an overview of this well funded project involving three major facilities (Argonne, San Diego and Caltech) and several distributed research teams. A timeline of objectives over the next four years was identified, and the likelihood of reaching these objectives was discussed. The focus is on increasing the available bandwidth, data storage capability and raw processing speed in a coordinated effort to make large scale computation more effective.
Grid Computing
--------------
Vouk used a large scale collaborative biological project as an example of how grid computing can be used for large scale, data intensive applications. For example, the BLAST project involves a website that provides pattern matching for protein identification using state-of-the-art data storage and retrieval techniques.

Washington 2004:
================
Several members are active in this area. Pool's discussion at the meeting of the activities to be covered in the next WoCo, which he will host, is relevant here: Working Conference 9 on "Grid-Based Problem Solving Environments: Implications for Development and Deployment of Numerical Software", Prescott, Arizona, USA (17-21 July 2006). The meeting website is http://tempwoco9.cacr.caltech.edu/

Hong Kong 2005:
===============
The theme of WoCo 9 will relate to this project. It was also reported that a special issue of Parallel Computing will describe the hardware, software, and applications of the Japanese Earth Simulator. The Earth Simulator will soon be connected to the Internet, opening up opportunities for external researchers to access this resource. The plan for WG 2.5's ninth working conference is now being carried out. The conference, entitled "Grid-based Problem Solving Environments: Implications for Deployment of Numerical Software", will be held July 17-21, 2006 at the Hassayampa Inn in Prescott, Arizona, USA. The organizer of the meeting is Dr. James Pool of the California Institute of Technology (USA). The Program Committee, which is chaired by Dr. William Gropp of Argonne National Laboratory (USA), is actively inviting speakers. Proposals for funding are being considered by the US National Science Foundation and the US Department of Energy.

Prescott 2006:
==============
Several members are active in this area. The theme of WoCo 9 and several of the talks were directly related. Reid's technical presentation "Benchmarking HPC Systems" at this meeting can be regarded as a report on an important component of this project.

Uppsala 2007:
=============
Several members are active in this area. Reid's technical talk "HECToR - the New UK Supercomputing Facility" can be regarded as a report on an important component of this project. There was further discussion on whether this topic should be the theme of our next Working Conference.

Toronto 2008:
=============
Several members are active in this area. The technical talks of Vouk, "Cloud Computing - Implications for Numerical Computing", and Snyder, "Coarrays in Fortran 2008", can be considered reports on two important components of this project.

Co-Arrays for parallel processing (abstract of talk by Van Snyder)
-------------------------------------------------------------------
Co-arrays were designed to answer the question "What is the smallest change required to convert Fortran into a robust and efficient parallel language?" The answer is a simple syntactic extension. It looks and feels like Fortran and requires Fortran programmers to learn only a few new rules. These rules are related to two fundamental issues any parallel programming model must resolve: work distribution and data distribution.

To address work distribution, the co-array extension adopts the single program multiple data (SPMD) model. A single program is replicated a fixed number of times. Each replication, called an image, has its own set of data objects. Each image executes asynchronously, and the normal rules apply. The execution sequences may differ from image to image, with the actual path of each image determined by normal control constructs and explicit synchronizations. Between synchronizations, the compiler can use all its normal optimization techniques, as if only one image were present.

The co-array extension addresses data distribution by providing data objects that span the images, together with means to access them using a syntax very much like ordinary Fortran syntax. The new data objects are called co-arrays, from which the paradigm gets its name. Co-arrays have the important property that, as well as having access to the local object, each image may access the corresponding object on any other image.

Although co-arrays were originally designed as an extension to Fortran, and are included in the Fortran 2008 standard, other languages and language extensions inspired by co-arrays are under development. UPC, X10, Chapel, and class libraries being developed for use in C++ all have a conceptual framework similar to co-arrays. A complete description is in ftp://ftp.nag.co.uk/sc22wg5/N1751-N1800/N1762.pdf
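Co-arrays themselves are a Fortran 2008 feature, but the SPMD picture described above (one program replicated into images, each owning its own data, with access to corresponding objects on other images and explicit synchronization) can be sketched in Python with MPI. The sketch below is an analogy rather than co-array syntax: it assumes mpi4py is installed and the script is launched with mpiexec, and it uses message passing and a collective where coarray Fortran would use co-indexed access and SYNC ALL.

    # An SPMD analogy to the co-array model, using MPI via mpi4py (assumed
    # installed); run with, e.g., "mpiexec -n 4 python spmd_sketch.py".
    # MPI ranks play the role of co-array "images".
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    me = comm.Get_rank()          # index of this image
    num_images = comm.Get_size()  # fixed number of replications

    # Work distribution: every image runs the same program on its own data.
    local_value = (me + 1) ** 2   # stand-in for a locally computed quantity

    # Data distribution: fetch the corresponding object held by the next
    # image (coarray Fortran would use a co-indexed reference such as x[p]
    # instead of explicit messages).
    nxt = (me + 1) % num_images
    prv = (me - 1) % num_images
    from_next = comm.sendrecv(local_value, dest=prv, source=nxt)

    # A collective acts as an explicit synchronization across all images,
    # playing the role of SYNC ALL.
    total = comm.allreduce(local_value, op=MPI.SUM)

    print(f"image {me + 1}/{num_images}: local={local_value}, "
          f"from next image: {from_next}, global sum={total}")

A genuine coarray version would instead declare the object with a co-dimension, e.g. real :: x[*], and read the copy on image p simply as x[p]; the MPI sketch only mimics that behaviour with explicit communication.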
Raleigh 2009:
=============
Several members are active in this area. There was no report presented, but Vouk agreed to provide an updated paragraph.

Leuven 2010:
============
Vouk made a few comments, especially on exascale development. The invited technical talk by Kurt Lust on the Flemish Computer Center is a contribution to this project; his slides are available at http://people.cs.kuleuven.be/~ronald.cools/WG2.5/presentation_Kurt_Lust.pdf
The presentation by Gansterer on "Controlled trading of accuracy for speed in structured large symmetric eigenvalue problems" at ICCAM 2010 also relates to this project.

Boulder 2011:
=============
Several members are active in this area. The technical presentations of Scott and Gansterer can be considered reports on different aspects of this project. Vouk pointed out that workflow issues were of concern in HPC. Gansterer was added to the list of active participants.

Santander 2012:
===============
Presentation by Gansterer, "New developments for the block divide-and-conquer eigensolver", at the Santander workshop.

Shanghai 2013:
==============
Short discussion. Vouk is currently writing a book on cloud-based HPC.

Vienna 2014:
============
Vouk is working on a book on cloud-based HPC. The proceedings of last year's Workshop on "Data-Intensive Scientific Discovery" in Shanghai summarize recent activities in this project.

Halifax 2015:
=============
The following papers summarize activities in this project:

Mladen A. Vouk, Eric Sills, and Patrick Dreher, "Integration of High-Performance Computing into Cloud Computing Services," Ch. 11 in Handbook of Cloud Computing, editors B. Furht and A. Escalante, Springer, ISBN 978-1-4419-6523-3, pp. 255-276, 2010.

Patrick Dreher and Mladen Vouk, "Integration of high-performance computing into a VCL cloud," VHPC '13, Proceedings of the 8th Workshop on Virtualization in High-Performance Cloud Computing, Supercomputing 2013, Denver, 17-22 Nov 2013, Article No. 1, 6 pages.

Patrick Dreher, William Scullin, and Mladen Vouk, "Toward a Proof of Concept Implementation of a Cloud Infrastructure on the Blue Gene/Q," International Journal of Grid and High Performance Computing (IJGHPC), Volume 7, Issue 1, 2015, pp. 32-31.

Amsterdam 2023:
===============
It was decided to close this project.