2017/12/06
Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures
2018 Call for Proposal of Joint Research Projects
The Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures
(JHPCN) calls for joint research projects for fiscal year 2018. JHPCN is a "Network-Type" Joint
Usage/Research Center, certified by the Minister of Education, Culture, Sports, Science and
Technology, and comprises eight supercomputer-equipped centers affiliated with Hokkaido
University, Tohoku University, The University of Tokyo, Tokyo Institute of Technology, Nagoya
University, Kyoto University, Osaka University, and Kyushu University, among which the Information
Technology Center at The University of Tokyo functions as the core institution of JHPCN.
Each JHPCN member institution will provide its research resources for joint research projects.
Researchers of an adopted joint research project can use the provided research resources within
the granted amount free of charge. In addition, expenses for the publication or presentation of
research results, such as overseas travel expenses, may be supported. Every eligible project
should either use research resources provided by JHPCN member institutions or include
researchers affiliated with JHPCN member institutions in its research team. The available research
resources include computers, storage systems, visualization systems, and so on. Some of them can be
connected to facilities inside and outside the JHPCN member institutions via high-speed
networks for the exchange, accumulation, and processing of data. It is also possible
to connect research resources of JHPCN member institutions via SINET5 L2VPN.
Part of the High Performance Computing Infrastructure (HPCI) system, referred to as the HPCI-
JHPCN system, is also available for joint research projects. Applications for research
projects that use the HPCI-JHPCN System are invited via the HPCI online application system
and will be selected in line with our reviewing policy (see Section 6,“Research Project
Reviews”).
The aim of JHPCN is to contribute to the advancement and sustained development of
the academic and research infrastructure of Japan by implementing joint research projects that
require large-scale information infrastructures and that address Grand Challenge-type
problems, thus far considered extremely difficult to solve, in the following fields: Earth
environment, energy, materials, genomic information, web data, academic information, time-
series data from sensor networks, video data, program analysis, applications of large-
capacity networks, and other fields in information technology. Since the JHPCN member
institutions host leading researchers, collaboration with these researchers is expected to
accelerate the joint research projects. These joint research projects (for
the fiscal year 2018) will be implemented from April 2018 to March 2019.
1. Joint Research Areas
This call for joint research projects will adopt interdisciplinary research projects in the following four areas:
very large-scale numerical computation, very large-scale data processing, very large-capacity
network technology, and very large-scale information systems. Approximately 60† joint research
projects will be adopted.
†The total number of JHPCN member institutions used by the adopted research projects.
(1) Very large-scale numerical computation
This includes scientific and technological simulations in scientific/engineering fields such as
Earth environment, energy, and materials, as well as modeling, numerical analysis algorithms,
visualization techniques, information infrastructure, etc., to support these simulations.
(2) Very large-scale data processing
This includes the processing of genomic information, web data (including Wikipedia, as well as
news sites and blogs), academic information contents, time-series data from sensor
networks, high-level multimedia information processing needed for streaming data such as video
footage, program analysis, access and search, information extraction, statistical and
semantic analysis, data mining, machine learning, etc.
(3) Very large capacity network technology
This includes control and assurance of network quality for very large-scale data sharing,
monitoring and management required for construction and operation of very large capacity
networks, assessment and maintenance of the safety of such networks, a large-scale stream
processing framework taking advantage of very large capacity networks as well as the
development of various technologies to support such research. Research dealing with large-
scale data processing using large-capacity networks is also classified in this area.
(4) Very large-scale information systems
This area combines each of the above-mentioned areas and entails exascale computer architectures,
software for high-performance computing infrastructures, grid computing, virtualization
technology, cloud computing, etc.
Please pay special attention to the following points when applying.
① We will only accept interdisciplinary joint research proposals that involve
cooperation among researchers from a wide range of disciplines. For example, we
presume that a research team in the “very large-scale numerical computation” area will
consist of researchers from the computer and computational sciences who work together in a
cooperative and complementary manner. In this area, therefore, we invite joint research
projects with researchers who will be solving problems in application fields using computers
and those who will be conducting research in computer science, such as on algorithms,
modeling, and parallel processing. It is not mandatory to use the HPCI-JHPCN system or
other research resources provided by JHPCN member institutions. In other words, a joint
research project that makes no use of the provided research resources can still be acceptable.
② We particularly appreciate the following categories of joint research projects.
● Research projects in close cooperation with multiple JHPCN member institutions
Taking advantage of the “network” of JHPCN, a project in this category should use
research resources and/or involve researchers of multiple JHPCN member institutions. For
example, relevant research topics may include, but are not limited to, large-scale and
geographically distributed information systems and multi-platform implementations of
applications using research resources provided by multiple JHPCN member institutions.
● Research projects using both large-scale data and large capacity networks
The available research resources include those that can be directly connected to a very
wide bandwidth network provided by SINET5, including L2VPN, in cooperation with the
National Institute of Informatics. Therefore, research can be conducted that depends upon
a very wide bandwidth network. A project in this category should require massive data
transfer, over the very wide bandwidth network, between research facilities located at the
involved researchers’ workplaces and at JHPCN member institutions, or between those
at JHPCN member institutions. Please refer to Attachment 2 for possible examples of
research topics in this category.
2. Types of Joint Research Projects
Under the premise of the interdisciplinary joint research project structure mentioned above, joint
research projects to be invited are as follows.
(1) General Joint Research Projects (approximately 80% of the total number of accepted
projects will be of this type)
(2) International Joint Research Projects (approximately 10% of the total number of
accepted projects will be of this type)
International joint research projects are conducted in conjunction with foreign researchers to
address challenging problems that may not be resolved or clarified with only the help of
researchers within Japan. For such research projects, a certain amount of
subsidy will be paid to cover travel expenses incurred for holding meetings with foreign joint
researchers during the fiscal year of commencement. For details, please contact our office
once your research project has been accepted.
(3) Industrial Joint Research Projects (approximately 10% of the total number of accepted
projects will be of this type)
Industrial joint research projects are interdisciplinary projects focused on industrial
applications.
A research proposal submitted as a (2) International Joint Research Project or (3) Industrial Joint
Research Project might instead be adopted as a (1) General Joint Research Project. Further, we invite
applications in each of these three types for “research projects that use the HPCI-JHPCN
System” and “research projects that do not use the HPCI-JHPCN System.”
If you require the assistance and cooperation of researchers and research groups that are
affiliated with JHPCN member institutions in the computer science field, please complete
Section 7 of the application form describing the specific requirements in detail. We will arrange
for as much assistance and cooperation as possible.
During the application, please designate the university (JHPCN member institution) with which
you seek collaboration. You may also name several institutions. If it is difficult for you to
designate one, we can decide the corresponding research center(s) on your behalf after
considering the research proposal. Please be sure to discuss this with our contact person (listed
in Section 10) in advance.
3. Application Requirements
Applications must be made by a Project Representative. Furthermore, the Project
Representative and the Deputy Representative(s), as well as any other joint researchers of a
project, must fulfill the following conditions.
① The Project Representative must be affiliated with an institution in Japan (university,
national laboratory, private enterprise, etc.), and must be in a position to
obtain the approval of his/her institution (or its representative head).
② At least one Deputy Representative must be a researcher in a different academic field
from that of the Project Representative. There may be two or more Deputy
Representatives, and one of them can be in charge of HPCI face-to-face identity vetting.
③ A graduate student can participate in a project as a joint researcher, but an
undergraduate cannot. A graduate student cannot serve as the Representative or a
Deputy Representative of a project. If a non-resident member, as defined by the Foreign
Exchange and Foreign Trade Act, is going to use computers, a researcher affiliated with
the JHPCN member institution equipped with them must participate as a joint
researcher.
International joint research projects must, in addition to the above-mentioned ①–③, fulfill the
following conditions (④ and ⑤).
④ At least one researcher affiliated with a research institution outside Japan must be
named as a Deputy Representative. Furthermore, an application must be made using
the English Application Form.
⑤ A researcher affiliated with the JHPCN member institution(s) that you designate as the “Desired
University for Joint Research” must participate as a joint researcher.
Industrial joint research projects must, in addition to the above-mentioned ①–③, fulfill the
following conditions (⑥ and ⑦).
⑥ The Project Representative must be affiliated with a private company, excluding
universities and national laboratories.
⑦ At least one researcher affiliated with the JHPCN member institution(s) that you designate
as the “Desired University for Joint Research” must be named as a Deputy Project
Representative.
4. Joint Research Period
April 1, 2018 to March 31, 2019.
Depending on conditions for preparing computer accounts, the commencement of computer
use may be delayed.
5. Facility Use Fees
The research resources listed in Attachment 1 can be used within the granted amount free of
charge.
6. Research Project Reviews
Reviews will be conducted by the Joint Research Project Screening Committee, which
comprises faculty members affiliated with JHPCN member institutions as well as external
members, and the HPCI Project Screening Committee, which comprises industry, academic,
and government experts. Research project proposals will be reviewed in both a general and
technical sense for their scientific and technological validity, their facility/equipment
requirements and the validity of these requirements, and their potential for development. The
feasibility of the resource requirements at, and the cooperation/collaboration with, the JHPCN member
institutions that you designate as the “Desired University for Joint Research” will also be subject to
review. In addition, how relevant a proposal is to its type of joint research project will be
considered.
Furthermore, for projects continuing from the previous fiscal year and projects determined to
have substantial continuity, an assessment of the previous year’s interim report and previous
usage of computer resources may be considered during the screening process.
7. Notification of Adoption
We expect to announce the review results by mid-March 2018.
8. Application Process
1. Please note that the following application procedures differ for “Research projects that use
the HPCI-JHPCN System” and “Research projects that do not use the HPCI-JHPCN System”
(The HPCI-JHPCN system is described in Attachment 1 (1)).
2. In particular, for “Research projects that use the HPCI-JHPCN System,” the Project
Representative (and the Deputy Representative who will submit the proposal or who will be in
charge of HPCI face-to-face identity vetting on behalf of the Project Representative) and all
joint researchers that will use the HPCI-JHPCN system must have obtained their HPCI-ID
prior to the application.
3. For international joint research projects, an English application form must be completed.
(1) Application Procedure
After completing the Research Project Proposal Application Form obtained from the
JHPCN website (https://jhpcn-kyoten.itc.u-tokyo.ac.jp/en/) and completing the online
submission of the electronic application, please print and affix seals to the application
form and mail it to the Information Technology Center, The University of Tokyo (address
provided in Section 10). For details on the application process, please consult the JHPCN
website and the “HPCI Quick Start Guide.”
The summary of the application process is shown below.
I: For “Research Projects that use the HPCI-JHPCN System”
① Download the Research Project Proposal Application Form (MS Word format) from
the JHPCN website and complete it. In parallel, the Project Representative (and the
Deputy Representative who will submit the proposal or who will be in charge of HPCI
face-to-face identity vetting on behalf of the Project Representative) and all joint
researchers who will use the HPCI-JHPCN System must obtain their HPCI-IDs
Attachment 1
List of research resources available at the JHPCN member institutions for the Joint Research
Projects
The research resources that can be directly connected via SINET5 L2VPN, provided by the National
Institute of Informatics, are annotated as “L2VPN ready.”
(1)List of the HPCI-JHPCN system resources available for Joint Research Projects
JHPCN Institution
Computational Resources, Type of Use
Estimated number of Projects adopted
Information Initiative Center, Hokkaido University
① Supercomputer HITACHI SR16000/M1
(Max. 4 nodes, four months per project)
2018-04 – 2018-07 (due to system renewal)
《Hardware resources》
168 nodes, 5,376 physical cores, total main memory capacity 21 TB, 164.72 TFLOPS (shared with general users; MPI parallel processing of up to 128 nodes per job is possible)
1) Calculation time: 27,000,000 seconds; files: 2.5 TB
2) Calculation time: 2,700,000 seconds; files: 0.5 TB
(Assumed amount of resources: 166,666,667 s and 48 TB when combining 1) and 2))
《Software resources》
【Compilers】Optimized FORTRAN90 (Hitachi),
XL Fortran (IBM),XL C/C++ (IBM)
【Libraries】MPI-2 (without dynamic process creation function),
② Cloud-System Blade Symphony BS2000 2018/04 - 2018/10 (due to system renewal)
《Hardware resources》
Virtual server (“S”/“M”/“L” server)
2 nodes, 80 cores
Equal to 8 units of the “L” server (the “S” and “M” servers may also be used)
(Max. 4 units, seven months per project)
Physical server (“XL” server)
1 unit (40 cores, Memory 128 GB, DISK 2 TB)
《Usage》
L2VPN ready: only for the “XL” server. (A VPN service is available for the “S”/“M”/“L” servers via CloudStack.)
③ Data Science Cloud System HITACHI HA8000
《Hardware resources》
1) Physical server 2 units (20 cores, Memory 80GB, Hard-Disk 2TB)
Estimated numbers of projects: ①+②: 10, ③: max. 2, ④+⑤: 8, ⑥: 4
《Usage》
L2VPN Ready
④ New Supercomputer A 2019/01 - 2019/03 (due to system renewal)
《Hardware resources》
(Max. 32 node-years per project)
About 900 nodes, 36,000 physical cores, total main memory capacity 337 TB, 2.5 PFLOPS (shared with general users; MPI parallel processing of up to 128 nodes per job is possible)
1) Calculation time: 15,000,000 seconds; files: 3 TB
(Assumed amount of resources: 3,000,000,000 s and 600 TB for multiple sets of 1))
《Software resources》
【Compilers】Fortran, C/C++
【Libraries】MPI, Numerical libraries
⑤ New Supercomputer B 2019/01 - 2019/03 (due to system renewal)
《Hardware resources》
(Max. 16 node-years per project)
About 400 nodes, 24,000 physical cores, total main memory capacity 43 TB, 1.2 PFLOPS (shared with general users; MPI parallel processing of up to 64 nodes per job is possible)
1) Calculation time: 15,000,000 seconds; files: 3 TB
(Assumed amount of resources: 1,500,000,000 s and 600 TB for multiple sets of 1))
《Software resources》
【Compilers】Fortran, C/C++
【Libraries】MPI, Numerical libraries
⑥ New Cloud System 2018/12 – 2019/3 (due to system renewal)
《Hardware resources》
1) Physical servers: 5 nodes (40+ cores, Memory 256+ GB, DISK 2+ TB)
2) Intercloud package: 1 set (physical servers installed at Hokkaido University, The University of Tokyo, Osaka University, and Kyushu University, connected via SINET VPN)
3) Virtual servers (L server): 8 nodes (10 cores, Memory 60+ GB, DISK 500+ GB)
《Usage》
L2VPN Ready
Cyberscience Center, Tohoku University
① Supercomputer SX-ACE (2,560 nodes)
《Hardware resources》
48 node-years / project
About 707 TFLOPS, main memory 160 TB, maximum number of nodes 1,024, shared use
《Software resources》
【Compilers】FORTRAN90/SX, C++/SX, NEC Fortran2003
【Libraries】MPI/SX, ASL, ASLSTAT, MathKeisan
10
② Supercomputer LX406Re-2 (68 nodes)
《Hardware resources》
12 node-years / project
About 31.3 TFLOPS, main memory 8.5 TB, maximum number of nodes 24, shared use
《Software resources》
【Compilers】Intel Compiler(Fortran, C/C++)
【Libraries】MPI, IPP, TBB, MKL, NumericFactory
【Application software】Gaussian16
③ Storage 4PB
《Hardware resources》
20 TB / project (possible to add more)
Information Technology Center, the University of Tokyo
① Reedbush-U (Intel Broadwell-EP cluster, High-speed file cache system (DDN IME))
《Hardware resources》
Maximum tokens for each project: 16 node-years (138,240 node-hours), storage 16 TB. Options: node-occupied service, customized login nodes, L2VPN ready (negotiable)
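The token figures above imply a fixed conversion between node-years and node-hours. The following is an illustrative sketch (not an official JHPCN calculator; the names are our own) that recovers the convention from the quoted numbers:

```python
# The Reedbush-U entry quotes 16 node-years as 138,240 node-hours.
# Dividing the two recovers the hours counted per node-year.

NODE_HOURS_QUOTED = 138_240
NODE_YEARS_QUOTED = 16

hours_per_node_year = NODE_HOURS_QUOTED // NODE_YEARS_QUOTED  # 8,640

# 8,640 hours corresponds to a 360-day accounting year (360 * 24).
days_per_year = hours_per_node_year // 24  # 360

def node_years_to_hours(node_years: float) -> float:
    """Convert a node-year allocation to node-hours under this convention."""
    return node_years * hours_per_node_year
```

Under this reading, a "node-year" is counted as 360 days of wall-clock node time, which is consistent with the per-project maxima quoted elsewhere in this attachment.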
Global Scientific Information and Computing Center, Tokyo Institute of Technology
① Cloudy, Big-Data and Green Supercomputer “TSUBAME3.0”
《Hardware resources》
The TSUBAME3.0 system comprises 540 compute nodes, providing 12.15 PF of performance in total (CPU: 15,120 cores, 0.70 PF + GPU: 2,160 slots, 11.45 PF). At most 50% of the full system is available at a time (shared use). The total provided resource is 230 units (= 230,000 node-hours; 1 unit = 1,000 node-hours). The maximum resource for each project is 27 units (= 3.125 node-years). Please specify not only the total amount of resources but also the quarterly amounts. The maximum storage is 300 TB for each project; ensuring 1 TB of storage for one year requires 120 node-hours of resources, and resources should be requested accordingly.
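The unit and storage conversions quoted for TSUBAME3.0 can be combined into a simple budget check. This is an illustrative sketch of the arithmetic only, not an official tool, and the function name is our own:

```python
# TSUBAME3.0 accounting as quoted above (illustrative only):
# 1 unit = 1,000 node-hours; the per-project cap is 27 units; keeping 1 TB
# of storage for one year consumes 120 node-hours of the allocation.

UNIT_NODE_HOURS = 1_000
PROJECT_CAP_UNITS = 27
STORAGE_NODE_HOURS_PER_TB_YEAR = 120

def compute_budget(requested_units: int, storage_tb: int) -> int:
    """Node-hours left for computation after the storage surcharge."""
    if requested_units > PROJECT_CAP_UNITS:
        raise ValueError("exceeds the 27-unit per-project cap")
    total = requested_units * UNIT_NODE_HOURS
    storage_cost = storage_tb * STORAGE_NODE_HOURS_PER_TB_YEAR
    if storage_cost > total:
        raise ValueError("storage surcharge exceeds the requested units")
    return total - storage_cost

# A maximal 27-unit request keeping 50 TB of storage for the year:
remaining = compute_budget(requested_units=27, storage_tb=50)  # 21,000 node-hours
```

The point to note when planning a proposal is that storage is paid for out of the same node-hour pool as computation, so large storage requests reduce the compute budget.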
Information Technology Center, Nagoya University
(4) SGI UV2000 (software resources are the same as those of (2) Fujitsu PRIMERGY CX400/2550)
《Hardware resources》
24 TF (1,280 cores, 20 TiB shared memory, 10 TB memory-based file system)
3D visualization system: high-resolution 8K display system (185-inch, 16-panel tiled display), Full-HD circularly polarized binocular vision system (150-inch screen, projector, circularly polarized eyeglasses, etc.), head-mounted display system (MixedReality, VICON, etc.), dome-type display system
《Software resources》
【OS】SuSE Linux Enterprise Server
【Programming languages】Intel Fortran, C, C++, Python (2.6, 3.5)
IDL, ENVI (SARscape), ParaView, POV-Ray, NICE DCV (SGI), ffmpeg, ffplay, osgviewer, vmd (Visual Molecular Dynamics)
Maximum resource allocation to one project: 15 units or 86 node-years. 1 unit corresponds to 50 thousand node-hours (equivalent to a 180-thousand-yen charge). Large-scale storage can be used by converting 5,000 node-hours into 10 TB. All resources are shared with general users. Use of the 3D visualization system costs 100 thousand yen per research project (equivalent to an additional contribution).
Academic Center for Computing and Media Studies, Kyoto University
Cray XC40 (Camphor 2: Xeon Phi KNL nodes)
《Hardware resources》
① 128 nodes, 8,704 cores, 390.4 TFLOPS × throughout the year (each project is assigned up to 48 nodes throughout the year)
② 128 nodes, 8,704 cores, 390.4 TFLOPS × 20 weeks (each project is assigned up to 128 nodes × 4 weeks; the available amounts of computing resources are adjusted among the projects)
BLACS, LAPACK, ScaLAPACK), NetCDF
Cybermedia Center, Osaka University
(ii) PC Cluster for large-scale visualization (can be linked with the large-scale visualization system)
《Hardware resources》
- Resource per project: up to 6 node-years
- Computational nodes: 69 nodes (CPU: up to 31.1 TFLOPS, GPU: up to 69.03 TFLOPS), provided up to 53,000 node-hours in shared use. An arbitrary number of GPU resources can be attached to each node based on user requirements. Simultaneous use of this system and the large-scale visualization system is possible based on user requirements.
- Resource per project: general-purpose CPU nodes: up to 35 node-years; GPU nodes: up to 5 node-years; Xeon Phi nodes: up to 6 node-years; large-scale shared-memory nodes: up to 0.3 node-years
- Computational nodes: general-purpose CPU nodes: 236 nodes (CPU: 471.1 TFLOPS), provided up to 310,000 node-hours in shared use (because the processors on these nodes are the same as those of the GPU nodes, users can use up to 273 nodes for a job); GPU nodes: 37 nodes (CPU: 73.9 TFLOPS, GPU: 784.4 TFLOPS), provided up to 49,000 node-hours in shared use; Xeon Phi nodes: 44 nodes (CPU: 117.1 TFLOPS), provided up to 58,000 node-hours in shared use; large-scale shared-memory nodes: 2 nodes (CPU: 16.4 TFLOPS, memory: 12 TB), provided up to 2,600 node-hours in shared use
- Storage per project: 20 TB
Research Institute for Information Technology, Kyushu University
① ITO Subsystem A (Fujitsu PRIMERGY)
《Hardware Resources》
1) (Nearly dedicated use) The maximum resource allocated to one project is 32 nodes for a year; most of the resources are dedicated to the project. 32 nodes (1,152 cores), 110.59 TFLOPS
2) (Shared use) Up to 64 nodes can be used at the same time per project, shared with general users. 64 nodes (2,304 cores), 221.18 TFLOPS
《Software Resources》
【Compilers】Intel Cluster Studio XE(Fortran, C, C++), Fujitsu Compiler
② ITO Subsystem B (Fujitsu PRIMERGY)
《Hardware Resources》
(Nearly dedicated use) The maximum resource allocation per project is 16 nodes for a year; most of the resources are dedicated to the project.
16 nodes(576 cores), CPU 42.39TFLOPS + GPU 339.2TFLOPS,
including SSD
《Software Resources》
【Compilers】Intel Cluster Studio XE(Fortran, C, C++), Fujitsu Compiler,
CUDA
③ ITO Frontend (Virtual server / Physical server)
《Hardware Resources》
Standard Frontend:
1 node (36 cores), CPU 2.64 TFLOPS, Memory 384 GiB, GPU (NVIDIA Quadro P4000)
Large Frontend:
1 node (352 cores), CPU 12.39 TFLOPS, shared memory 12 GiB, GPU (NVIDIA Quadro M4000)
《Software Resources》
【Compilers】Intel Cluster Studio XE(Fortran, C, C++), CUDA
Storage per project: 10 TB (possible to add more)
Regarding ①/②, a server for pre/post-processing or visualization is available (Standard Frontend).
Estimated numbers of projects: ① 1): 2, ① 2): 8, ②: 3, ③: 5
(2) Other facilities/resources available for Joint Research Projects
The following facilities/resources, despite not being part of the HPCI-JHPCN system, are available
for Joint Research Projects.
JHPCN Institution
Computational Resources, Type of Use
Estimated number of Projects adopted
Information Initiative Center, Hokkaido University
《Hardware resources》
(1) Large-format printer
《Software resources》
《Usage》
12
Cyberscience Center, Tohoku University
《Hardware resources》
(1) Large-format printer
(2) 3D visualization system (12-screen large stereoscopic display system (1920×1080 (Full HD), 50-inch projection modules × 12 screens), visualization servers × 4, 3D visualization glasses)
《Software resources》
AVS/Express MPE(3D visualization software)
《Usage》
・ L2VPN Ready
・ On-demand L2VPN
10
Information Technology Center, the University of Tokyo
《Hardware resources》
(1) FENNEL (Real-time Data Analysis Nodes) × 4. At most four VMs or one bare-metal server are provided to each group; the provided VMs or bare-metal server are dedicated to the group.
(2) GPGPU (NVIDIA Tesla M60) is available on request. 10 Gbps network access.
Global Scientific Information and Computing Center, Tokyo Institute of Technology
《Hardware resources》
(1) Remote GUI environment: the VDI (Virtual Desktop Infrastructure)
system
If you are planning to use the VDI system, please contact us in advance.
《Software resources》
《Usage》
Information Technology Center, Nagoya University
《Hardware Resources》
(1) Visualization system: remote visualization system, on-site use (for data translation)
(2) Network connection up to 40GBASE. Note: Nagoya University provides only SFP+/QSFP ports, so you must prepare the optical modules, including the module for Nagoya University's network switch.
《Software Resources》
(1) Fujitsu PRIMERGY CX400/2550: Python + scikit-learn + TensorFlow, and other software executable on Linux
《Usage》
L2VPN ready: you can use an L2VPN network connection when connecting your own system to the network via hardware resource (2). We configure the SINET L2VPN connection to the outside university and the on-campus VLAN (with a connection between them).
Academic Center for Computing and Media Studies, Kyoto University
《Hardware resources》
(1) Virtual Server Hosting Standard Spec: CPU 2 Cores, Memory 8GB, Disk capacity 1TB
《Software resources》
(1) 【Hypervisor】VMware
【OS】CentOS7
【Application】Basic software prepared at the center and support for
introducing various open source software.
《Usage》
L2VPN ready: dedicated VLAN (on-campus) or SINET L2VPN (off-campus) connection
Cybermedia Center, Osaka University
《Hardware resources》
1. 24-screen Flat Stereo Visualization System
・ Installation location: Umekita Office of the Cybermedia Center, Osaka University
・ 50-inch FHD (1920 × 1080) stereo projection modules: 24 displays
・ Image-processing PC cluster: 7 computing nodes
・ * Priority is given to visualization by users of the “PC cluster for large-scale visualization”.
2. 15-screen Cylindrical Stereo Visualization System
・ Installation location: main building of the Cybermedia Center, Suita Campus, Osaka University
・ 46-inch WXGA (1366 × 768) LCDs: 15 displays
・ Image-processing PC cluster: 6 computing nodes
・ * Priority is given to visualization by users of the “PC cluster for large-scale visualization”.
《Software resources》
・ AVS Express/MPE, IDL (for both systems)
《Usage》
L2VPN Ready
Research Institute for Information Technology, Kyushu University
《Hardware resources》
(1) Visualization environment Visualization server (SGI Asterism), 4K projector, and 4K monitor
《Usage》
L2VPN Ready
Attachment 2
The following are possible examples of “research projects using both large-scale data and large-
capacity networks,” described in point ② of “1. Joint Research Areas” in this call for
proposals. The purpose of this attachment is to present, by example, how research resources
provided by JHPCN member institutions may be used. We welcome proposals of joint research
projects using both large-scale data and large-capacity networks that are not limited to these examples.
Information Initiative Center, Hokkaido University
Distributed systems (virtual private cloud systems) can be deployed using our cloud system / data
science cloud system, federated with computational resources of other universities or the
applicant’s laboratory and connected via SINET L2VPN or software VPN solutions.
Available resources
Supercomputer system, cloud system, data science cloud system (cf. Attachment 1)
How to use
Dedicated systems can be developed for collaborative research projects, employing physical
and virtual servers as virtual private clouds. Distributed systems can also be developed by
connecting our cloud resources with servers and storage in the users’ laboratories as hybrid
cloud systems. Users can access the systems not only via ssh/scp but also through web
browsers, using the virtual console provided by the cloud middleware.
Email address for inquiring about resource usage and joint research