Services Offered

  • CORE-compliant rate structures for GPU and CPU computing
  • Streamlined onboarding process
  • Educational opportunities (hands-on workshops, webinars)
  • Budget monitoring and web-based tools
  • Grant budget requests
  • Individualized help documentation
  • Recommendations on resources to enhance your research

Preferred Computing Partners

ARCSIM's research computing services provide access to nationally recognized computing and data solutions for the research community through collaboration with premier computing centers. ARCSIM is part of the Coordinated Operating Research Entities (CORE) and is overseen by the Office of the Vice President for Research and the Dean of the Graduate School, providing assurance of UG Service Center compliance. Refer to the Office of Research Administration for more information about compliance considerations when developing proposals and executing research computing activities.

Our services include technical support and are available at nationally competitive rates. University of Maine System faculty and researchers can leverage negotiated agreements and competitive rate structures by contacting ARCSIM today. If you are looking for additional services not shown on this page, please reach out for a consultation.


Ohio Supercomputer Center (OSC)

The Ohio Supercomputer Center enables research development in computational science and the applications of supercomputing. OSC is equipped with research staff specializing in the fields of supercomputing, computational science, data management, biomedical applications, and a host of emerging disciplines.

ARCSIM serves as an OSC Campus Champion to facilitate access to nationally recognized computing and data resources. We provide direct support to researchers who want to access and utilize HPC for their research needs. For more information on OSC’s Campus Champions program, visit OSC’s program website.

“The Ohio Supercomputer Center’s (OSC) Campus Champions program is composed of high performance computing (HPC) advocates at academic institutions. Campus Champions serve as local proponents for access and utilization of OSC resources on their campuses.” – OSC

Hands-On Workshops

OSC regularly offers hands-on training opportunities for anyone interested in learning more about a particular data or programming topic, or to get hands-on experience using the OSC platform. Attendance is free, and workshops are offered virtually. OSC also offers online training courses that are self-paced and can be completed at any time. For more information, visit the training page, or the posted upcoming events.

Topics previously offered

  • RNA-sequencing Data Analysis
  • Introduction to Python Environments
  • Introduction to Supercomputing
  • Linear Regression with R
  • Batch System at OSC (see the sketch after this list)
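
To give a flavor of the batch-system topic above, the sketch below is our own illustration (not OSC course material) of submitting work to a Slurm-style batch scheduler from Python. The account code, module name, and analysis script are placeholders, and the sketch assumes the sbatch command is available on a cluster login node.

    # Minimal sketch: compose a Slurm job script and submit it with sbatch.
    # The account code, module name, and analysis script are placeholders.
    import subprocess
    from pathlib import Path

    lines = [
        "#!/bin/bash",
        "#SBATCH --job-name=example_job",
        "#SBATCH --account=PAS0000",      # placeholder project/charge code
        "#SBATCH --nodes=1",
        "#SBATCH --ntasks-per-node=4",
        "#SBATCH --time=01:00:00",
        "module load python",             # module name is illustrative
        "python analyze.py",              # your analysis script
    ]
    Path("example_job.sbatch").write_text("\n".join(lines) + "\n")

    # sbatch prints the assigned job ID on success; check=True raises on failure.
    result = subprocess.run(["sbatch", "example_job.sbatch"],
                            capture_output=True, text=True, check=True)
    print(result.stdout.strip())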

Computing Hardware

Pitzer
  • Hardware: Dell Intel Gold 6148
  • # Cores: 10,240
  • Memory (min.): 192 GB per node
  • GPU capability: 64 NVIDIA V100

Owens
  • Hardware: Dell Intel Xeon E5-2680 v4
  • # Cores: 23,392
  • Memory (min.): 128 GB per node
  • GPU capability: 160 NVIDIA P100

Additional resources are available on the Ascend cluster by request for computationally intensive GPU jobs utilizing NVIDIA A100 GPUs. Detailed descriptions of each cluster are available at OSC’s cluster computing webpage.

Software

A suite of preinstalled software is available; see the complete list of current software. Most applications are available without fee for academic use.

Data Storage

Home Directory Storage: Each user account includes a private home directory with 500 GB of storage and a maximum of 1,000,000 files.
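
As a rough illustration of how those limits can be checked, the short Python sketch below (ours, not part of OSC's tooling) walks a directory tree and reports total size and file count against the stated 500 GB and 1,000,000-file quota; the starting path is a placeholder.

    # Minimal sketch: tally directory usage against the stated home quota
    # (500 GB and 1,000,000 files); the starting path is a placeholder.
    import os

    QUOTA_BYTES = 500 * 1024**3      # 500 GB
    QUOTA_FILES = 1_000_000          # maximum number of files

    def usage(root):
        total_bytes = total_files = 0
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                try:
                    total_bytes += os.path.getsize(os.path.join(dirpath, name))
                    total_files += 1
                except OSError:
                    pass             # skip unreadable or vanished files
        return total_bytes, total_files

    used, count = usage(os.path.expanduser("~"))
    print(f"{used / 1024**3:.1f} GB of {QUOTA_BYTES / 1024**3:.0f} GB used, "
          f"{count:,} of {QUOTA_FILES:,} files")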

Project Storage (set up as a single repository for a defined group of users): Shared project storage for collaborating and sharing files and scripts among all lab and designated group members is available in 0.5 TB increments and may be modified at any time upon request.

For detailed information, please visit OSC’s storage documentation webpage.



Texas Advanced Computing Center (TACC)

The Texas Advanced Computing Center (TACC) is a premier center of computational excellence in the U.S. Since 2001, TACC has been enabling discoveries and the advancement of science through the application of advanced research computing technologies.


Detailed descriptions of each system are available:

  • STAMPEDE2: system summary; user guide.
  • MAVERICK2: user guide.
  • LONGHORN: system summary; user guide.

Software

A suite of software is available; see the complete list of current software. Most applications are available without fee for academic use.

Storage

Home Directory Storage: Each user account includes a home directory with 500 GB of storage.

Project Data Storage: Corral is available for additional project storage needs; this space is never purged. For users who wish to back up data to more than one system, separate Archival Data Storage is available. For detailed information, please visit the Corral User Guide. Competitive rates apply to Corral storage, and pricing information is available from ARCSIM.

Archival Data Storage: Ranch is a long-term tape storage system available for archiving project data. This space is not intended for active data, nor is it suitable for system backups. The Ranch system provides redundant data storage for project-related data. For detailed information, please visit the Ranch User Guide. Competitive rates apply to Ranch storage, and pricing information is available from ARCSIM.
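
Tape archives generally handle a few large files better than many small ones, so one common preparation step, shown in the hypothetical Python sketch below with placeholder paths, is to bundle a project directory into a single compressed tarball before copying it to Ranch; the transfer itself should follow the Ranch User Guide.

    # Minimal sketch: bundle a project directory into one compressed tarball
    # before transfer; paths are placeholders, and the actual copy to Ranch
    # should follow the Ranch User Guide.
    import tarfile
    from pathlib import Path

    project_dir = Path("results/experiment_2023")   # directory to archive
    archive_path = Path("experiment_2023.tar.gz")

    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(project_dir, arcname=project_dir.name)

    size_mb = archive_path.stat().st_size / 1024**2
    print(f"Wrote {archive_path} ({size_mb:.1f} MB)")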

Computing Hardware

STAMPEDE2
  • Machine description: 4,200 Intel Knights Landing 68-core nodes; 1,736 Intel Skylake 48-core nodes
  • Memory: Knights Landing nodes – 96 GB of DDR RAM and 16 GB of MCDRAM; Skylake nodes – 192 GB of RAM per node
  • GPU capability: NA

MAVERICK2
  • Machine description: 24 GTX compute nodes with Intel Xeon E5-2620 v4 CPUs; 4 V100 compute nodes with Intel Xeon Platinum 8160 CPUs; 3 P100 nodes with Intel Xeon Platinum 8160 CPUs
  • Memory: GTX nodes – 128 GB of RAM; V100 nodes – 192 GB of RAM; P100 nodes – 192 GB of RAM
  • GPU capability: GTX nodes – 4 NVIDIA GTX 1080 Ti GPUs per node; V100 nodes – 2 NVIDIA V100 adapters; P100 nodes – 2 NVIDIA P100 adapters

LONGHORN
  • Machine description: IBM Power System AC922 nodes with IBM POWER9 processors
  • Memory: GPU nodes – 256 GB of RAM; GPU large-memory nodes – 512 GB of RAM
  • GPU capability: 96 V100 nodes with 4 GPUs per node; 8 large-memory V100 nodes, each with 4 GPUs per node


ACCESS (Advanced Cyberinfrastructure Coordination Ecosystem: Services and Support)

ACCESS, formerly XSEDE, is an advanced computing and data resource program established and funded by the National Science Foundation to help researchers and educators, with or without supporting grants, utilize the nation’s advanced computing systems and services – at no cost.

For information on ACCESS allocations for University of Maine System researchers, visit the program page, or reach out to ARCSIM for help taking advantage of the national resources available through the ACCESS program.

ARCSIM serves as a local ACCESS Campus Champion resource to facilitate access and can provide consulting and assistance with securing allocations. For more information on the ACCESS Campus Champions program, please visit the program site.

Services available

  • High-performance computing clusters (GPU and CPU-based)
  • Storage resources for managing and storing large amounts of research data
  • Cloud resources and infrastructure to launch and run virtual machines
  • Specialized support to streamline research