Research Computing

Research computing services provide access to nationally recognized computing and data solutions for the research community through collaboration with premier computing centers. ARCSIM is part of the Coordinated Operating Research Entities (CORE) and is overseen by the Office of the Vice President for Research and the Dean of the Graduate School, providing assurance of Uniform Guidance (UG) service center compliance. Refer to the Office of Research Administration for more information about compliance considerations in developing proposals and executing research computing activities.

Our services include technical support and are available at nationally competitive rates. University of Maine System faculty and researchers can leverage negotiated agreements and competitive rate structures by contacting ARCSIM today.

Services Offered

  • CORE-compliant rate structures for CPU and GPU computing
  • Streamlined onboarding process
  • Educational opportunities (hands-on workshops, webinars)
  • Budget monitoring and web-based tools
  • Assistance with grant budget requests
  • Individualized help documentation
  • Recommendations on resources to enhance your research

ARCSIM's on-premises cluster computing capabilities provide system-wide resources that help researchers make discoveries in a vast array of scientific disciplines. Hardware is divided into two clusters with a variety of CPU- and GPU-enabled partitions.


Katahdin Legacy Cluster

The Katahdin cluster is currently in legacy status and will continue to be accessible to researchers for use of older packages and modules not available on Penobscot. If you are currently utilizing Katahdin for your research and would like to move your workload to the Penobscot cluster, or have questions about which cluster is appropriate for your needs, please contact ARCSIM at um.arcsim@maine.edu.

Penobscot Cluster

The Penobscot HPC system is the newer of the two clusters and includes the most recent GPU hardware additions to support AI and ML research needs. The cluster supports terminal-based SSH connections and also provides the Open OnDemand interface for access via web browser and interactive computing through GUI software such as RStudio, Jupyter, and MATLAB.

Technical Specifications

CPU Computing Nodes:

epyc partition: 1,344-core Dell AMD EPYC™ 3 Milan

  • 14 nodes have 96 cores per node and 512 GB RAM at 2.3 GHz
  • InfiniBand HDR100 connection at 100 Gbps
  • Total of 7 TB of RAM
  • Partition exists on both Katahdin and Penobscot clusters

epyc-hm partition: 128-core Dell AMD EPYC™ 3 Milan High-Memory

  • 4 nodes have 32 cores per node and 1 TB RAM at 3.0 GHz
  • InfiniBand HDR100 connection at 100 Gbps
  • Total of 4 TB of RAM

skylake partition: 288-core Intel

  • 8 nodes have 36 cores per node and 256 GB RAM
  • InfiniBand HDR100 connection at 100 Gbps
  • Total of 2 TB of RAM

haswell partition: 2,400+ Intel cores

  • 88 nodes have 24 or 28 cores per node and 64 or 128 GB RAM
  • InfiniBand HDR100 connection at 100 Gbps
  • Total of over 8 TB of RAM
  • Partition exists on both Katahdin and Penobscot clusters
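
The partition names above are what a job submission script targets. Below is a minimal sketch of submitting a batch job to the epyc partition, assuming the clusters use the Slurm scheduler (suggested by the "partition" terminology on this page, but worth confirming with ARCSIM); the job name, time limit, and workload are hypothetical placeholders.

    # Sketch: build and submit a Slurm batch script for the epyc partition.
    # Assumes Slurm and that sbatch is on the PATH; the partition name and
    # core count come from the specs above, everything else is a placeholder.
    import subprocess

    job_script = """\
    #!/bin/bash
    #SBATCH --job-name=cpu-demo
    #SBATCH --partition=epyc         # CPU partition listed above
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=96     # epyc nodes provide 96 cores each
    #SBATCH --time=01:00:00
    srun hostname
    """

    with open("cpu_demo.sbatch", "w") as f:
        f.write(job_script)

    # sbatch prints "Submitted batch job <id>" on success.
    subprocess.run(["sbatch", "cpu_demo.sbatch"], check=True)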

GPU Computing Nodes:

node-g105: NVIDIA DGX A100

  • 8 NVIDIA A100 GPUs with 40 GB RAM each
  • 128 cores/256 threads using AMD EPYC™ 2 Rome CPUs
  • Total of 1 TB of RAM
  • Available on the Penobscot cluster only

node-g104: NVIDIA RTX 2080 Ti

  • 8 NVIDIA RTX 2080 Ti GPUs in a Gigabyte node
  • Single 32-core AMD EPYC™ 1 Naples CPU
  • Total of 768 GB of RAM
  • Available on the Penobscot cluster only

node-g101: NVIDIA A100

  • 2 NVIDIA A100 GPUs with 80 GB RAM each
  • 32-core AMD EPYC™ Milan CPUs at 3.0 GHz
  • Total of 512 GB of RAM
  • Available on the Penobscot cluster only

node-g102: NVIDIA L40

  • 3 NVIDIA L40 GPUs with 48 GB RAM each
  • 32-core AMD EPYC™ Milan CPUs at 3.0 GHz
  • Total of 512 GB of RAM
  • Available on the Penobscot cluster only

node-g103: NVIDIA A30

  • 4 NVIDIA A30 GPUs with 24 GB RAM each
  • 32-core Intel Xeon Gold CPUs at 2.9 GHz
  • Total of 512 GB of RAM
  • Available on the Penobscot cluster only
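
A GPU request follows the same pattern as a CPU job but adds Slurm's generic-resource (gres) flag. In this sketch the partition name "gpu" is a hypothetical placeholder (the actual partition and GPU type labels on Penobscot should be confirmed with ARCSIM); the --gres syntax itself is standard Slurm.

    # Sketch: request one GPU via sbatch's --wrap flag, which wraps a
    # single command in a batch job. Partition name is a placeholder.
    import subprocess

    subprocess.run([
        "sbatch",
        "--partition=gpu",       # hypothetical name; confirm with ARCSIM
        "--gres=gpu:1",          # standard Slurm request for one GPU
        "--time=00:30:00",
        "--wrap", "nvidia-smi",  # report which GPU the job received
    ], check=True)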

If you would like to request on-premise computing or data storage, or to set up a consultation with our team, please contact us at um.arcsim@maine.edu or fill out our service request form.

Ohio Supercomputer Center (OSC)

The Ohio Supercomputer Center enables research and development in computational science and the applications of supercomputing. OSC research staff specialize in supercomputing, computational science, data management, biomedical applications, and a host of emerging disciplines.

Data Storage

Home Directory Storage: Each user account includes a private home directory with 500 GB of storage and a maximum of 1,000,000 files.

Project Storage (set up as a single repository for a defined group of users): Shared project storage for collaborating and sharing files and scripts among all lab and designated group members is available in 0.5 TB increments and may be modified at any time upon request.

For detailed information, please visit OSC’s storage documentation webpage.

OSC Campus Champion

ARCSIM serves as an OSC Campus Champion to facilitate access to nationally recognized computing and data resources. We provide direct support for researchers accessing HPC for their research needs. For more information, visit OSC’s program website.

Additional resources are available on the Ascend cluster by request for computationally intensive GPU jobs utilizing NVIDIA A100 GPUs. Detailed descriptions of each cluster are available at OSC’s cluster computing webpage.


Texas Advanced Computing Center (TACC)

TACC is a premier center of computational excellence in the U.S. Since 2001, TACC has been enabling discoveries and the advancement of science through the application of advanced research computing technologies.

Detailed descriptions of each system are available:

STAMPEDE3: system specs; user guide.

FRONTERA: system specs; user guide.

Storage

Home Directory Storage: Each user account includes a home directory with 500 GB of storage.

Project Data Storage: Corral is available for additional project storage needs. This space will never be purged. For detailed information, please visit the Corral User Guide.

Archival Data Storage: Ranch is a long-term tape storage system available for archiving project data. It provides redundant storage for project-related data, but it is not intended for active data and is not suitable for system backups. For detailed information, please visit the Ranch User Guide.
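
As an illustration, transfers to a tape archive like Ranch are typically done with scp or rsync over SSH. The sketch below assumes the ranch.tacc.utexas.edu hostname described in TACC's Ranch documentation; the username and directory names are hypothetical placeholders.

    # Sketch: archive a finished project directory to Ranch with rsync.
    # -a preserves permissions and timestamps, -v is verbose, -P shows
    # progress and allows interrupted transfers to resume.
    import subprocess

    subprocess.run([
        "rsync", "-avP",
        "results_2024/",  # local directory (trailing slash: copy contents)
        "username@ranch.tacc.utexas.edu:archive/results_2024/",
    ], check=True)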

If you would like to request a new project account for computing or data storage on OSC or TACC, or to set up a consultation with our team, please contact us at um.arcsim@maine.edu or fill out our service request form.

Cost-Based Resources

Many user-managed cloud solutions are available to researchers for on-demand computing, storage, and infrastructure to meet research needs. With a large number of free training resources and self-help documentation, these platforms can easily be managed and used for many project needs. If you are interested in cloud-based computing and storage resources, such as the major vendors listed below, or in additional platforms better suited to your research, please contact us at um.arcsim@maine.edu or fill out our service request form.

Amazon Web Services (AWS) is one of the largest and most secure cloud platforms in the world, with over 200 fully featured services offered from data centers globally, including S3-compatible storage. For more information about this resource visit the AWS website.

Google Cloud Platform (GCP) uses Google’s core infrastructure, data analytics, machine learning, and security technology, and lets researchers run apps on open-source solutions. For more information about this resource visit the Google Cloud website.

Digital Ocean has fifteen globally distributed data centers and a suite of products including managed hosting, virtual machines, Kubernetes, managed databases, and storage. For more information about this resource visit the Digital Ocean website.


For more information on no-cost nationally funded computing and data storage resources, click on one of the external partners below.

ACCESS (Advanced Cyberinfrastructure Coordination Ecosystem: Services and Support), formerly XSEDE, is an advanced computing and data resource program established and funded by the National Science Foundation to help researchers and educators, with or without supporting grants, utilize the nation’s advanced computing systems and services at no cost.

For information on ACCESS allocations for University of Maine System researchers, visit the program page, or reach out to ARCSIM for more information to help you take advantage of the national resources available through the ACCESS program.

ARCSIM serves as a local ACCESS Campus Champion resource to facilitate access and can provide consulting and assistance with securing allocations. For more information on ACCESS’s campus champions program, please visit the campus champion program site.

Services available

  • High-performance computing clusters (GPU and CPU-based)
  • Storage resources for managing and backing up large amounts of data
  • Cloud resources and infrastructure to launch and run virtual machines
  • Specialized support to streamline research

ARCSIM supports several different data storage solutions, including cloud-based resources and cluster-based backup options. Our storage services offer a competitively priced, UG-compliant alternative to storage on local platforms. ARCSIM provides technical assistance for the preferred partners listed below, but can also provide guidance for other platforms that best meet your research needs.

*UMS:IT offers free general-purpose storage options to all students, staff, and faculty campus-wide (e.g., Google Drive, Microsoft OneDrive). On their own, these options are not intended as long-term archival storage, and the risk of data loss is higher without a backup strategy.

Preferred Backup Storage Solutions

OSC project storage is available for all data backup needs, and can be combined with computing resources. No ingress/egress fees. Protected Data Service for controlled data available. Cost: [$8/TB/month for UMS researchers]

Wasabi S3 storage is an ARCSIM preferred vendor for cloud-based data storage needs; pay-as-you-go service. Cost: [$6.99/TB/month]
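
Because Wasabi exposes the S3 API, standard S3 tooling works once it is pointed at a Wasabi endpoint. The sketch below uses Python's boto3 library with Wasabi's documented us-east-1 endpoint, s3.wasabisys.com; the bucket name and file are hypothetical, and credentials are assumed to already be configured in your environment or AWS config files.

    # Sketch: upload a backup archive to Wasabi S3-compatible storage.
    # Only the endpoint URL differs from ordinary S3 usage.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.wasabisys.com",  # route S3 calls to Wasabi
    )
    s3.upload_file("dataset.tar.gz", "my-lab-bucket", "backups/dataset.tar.gz")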

If you are interested in other cloud-service providers, please reach out for more information. Different tiers of storage are available (transfer fees apply).

Training and Workshop Events

ARCSIM offers a number of educational opportunities, from in-person training sessions and online workshops to self-paced tutorials and presentations on a variety of topics. Staff can also offer customized training for your group or department: existing training can be tailored to specific needs, or new offerings can be developed. For more information about upcoming events offered by ARCSIM, please visit our News Page.

As an Ohio Supercomputer Center (OSC) Campus Champion member, ARCSIM has previously partnered with OSC staff to help support research and education through collaborative webinars and training opportunities. In addition, UMS researchers can attend any of the free virtual training events hosted by OSC on various HPC, programming, and analysis topics, for both beginners and advanced users. Some of the topics presented in the past include RNA-Sequencing Analysis, Linear Regression with R, Parallel Computing with MATLAB, Introduction to Python Environments, and Parallel R.

Upcoming OSC training events can be found on the OSC Events Page.

Previous Training Opportunities


Webinar: Introduction to R Programming Resources and High-Performance Computing at OSC

Friday, February 24, 12:00 p.m. to 1:00 p.m., live via Zoom. Join UMaine ARCSIM (Advanced Research Computing, Security & Information Management) and the Ohio Supercomputer Center for an introductory webinar, as we discuss opportunities to enhance cloud-based research needs on campus. We will provide an overview of the HPC services that are available to UMS […]

Save the Date: UMaine ARCSIM Hybrid Resource Seminar

Join UMaine ARCSIM for a hybrid resource seminar taking place in Stodder Hall Room 57 and over Zoom on Monday, November 28 from 12:00 p.m. to 1:30 p.m., as we discuss opportunities to enhance research needs on campus. We will provide an overview of the research computing resources and services that are available to faculty, staff and students, such as high-performance computing, data […]

Ohio Supercomputer Center and MathWorks to host “Machine Learning with MATLAB” workshop

One of UMaine ARCSIM’s partners, the Ohio Supercomputer Center (OSC), is excited to announce a workshop, “Machine Learning with MATLAB,” in collaboration with experts from the developer, MathWorks, on Thursday, Oct. 20, 2022, from 1:00 p.m. to 2:30 p.m. EST. The workshop will explore the fundamentals of machine learning using MATLAB, a programming and numeric computing platform that […]

Acknowledge Scientific Computing

Researchers who publish results obtained using ARCSIM resources and capabilities should include an acknowledgement in any publications that result from that work, including journal articles, presentations, reports, proceedings, and book chapters. If significant contributions are made to the results of the work, authorship should also be considered. Please use the text:
 
“This work was supported in part through the computational resources and staff expertise provided by Advanced Research Computing, Security, and Information Management (ARCSIM) at the University of Maine at Orono.”