We provide services with limited resources at **no cost** to all members affiliated with Brown. For advanced computing that requires extra resources, we charge a quarterly fee. The FY20 rates are listed below.
High Performance Computing Cluster (Oscar)
The number and size of jobs allowed on Oscar vary with both partition and type of user account. The following partitions are available to all Oscar users:
- Batch - General Purpose Computing
- GPU - GPU Nodes
- BigMem - Large Memory Nodes
|Account Type|Partition|CPU Cores|Memory (GB)|GPU|Max Walltime* (Hours)|Cost (Billed Quarterly)|
|---|---|---|---|---|---|---|
|Exploratory*| | | | | | |
|HPC Priority|Batch|208|1,500|2 Std.|96|$200|
|HPC Priority+|Batch|416|3,000|2 Std.|96|$400|
|Standard GPU Priority|GPU|16|192|4 Std.|96|$200|
|Standard GPU Priority+|GPU|16|384|8 Std.|96|$400|
|High End GPU Priority|GPU|16|256|4 High End|96|$400|
|Large Memory Priority|BigMem|32|2,048|-|96|$100|
|Condo Rental|Batch|512|2,048|-|No limit|$10,000 (Yearly)|
- Each account is assigned a 20 GB home directory and 512 GB of scratch space; scratch is purged every 30 days.
- * Priority accounts, and Exploratory accounts with a PI, also get a data directory; Exploratory accounts without a PI do not.
- Priority accounts have a higher Quality of Service (QOS), so their jobs start sooner.
- The maximum number of cores and duration may change based on cluster utilization.
- The HPC Priority account has a QOS allowing up to 208 cores, 1 TB of memory, and a per-job limit of 1,198,080 core-minutes. This allows one 208-core job to run for 96 hours, one 104-core job to run for 192 hours, or 208 single-core jobs to run for 96 hours each.
- The Exploratory account has a QOS allowing up to 2 GPUs and a per-job total of 5,760 GPU-minutes. This allows a 2-GPU job to run for 48 hours or a 1-GPU job to run for 96 hours.
- GPU definitions:
  - Std: Quadro RTX or lower
  - High End: Tesla V100
- For more technical details, please see this link.
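The core-minute and GPU-minute budgets in the notes above can be sanity-checked with a few lines of arithmetic. This is a minimal sketch; the `max_hours` helper is illustrative, and all limits are taken from the QOS figures quoted above:

```python
# Per-job resource budgets implied by the QOS limits described above.

def max_hours(units: int, budget_minutes: int) -> float:
    """Longest walltime (in hours) a job using `units` cores or GPUs
    can run before exhausting a budget expressed in unit-minutes."""
    return budget_minutes / units / 60

# HPC Priority: up to 208 cores, 1,198,080 core-minutes per job.
CORE_BUDGET = 208 * 96 * 60          # 208 cores * 96 h * 60 min = 1,198,080
print(max_hours(208, CORE_BUDGET))   # 96.0  -> full-width job runs 96 hours
print(max_hours(104, CORE_BUDGET))   # 192.0 -> half the cores, twice the walltime

# Exploratory: up to 2 GPUs, 5,760 GPU-minutes per job.
GPU_BUDGET = 2 * 48 * 60             # 2 GPUs * 48 h * 60 min = 5,760
print(max_hours(2, GPU_BUDGET))      # 48.0
print(max_hours(1, GPU_BUDGET))      # 96.0
```

The same trade-off applies across the budget: halving the cores or GPUs a job uses doubles the walltime it can consume.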
|Service|Description|Cost|
|---|---|---|
|General Support|Limited code troubleshooting, training, office hours; limited to 1 week per year|$0|
|Advanced Support|Any staff services requiring more than 1 week's effort per year|$85/hour|
|Project Collaboration|Percent time of a specific staff member charged directly to the grant|% FTE|
Research Data Storage
- 1 TB per Brown faculty member: free
- 10 TB per awarded grant, at the request of the Brown PI. An active grant account number is required to provide this allocation, and the data will be migrated to archive storage at the end of the grant.
- Additional Storage Allocation
  - Rdata: $100 / Terabyte / Year
- Stronghold Storage: $100 / Terabyte / Year
- Campus File Storage (replicated): $100 / Terabyte / Year
- Campus File Storage (non-replicated): $50 / Terabyte / Year
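The per-terabyte rates above compose linearly, so a yearly bill for a mix of allocations is a simple sum. A small sketch (the rate table keys and the `annual_cost` helper are illustrative, not a CCV API; rates are the FY20 figures listed above):

```python
# Annual cost estimate for additional storage at the FY20 rates above.
# Dictionary keys and the helper name are hypothetical, for illustration only.
RATES_PER_TB_YEAR = {
    "rdata": 100,
    "stronghold": 100,
    "campus_replicated": 100,
    "campus_nonreplicated": 50,
}

def annual_cost(allocations_tb: dict) -> int:
    """Sum the yearly charge (USD) for a mix of storage allocations, in TB."""
    return sum(RATES_PER_TB_YEAR[kind] * tb for kind, tb in allocations_tb.items())

# e.g. 5 TB of Rdata plus 10 TB of non-replicated campus storage:
print(annual_cost({"rdata": 5, "campus_nonreplicated": 10}))  # 1000
```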
Pooled Storage Allocations
Transitioning from an essentially free service to one in which some of Brown's storage costs are recovered from other sources will be challenging for some researchers. To smooth the transition, some research groups have proposed a variation on the individual payment model: all of the researchers' individual and grant allocations are pooled under the umbrella of the group, and billing takes place at the group/center/department level. CCV is happy to accommodate such a group payment/billing plan.
To set up a pooled plan, interested departments/centers/institutes should send CCV a list of the researchers associated with the group, along with all grants (and their end dates) for the group's PIs; CCV will then generate a single group-level bill. Please also let us know who will handle the invoicing. We appreciate your patience as CCV works to implement a new, sustainable model for recovering part of the cost of large-scale data storage while still providing a large amount of storage at no cost.