Both Teamwork Cloud (TWCloud) and Cassandra installations are required. This section contains system requirements for installing TWCloud.
Recommended system requirements
System requirements are dictated by the intended deployment, taking into account the overall load the environment will experience:
Number of concurrent users
Level of activity (commits) per day
Overall number and size of the projects stored in Teamwork Cloud.
From a hardware perspective, the database (Cassandra) can be located on the same server as Teamwork Cloud or on its own dedicated server. Storage requirements apply only to the node where the database is located.
Ideally, the node hosting Cassandra should be a physical machine, since virtualization introduces performance degradation. Nodes running Cassandra should have direct-attached (DAS) SSD drives. The best performance is obtained with NVMe drives. Presently, there are hardware limitations on the size of NVMe drives as well as the number of NVMe drives that can be installed in a single machine. Therefore, if the expected number and size of projects is significant, SAS SSDs backed by a high-speed caching controller may be a more suitable choice. For ease of maintenance and reduction of risk, we recommend that the volumes reside on RAID-1 or RAID-10. If RAID is not used, the failure of a single drive will result in a downed node, impacting the enterprise. By deploying on RAID volumes, a drive failure will not affect the application and the failed drive can be replaced with zero downtime.
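As a quick sanity check on Linux, you can confirm that the Cassandra volumes sit on non-rotational, direct-attached devices (a sketch; device names and transports vary by system):

    # ROTA=0 indicates a non-rotational (SSD/NVMe) device;
    # TRAN shows the transport (nvme, sata, sas for DAS)
    lsblk -d -o NAME,TYPE,SIZE,ROTA,TRAN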
Nodes hosting Teamwork Cloud can be virtualized without any issues, provided the host is not oversubscribed on its resources.
Nodes containing both Teamwork Cloud and Cassandra
96-128 GB ECC RAM
>=16 processor threads (such as E5-1660)
>1 TB SSD DAS storage
Nodes containing only Cassandra
48-64 GB ECC RAM
>=8 processor threads (such as E5-1620)
>1 TB SSD DAS storage
Nodes containing only Teamwork Cloud
48-64 GB ECC RAM
>=8 processor threads (such as E5-1620)
>250 GB storage
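To confirm a node meets the sizing above, a minimal check on Linux might look as follows (assuming the default Cassandra data path /var/lib/cassandra):

    nproc                      # processor threads: >=8, or >=16 when co-hosting Cassandra
    free -h                    # RAM: 48-64 GB, or 96-128 GB when co-hosting Cassandra
    df -h /var/lib/cassandra   # data volume: >1 TB on Cassandra nodes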
Multi-Node Clusters
The recommended minimum sizing stated above applies to each node in a multi-node cluster.
SAN Storage
SAN storage should not be used on Cassandra nodes for data or commitlog volumes, as it results in severe performance degradation. No amount of SAN or OS tuning will mitigate this.
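For reference, the data and commit log locations are configured in cassandra.yaml; a minimal sketch, assuming illustrative mount points /data and /commitlog on separate direct-attached SSD volumes:

    # cassandra.yaml (excerpt) -- paths are illustrative
    data_file_directories:
        - /data/cassandra/data          # DAS SSD volume, never SAN
    commitlog_directory: /commitlog/cassandra/commitlog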
Minimum server system requirements
8 processor cores, i.e., a quad-core hyper-threaded CPU (such as Intel E3-1230 or faster).
32 GB RAM (a motherboard with ECC RAM is always preferred on any critical database server).
At least 3 separate disks, preferably SSD (NVMe): OS/application, data, and commit logs. Depending on company backup procedures and infrastructure, an additional disk, equal in size to the data disk, may be required for storing backup snapshots.
Linux (RedHat/CentOS 7, 64-bit), Windows Server 2012 R2, or Windows Server 2016.
Oracle Java/OpenJDK 8 (1.8.0_202 or higher).
A FlexNet License Server.
Open ports 2552, 7000, 7001, 7199, 9042, 9160, and 9142 between servers in a cluster; open ports 3579, 8111, 8443, and 8555 (defaults) to clients, as well as the port assigned to secure connections between the client software and Teamwork Cloud (see the firewall sketch after this list).
Static IP address for each node.
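As an illustration only, the required ports could be opened with firewalld on RedHat/CentOS 7 as follows (default zone assumed; adjust for your environment and add any custom secure-connection port):

    # Inter-node cluster ports
    firewall-cmd --permanent --add-port={2552,7000,7001,7199,9042,9160,9142}/tcp
    # Default client-facing ports
    firewall-cmd --permanent --add-port={3579,8111,8443,8555}/tcp
    firewall-cmd --reload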
Compatibility Remark
TWCloud 19.0 requires Cassandra 3.11.x.
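To verify the Cassandra version on an existing node, you can use nodetool, which ships with Cassandra:

    nodetool version    # should report ReleaseVersion: 3.11.x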
For additional server recommendations regarding capacity and performance, please see the article at the following link:
Currently, if deploying on Amazon EC2, we recommend m5.2xlarge, r5.2xlarge, or i3.2xlarge instances. Depending on the workload, you may want to move up to the .4xlarge instances, but for the vast majority of users the .2xlarge instances will suffice. m5 instances meet the minimum system requirements, which is acceptable for small deployments. r5 instances provide more memory for the same CPU density. i3 instances should be used when workloads have a higher level of user concurrency, due to the significantly better performance of their ephemeral NVMe storage.
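As an illustration, such an instance could be launched with the AWS CLI; the AMI, key pair, and security group below are placeholders you must replace with your own values:

    # Placeholders: supply your own RHEL/CentOS 7 AMI, key pair, and security group.
    # The security group must allow the cluster and client ports listed earlier.
    aws ec2 run-instances \
        --instance-type r5.2xlarge \
        --image-id ami-xxxxxxxx \
        --key-name my-key \
        --security-group-ids sg-xxxxxxxx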