Before installing Teamwork Cloud, ensure that the system requirements described in this chapter are met.

Cassandra installation
Teamwork Cloud requires a Cassandra installation. To learn more about the hardware requirements for Cassandra, see https://cassandra.apache.org/doc/latest/cassandra/operating/hardware.html

System requirements are dictated by the intended deployment and the overall load that the environment will experience. Keep the following in mind:
- The database (Cassandra) can be located on the same server as Teamwork Cloud or on a separate server.
- Storage requirements apply only to the node where the database is located.
- Teamwork Cloud hosting nodes can be virtualized without any issues, provided the host is not oversubscribed on its resources.

Recommended system requirements
Nodes containing both Teamwork Cloud and Cassandra:
Nodes containing only Cassandra:
Nodes containing only Teamwork Cloud:

Multi-Node Clusters
The recommended minimum sizing stated above applies to each node in a multi-node cluster.

SAN Storage
SAN storage should not be used for data or commit log volumes on Cassandra nodes. It will result in severe performance degradation that no amount of SAN or OS tuning can mitigate.

Minimal hardware requirements
For adequate Teamwork Cloud operation, your hardware should meet the following requirements:
- At least 3 separate disks, preferably SSD (NVMe): OS/Application, Data, and Commit logs.
- Depending on company backup procedures and infrastructure, an additional disk equal in size to the data disk may be required to store backup snapshots.

Software requirements
Teamwork Cloud supports the following operating systems:
- Linux 64-bit: RedHat 8, RedHat 9, Oracle Linux 8.
The Linux operating system is highly recommended for Teamwork Cloud deployment. Cassandra 4 does not have native Windows support; for more information, see https://www.datastax.com/dev/blog/cassandra-and-windows-past-present-and-future

For a fully working environment, you will also need the following:
- Open ports 1101, 2181, 2552, 7000, 7001, 7199, 9042, and 9142 between servers in a cluster (a connectivity check sketch follows the port table below).
- Open ports 3579, 8111, 8443, and 10002 (default) for clients. The port number 10002 can be changed according to the port assigned to secure connections between the client software and Teamwork Cloud.

The following table lists the ports that Teamwork Cloud services use and their descriptions:
Service                    Port         Description
FlexNet server (lmadmin)   1101         FLEXnet server port
                           8090         Default vendor daemon port (web browser management port)
                           27000-27009  Internal license server manager port
Cassandra                  7000         Internode cluster communication port (not used if TLS is enabled)
                           7001         Encrypted internode cluster communication port (used if TLS is enabled)
                           7199         JMX monitoring port of the Cassandra node
                           9042         Native client port used to connect to Cassandra and perform operations (used with the 2021x version and later)
                           9142         Native client port when SSL encryption is enabled (used when Cassandra is on a separate server or deployed as a multi-node cluster)
Teamwork Cloud             2552         Teamwork Cloud default remote server port
                           3579         Default Teamwork Cloud port when SSL is not enabled
                           8111         Teamwork Cloud REST API port
                           10002        Default port when SSL is enabled
Web Application Platform   8443         Web Application Platform port (Teamwork Cloud Admin, Collaborator…)
Zookeeper                  2181         Zookeeper internal port
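Firewall configuration differs between environments, so no specific firewall commands are prescribed here. As a quick sanity check, a short script such as the following sketch can confirm that the listed ports accept TCP connections from a given machine. The hostname twc-node1 is a placeholder, and the script is illustrative only, not part of the Teamwork Cloud distribution.

# port_check.py - minimal sketch for verifying that the Teamwork Cloud and Cassandra
# ports listed in the table above are reachable; hostnames below are placeholders.
import socket

# Ports required between servers in a cluster (from the table above).
CLUSTER_PORTS = [1101, 2181, 2552, 7000, 7001, 7199, 9042, 9142]
# Default ports that must be open for client connections.
CLIENT_PORTS = [3579, 8111, 8443, 10002]

def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # "twc-node1" is a placeholder; replace it with the actual server hostname or IP.
    host = "twc-node1"
    for port in CLUSTER_PORTS + CLIENT_PORTS:
        status = "open" if check_port(host, port) else "unreachable"
        print(f"{host}:{port} {status}")

Run the script from each machine that needs to reach the node; a port reported as unreachable usually indicates a blocking firewall rule or a service that is not yet running.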
For additional server capacity and performance recommendations, see https://cassandra.apache.org/doc/latest/cassandra/operating/hardware.html
If deploying on Amazon EC2
When deploying on Amazon EC2, we recommend the m5.2xlarge, r5.2xlarge, or i3.2xlarge instance types. Depending on the workload, you may want to move up to the 4xlarge sizes, but for most users the 2xlarge instances will suffice. The m5 instances meet the minimum system requirements and are acceptable for small deployments. The r5 instances provide more memory for the same CPU density. The i3 instances should be used for workloads with a higher level of user concurrency, due to the significantly better performance of their ephemeral NVMe storage.
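If you provision EC2 instances programmatically, the sketch below shows how one of the recommended instance types could be requested with the AWS SDK for Python (boto3). The AMI ID, key pair, security group, and region are placeholders, and boto3 itself is used only for illustration; it is not a Teamwork Cloud requirement.

# launch_twc_node.py - minimal sketch, assuming boto3 is installed and AWS credentials
# are configured; all identifiers below are placeholders, not values from this guide.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",             # placeholder RedHat/Oracle Linux AMI
    InstanceType="m5.2xlarge",                    # recommended size for small deployments
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",                         # placeholder key pair name
    SecurityGroupIds=["sg-0123456789abcdef0"],    # must allow the ports listed above
)
print(response["Instances"][0]["InstanceId"])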