JupiterOne takes the confidentiality and integrity of its customer data very seriously. As stewards and partners of JupiterOne Customers, we strive to ensure data is protected from unauthorized access and that it is available when needed. The following policies drive many of our procedures and technical controls in support of the JupiterOne mission of data protection.
Production systems that create, receive, store, or transmit Customer data (hereafter “Production Systems”) must follow the requirements and guidelines described in this section.
JupiterOne policy requires that:
(a) Data must be handled and protected according to its classification requirements and following approved encryption standards, if applicable.
(b) Security controls, including authentication, authorization, data encryption, and auditing, should be applied according to the highest classification of data in a given repository. Whenever possible, store data of the same classification in a given data repository and avoid mixing sensitive and non-sensitive data in the same repository.
(c) Workforce members shall not have direct administrative access to production data during normal business operations. Exceptions include emergency operations such as forensic analysis and manual disaster recovery.
(d) All Production Systems must disable services that are not required to achieve the business purpose or function of the system.
(e) All access to Production Systems must be logged, following the JupiterOne logging standard.
(f) All Production Systems must have security monitoring enabled, including activity and file integrity monitoring, vulnerability scanning, and/or malware detection, if applicable.
Data is classified and handled according to §Data Management.
Critical, confidential, and internal data will be tagged upon creation, if tagging is supported. Each tag marks the data according to the JupiterOne data classification scheme, which then maps to a protection level for encryption, access control, backup, and retention. Data classification may alternatively be identified by its location/repository. For example, source code in JupiterOne’s GitHub repos is considered “Internal” by default, even though a tag is not directly applied to each source file.
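The tag-to-protection-level mapping described above can be sketched as a simple lookup. This is an illustrative sketch only; the tag names and control values below are assumptions, not JupiterOne's actual classification scheme.

```python
# Hypothetical mapping from classification tag to protection level.
# Tag names and control values are illustrative, not JupiterOne's actual scheme.
CLASSIFICATION_CONTROLS = {
    "critical":     {"encryption": "AES-256-GCM", "access": "need-to-know",  "backup": "daily"},
    "confidential": {"encryption": "AES-256-GCM", "access": "role-based",    "backup": "daily"},
    "internal":     {"encryption": "in-transit",  "access": "all-employees", "backup": "weekly"},
}

def controls_for(tag: str) -> dict:
    """Resolve the protection level (encryption, access, backup) for a tag."""
    try:
        return CLASSIFICATION_CONTROLS[tag.lower()]
    except KeyError:
        # Unknown or untagged data defaults to the most restrictive handling.
        return CLASSIFICATION_CONTROLS["critical"]
```

Defaulting unknown tags to the most restrictive level mirrors the policy's principle of applying controls according to the highest classification of data in a repository.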
Critical and confidential data is always stored and transmitted securely, using approved encryption standards.
All systems that process and store sensitive data follow the provisioning process, configuration, change management, patching and anti-malware standards as defined in §Configuration and Change Management.
JupiterOne utilizes Amazon Web Services in the US East (Ohio) region by default. Data is replicated across multiple regions for redundancy and disaster recovery.
All JupiterOne employees, systems, and resources adhere to the following standards and processes to reduce the risk of compromise of production data:
JupiterOne employee access to production is disabled by default and guarded by an approval process. When a request is approved, temporary access to production is granted. Production access requests are reviewed by the security team on a case-by-case basis.
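The approval-gated, temporary nature of production access can be sketched as a time-limited grant. The names, fields, and eight-hour duration below are assumptions for illustration, not JupiterOne's actual tooling.

```python
# Illustrative sketch of approval-gated, time-limited production access.
# The grant structure and default duration are assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class AccessGrant:
    user: str
    approved_by: str
    expires_at: datetime

    def is_active(self, now: Optional[datetime] = None) -> bool:
        """Access is denied automatically once the grant expires."""
        return (now or datetime.now(timezone.utc)) < self.expires_at

def grant_production_access(user: str, approver: str, hours: int = 8) -> AccessGrant:
    """Issue a temporary grant after security-team approval."""
    expires = datetime.now(timezone.utc) + timedelta(hours=hours)
    return AccessGrant(user=user, approved_by=approver, expires_at=expires)
```

Because access is disabled by default, the absence of an active grant is the deny state; no standing administrative credential exists to revoke.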
Customer data is logically separated at the database/datastore level using a unique ‘account’ identifier for each customer. The separation is enforced at the API layer: the client must authenticate against a chosen account, and that account’s unique identifier is embedded in the access token and used by the API to restrict data access to that customer. All subsequent system queries and API actions then include the account identifier.
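The account-level separation described above can be sketched as a query filter derived from the access token. The token format and in-memory datastore here are hypothetical simplifications.

```python
# Minimal sketch of account-level tenant isolation enforced at the API layer.
# The token shape and datastore rows are hypothetical.
DATASTORE = [
    {"account": "acct-1", "finding": "open-port"},
    {"account": "acct-2", "finding": "stale-key"},
]

def query_findings(access_token: dict) -> list:
    """Every query is scoped to the account id embedded in the access token.

    The account id is set at authentication time and cannot be supplied by
    the caller as a query parameter, so cross-account reads are impossible
    through this path.
    """
    account_id = access_token["account"]
    return [row for row in DATASTORE if row["account"] == account_id]
```

The key design choice is that the tenant filter comes from the authenticated token, not from user input, so every downstream query inherits the restriction.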
For details on the backup and recovery process, see controls and procedures defined in §Data Management.
JupiterOne uses AWS CloudWatch and CloudTrail to monitor the entire cloud service operation. If a system failure triggers an alarm, key personnel are notified by text, chat, and/or email message so they can take appropriate corrective action. When further support is necessary, issues are escalated through an on-call rotation for major services.
JupiterOne uses the AIDE security agent to monitor production systems. The agents monitor system activity, generate alerts on suspicious activity, and report vulnerability findings to a centralized management console.
The security agent is built into the Amazon Machine Images (AMIs) used in JupiterOne production AWS environments.
All databases, data stores (where applicable), and file systems are encrypted using the Advanced Encryption Standard (AES) cryptographic algorithm in Galois/Counter Mode (GCM) with 256-bit keys. Separate keys shall be used for each storage type. These keys are rotated periodically on an automated basis per the AWS Key Management Service documentation.
Encryption and key management for local disk encryption of end-user devices follow the defined best practices for each major Operating System flavor:
| OS | Full Disk Encryption Mechanism |
|----|--------------------------------|
All external data transmission is encrypted end-to-end using encryption keys managed by JupiterOne. This includes, but is not limited to, cloud infrastructure and third-party vendors and applications.
Transmission encryption keys and systems that generate keys are protected from unauthorized access. Transmission encryption key materials are protected with access controls, and may only be accessed by privileged accounts.
Transmission encryption keys use a minimum of 4096-bit RSA keys, or keys and ciphers of equivalent or higher cryptographic strength (e.g., 256-bit AES session keys in the case of IPSec encryption).
Transmission encryption keys are limited to use for one year and then must be regenerated.
For all JupiterOne Platform APIs, authentication, authorization, and auditing are enforced for all remote systems sending, receiving, or storing data.
System logs are kept of all production data access and transmission. These logs must be available for audit.
All internet and intranet connections are encrypted and authenticated using TLS 1.2+ (a strong protocol), ECDHE_RSA with P-256 (a strong key exchange), and AES_128_GCM (a strong cipher).
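A client-side TLS context matching the policy's minimum (TLS 1.2+, certificate verification enabled) can be sketched with Python's standard `ssl` module. The exact server-side cipher and key-exchange configuration is not shown here and would live in the load balancer or server config.

```python
# Sketch of a TLS context enforcing the policy's floor of TLS 1.2+.
# Server-side cipher selection (ECDHE_RSA, AES_128_GCM) is configured
# elsewhere; this shows only the client-side minimums.
import ssl

def make_tls_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()            # certificate verification on
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.1 and below
    return ctx
```

`create_default_context()` also enables hostname checking, so connections made with this context are both encrypted and authenticated, as the policy requires.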
Data in Use, sometimes known as Data in Process, refers to active data being processed by systems and applications which is typically stored in a non-persistent digital state such as in computer random-access memory (RAM), CPU caches, or CPU registers.
Protection of data in use relies on application layer controls and system access controls. See §Secure Software Development and Product Security and §Access for details.
JupiterOne applications implement logical account-level data segregation to protect data in a multi-tenancy deployment. In addition, JupiterOne applications may incorporate advanced security features such as Runtime Application Self Protection (RASP) modules and fine-grained Access Control (e.g. IAM-like policies) for protection of data in use.
JupiterOne uses AWS Key Management Service (KMS) for encryption key management.
KMS keys are unique to JupiterOne environments and services.
KMS keys are automatically rotated yearly.
JupiterOne uses AWS Certificate Manager (ACM) and Let's Encrypt for certificate management.
Certificates are renewed automatically.
The Security Team monitors certificates for expiration, potential compromise, and use/validity. A certificate revocation process is invoked if the certificate is no longer needed or upon discovery of potential compromise.
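The expiration-monitoring check above can be sketched as a comparison against a renewal window. The 30-day threshold below is an assumption for illustration, not a documented JupiterOne value.

```python
# Illustrative expiry check for certificate monitoring.
# The 30-day renewal threshold is an assumed value.
from datetime import datetime, timedelta, timezone
from typing import Optional

RENEWAL_THRESHOLD = timedelta(days=30)

def needs_renewal(not_after: datetime, now: Optional[datetime] = None) -> bool:
    """Flag a certificate whose notAfter date falls within the renewal window."""
    now = now or datetime.now(timezone.utc)
    return not_after - now <= RENEWAL_THRESHOLD
```

In practice ACM and Let's Encrypt renew automatically; a check like this serves as an independent alert in case automated renewal silently fails.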
When appropriate, JupiterOne engineering should implement “Versioning” and “Lifecycle” policies, or an equivalent data management mechanism, so that direct edit and delete actions on the data are not allowed. This prevents accidental or malicious overwrite and protects against human error and cyberattacks such as ransomware.
In AWS, production IAM and S3 bucket policies are implemented accordingly when the environments are configured. When changes must be made, a new version is created instead of editing and overwriting existing data.
All edits create a new version and old versions are preserved for a period of time defined in the lifecycle policy.
Data objects are “marked for deletion” when deleted so that they are recoverable if needed within a period of time defined according to the data retention policy.
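The versioning, retention, and delete-marker behavior above can be sketched as an S3 lifecycle configuration in the dict shape boto3 expects. The 365-day retention value is an illustrative assumption; actual values come from the data retention policy.

```python
# Sketch of an S3 lifecycle configuration (boto3-style dict) that preserves
# old versions for a retention window and cleans up expired delete markers.
# The 365-day value is an assumed illustration, not the actual policy value.
LIFECYCLE_CONFIGURATION = {
    "Rules": [
        {
            "ID": "retain-old-versions",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to all objects in the bucket
            # Edits create new versions; noncurrent versions are preserved,
            # then removed after the retention period.
            "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
            # "Deleted" objects become delete markers and remain recoverable;
            # markers with no remaining versions are cleaned up automatically.
            "Expiration": {"ExpiredObjectDeleteMarker": True},
        }
    ]
}
# With versioning enabled on the bucket, this would be applied via
# s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=LIFECYCLE_CONFIGURATION)
```

Pairing bucket versioning with a rule like this is what makes deletes reversible: a delete only writes a marker, and the prior version stays retrievable until the retention window lapses.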
Data is archived offsite, i.e., to a separate AWS account and/or region.
Additionally, all access to customer data is authenticated and audited via logging at the infrastructure, system, and/or application level.