JupiterOne standardizes and automates configuration management through the use of automation scripts as well as documentation of all changes to production systems and networks. Automation tools such as Terraform automatically configure all JupiterOne systems according to established and tested policies, and are used as part of our Disaster Recovery plan and process.
JupiterOne policy requires that:
(a) ALL production changes, including but not limited to software deployment, feature toggle enablement, network infrastructure changes, and access control authorization updates, must go through the approved change management process.
(b) Each production change must maintain complete traceability to fully document the request, including requestor, date/time of change, actions taken and results.
(c) Each production change must be fully tested prior to implementation.
(d) Each production change must include a response plan for dealing with failure should the change produce unwanted results.
(e) Each production change must include proper approval:
Configuration management is automated using industry-recognized tools like Terraform to enforce secure configuration standards.
All infrastructure changes to production systems, network devices, and firewalls are reviewed and approved by the Security Team before they are implemented, to ensure they comply with business and security requirements.
All changes to production systems are tested before they are implemented in production.
Implementation of approved changes is only performed by authorized personnel.
Tooling is used to generate an up-to-date system inventory:
JupiterOne uses the CIS Benchmarks published by the Center for Internet Security as a baseline for hardening production systems.
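A CIS-style baseline check can be sketched as a comparison of a system's actual configuration against the approved values. The settings, values, and sample config below are illustrative examples, not JupiterOne's actual hardening baseline:

```python
# Illustrative sketch: verify a few CIS-style SSH hardening settings.
# The expected settings/values here are examples only, not the real baseline.
EXPECTED = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
    "Protocol": "2",
}

def check_sshd_config(config_text):
    """Return findings for settings that deviate from EXPECTED."""
    actual = {}
    for line in config_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        parts = line.split(None, 1)
        if len(parts) == 2:
            actual[parts[0]] = parts[1]
    findings = []
    for key, want in EXPECTED.items():
        got = actual.get(key)
        if got != want:
            findings.append(f"{key}: expected {want!r}, found {got!r}")
    return findings

# Hypothetical sshd_config fragment with one deviation and one missing setting
sample = """
PermitRootLogin yes
PasswordAuthentication no
"""
for finding in check_sshd_config(sample):
    print(finding)
```

In practice these checks are run by scanning agents against the full benchmark, but the deviation-reporting pattern is the same.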
All production IT assets in JupiterOne have time synchronized to a single authoritative time source.
All frontend functionality (e.g. user dashboards and portals) is separated from backend (e.g. database and app servers) systems by being deployed as separate servers, services, or containers.
All software and systems are required to complete full-scale testing before being promoted to production.
All code changes are reviewed during development to assure software code quality and to proactively detect potential security issues, using pull requests and static code analysis tools. More details can be found in the Code Promotion Processes section.
All infrastructure and system configurations, including all software-defined sources, are centrally aggregated to the JupiterOne Platform which functions as a configuration management database (CMDB).
Configuration auditing rules are created according to established baseline, approved configuration standards and control policies. Deviations, misconfigurations, or configuration drifts are detected by these rules and alerted to the Security Team.
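The rule-based drift detection described above can be sketched as a set of functions, each of which flags a deviation from the approved baseline. The rule names, resource fields, and inventory records below are illustrative assumptions, not the actual JupiterOne rule set:

```python
# Minimal sketch of configuration auditing rules (names are illustrative).
# Each rule returns a finding string when a resource deviates from the
# approved baseline; findings would be routed to the Security Team as alerts.
def require_encryption(resource):
    if not resource.get("encrypted", False):
        return f"{resource['id']}: encryption at rest is disabled"

def forbid_public_access(resource):
    if resource.get("public", False):
        return f"{resource['id']}: resource is publicly accessible"

RULES = [require_encryption, forbid_public_access]

def audit(resources):
    """Apply every rule to every resource; collect deviations."""
    return [finding for r in resources for rule in RULES
            if (finding := rule(r)) is not None]

# Hypothetical inventory: one compliant resource, one with two deviations
inventory = [
    {"id": "bucket-a", "encrypted": True, "public": False},
    {"id": "bucket-b", "encrypted": False, "public": True},
]
for alert in audit(inventory):
    print(alert)
```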
Before provisioning any systems, a request must be created and approved in the GitHub PRODCM Tracking Repo (PRODCM - Production Change Management).
The Security Team must approve the provisioning request before any new system can be provisioned, unless a pre-approved automation process is followed.
Once provisioning has been approved, the implementer must configure the new system according to the standard baseline chosen for the system’s role.
Sensitive data in transit must always be encrypted.
A security analysis is conducted once the system has been provisioned. This can be achieved either via automated configuration/vulnerability scans or manual inspection by the Security Team. Verifications include, but are not limited to:
The new system is fully promoted into production upon successful verification against corresponding JupiterOne standards and change request approvals.
Employee laptops, including Windows, Mac, and Linux systems, are configured either:
The following security controls are applied at the minimum:
The security configurations on all end-user systems are inspected by the Security Team through either a manual periodic review or an automated compliance auditing tool integrated with the JupiterOne Platform.
Linux System Hardening: Linux systems have their baseline security configuration applied via automation tools. These tools cover:
Windows System Hardening:
Windows systems are not used in JupiterOne production environments.
Provisioning management systems such as configuration management servers, remote access infrastructure, directory services, or monitoring systems follows the same procedure as provisioning a production system.
Critical infrastructure roles applied to new systems must be clearly documented by the implementer in the change request.
All network devices, services, or controls on a sensitive network are configured such that:
Vendor provided default configurations are modified securely, including:
Encryption keys and passwords are changed anytime anyone with knowledge of the keys or passwords leaves the company or changes positions.
Traffic filtering (e.g. firewall rules) and inspection (e.g. Network IDS/IPS or AWS VPC flow logs) are enabled.
An up-to-date network diagram is maintained.
In AWS, network controls are implemented using Virtual Private Clouds (VPCs) and Security Groups. The configurations are managed as code and stored in approved repos. All changes to the configuration follow the defined code review, change management and production deployment approval process.
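One kind of review applied to Security Group changes can be sketched as a check that flags ingress rules open to the world on sensitive ports. The rule shape and the port list below are illustrative assumptions, not the actual policy:

```python
# Hedged sketch: a pre-deployment check over Security Group ingress rules,
# flagging rules open to 0.0.0.0/0 on sensitive ports.
# The port set and rule dict shape are illustrative, not the real policy.
SENSITIVE_PORTS = {22, 3389, 5432}  # e.g. SSH, RDP, PostgreSQL

def risky_ingress(rules):
    findings = []
    for rule in rules:
        open_world = "0.0.0.0/0" in rule.get("cidr_blocks", [])
        ports = set(range(rule["from_port"], rule["to_port"] + 1))
        exposed = ports & SENSITIVE_PORTS
        if open_world and exposed:
            findings.append(f"ports {sorted(exposed)} open to 0.0.0.0/0")
    return findings

# Hypothetical rules: HTTPS to the world is fine; SSH to the world is flagged
rules = [
    {"from_port": 443, "to_port": 443, "cidr_blocks": ["0.0.0.0/0"]},
    {"from_port": 22, "to_port": 22, "cidr_blocks": ["0.0.0.0/0"]},
]
print(risky_ingress(rules))
```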
JupiterOne maintains a single Organization in AWS, rooted in a top-level AWS account (master). Connected sub-accounts each host separate workloads and resources in their own sandboxed environments. The master account itself handles aggregated billing for all connected sub-accounts but does not host any workload, service, or resource, with the exception of DNS records for the JupiterOne root domain, managed using the AWS Route53 service. DNS records for subdomains are maintained in the corresponding sub-accounts.
Access to each account is funneled through our designated SSO provider, Google G Suite, which establishes a trust relationship to a set of predefined roles in the master account. Once authenticated, a user then leverages AWS IAM Assume Role capability to switch to a sub-account to access services and resources.
The account and network structure looks like the following:
SSO/IdP ── jupiterone-master
│   └── billing and root DNS records only
│
├── jupiterone-dev
│   └── VPC
│       └── Subnets
│           └── Security-Groups
│               └── EC2 instances
│                   └── Docker containers
│
├── jupiterone-test
│   └── VPC
│       └── Subnets
│           └── Security-Groups
│               └── EC2 instances
│                   └── Docker containers
│
├── jupiterone-infra
│   └── VPC
│       └── Subnets
│           └── Security-Groups
│               └── EC2 instances
│                   └── Docker containers
│
├── jupiterone-prod-us
│   └── VPC
│       └── Subnets
│           └── Security-Groups
│               └── EC2 instances
│                   └── Docker containers
│
├── jupiterone-prod-us2
│   ...
JupiterOne AWS environments and infrastructure are managed as code. Provisioning is accomplished using a set of automation scripts and Terraform code. Each new environment is created as a sub-account connected to jupiterone-master. The creation and provisioning of a new account follows the instructions documented in the “Bootstrap a new AWS environment” page of the Engineering wiki.
The JupiterOne Continuous Delivery Pipeline automates creation and validation of change requests. This is done in a 3-phase process:
Jenkins is used for continuous delivery (build and deploy), and we employ Jenkins-GitHub PRODCM Tracking Repo automation such that:
A change-management-bot (cm-bot) is implemented to provide additional details to assist/automate the change artifact approval process.
Whenever a GitHub PRODCM Tracking Repo artifact is created or updated, the bot is triggered via a GitHub PRODCM Tracking Repo webhook. The bot is configured to examine the following:
The following practices will fail this validation and result in manual processing, and therefore should be avoided:
* squashing commits on PR merges
* commits after PR approval without re-approval
Details of the analysis are posted to the GitHub PRODCM Tracking Repo artifact.
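The kinds of checks a bot like this runs can be sketched as a validation function over a change artifact. The field names and rules below are illustrative assumptions about the artifact shape, not the actual cm-bot implementation:

```python
# Illustrative sketch of checks an automation bot might run against a
# change artifact; field names and rules are assumptions, not the real bot.
def validate_artifact(artifact):
    findings = []
    if not artifact.get("approvals"):
        findings.append("missing required approvals")
    if artifact.get("commits_after_approval"):
        findings.append("commits were pushed after PR approval")
    if artifact.get("squash_merged"):
        findings.append("PR was squash-merged; commit history cannot be verified")
    if not artifact.get("tests_passed", False):
        findings.append("CI tests did not pass")
    return findings  # empty list means the bot can recommend approval

# Hypothetical artifact that fails one check (squash merge)
artifact = {
    "approvals": ["alice", "bob"],
    "commits_after_approval": False,
    "squash_merged": True,
    "tests_passed": True,
}
result = validate_artifact(artifact)
print("recommend approval" if not result else result)
```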
When all the required checks pass validation, the cm-bot recommends approval. The cm-bot may be configured to automatically approve the artifact if all of the required conditions above (and future ones) are met. Additionally, a manual review / approval is always required in the following conditions:
If human approvals are needed, the required approvers will review the details and approve/decline accordingly.
Random inspections of automatically approved artifacts are performed by the Security Team quarterly to ensure the automation functions properly.
1. Note that the above flow does not catch weaknesses in design, and therefore does not replace the need for threat modeling and security review in the design phase.
2. Additional requirements may be added later as the process continues to mature.
Jenkins job proceeds only with an approved and validated GitHub PRODCM Tracking Repo artifact.
During production deploys, the terraform plan output is inspected by policy-enforcement tooling to detect risky changes.
Examples of security-related or risky changes include:
If risky changes are detected, the deploy is paused and the GitHub PRODCM Tracking Repo artifact is updated to require manual review before continuing.
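Such plan inspection can be sketched as a pass over the JSON form of a plan (as produced by `terraform show -json`), flagging changes to sensitive resource types. The set of risky types and the sample plan below are illustrative assumptions:

```python
import json

# Sketch of policy-enforcement tooling inspecting a Terraform plan.
# The JSON shape follows `terraform show -json <planfile>`; the set of
# "risky" resource types below is illustrative, not the actual policy.
RISKY_TYPES = {"aws_security_group", "aws_iam_policy", "aws_iam_role"}

def risky_changes(plan_json):
    plan = json.loads(plan_json)
    findings = []
    for rc in plan.get("resource_changes", []):
        actions = set(rc["change"]["actions"])
        if rc["type"] in RISKY_TYPES and actions & {"create", "update", "delete"}:
            findings.append(f"{rc['address']}: {sorted(actions)}")
    return findings

# Hypothetical plan: a Security Group update (risky) and an S3 bucket create
sample_plan = json.dumps({
    "resource_changes": [
        {"address": "aws_security_group.web", "type": "aws_security_group",
         "change": {"actions": ["update"]}},
        {"address": "aws_s3_bucket.logs", "type": "aws_s3_bucket",
         "change": {"actions": ["create"]}},
    ]
})
print(risky_changes(sample_plan))
```

A non-empty result would pause the deploy pending manual review, as described above.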
Once a deploy is completed, the GitHub PRODCM Tracking Repo artifact is automatically resolved and closed.
JupiterOne requires that endpoint devices are configured to automatically download and apply security updates shipped by the system vendor.
JupiterOne follows an immutable infrastructure methodology to keep the resources in the cloud environments immutable and up-to-date with security patches.
AWS Elastic Container Service (ECS) is used to dynamically manage container resources based on demand.
The Development Team builds security-approved Amazon Machine Images (AMIs) from the latest AWS-optimized base AMI, adding the required security agents.
The security agents installed on the security-approved AMIs continuously scan for and report new vulnerabilities.
The custom AMIs are automatically rebuilt from the latest AWS AMIs weekly to include the latest security patches.
JupiterOne requires auto-update for security patches to be enabled for all user endpoints, including laptops and workstations.
The auto-update configuration and update status on all end-user systems are inspected by the Security Team through either manual periodic audits or automated compliance auditing agents installed on the endpoints.
In order to promote changes into Production, a valid and approved Change Request (CR) is required. It can be created in the GitHub PRODCM Tracking Repo which implements the JupiterOne Change Management workflow, and manages changes and approvals.
At least two approvals are required for each GitHub PRODCM Tracking Repo artifact. By default, the approvers are:
Additional approver(s) may be added depending on the impacted component(s). For example:
jupiterone-infra account in AWS.
Each GitHub PRODCM Tracking Repo artifact requires the following information at a minimum:
Additional details are required for a code deploy, including:
In the event of an emergency, the person or team tasked with emergency response is notified. This may include a combination of Development, Security, and/or Leadership.
If an emergency change must be made, such as mitigating a critical security vulnerability or recovering from system downtime, and the standard change management process cannot be followed due to time constraints, personnel availability, or other unforeseen issues, the change can be made by:
Notification: The Engineering Lead, Security Lead, and/or Leadership must be notified by email, Slack, or phone call prior to the change. Depending on the nature of the emergency, the leads may choose to inform other members of the executive team.
Access and Execution: Manual access to the production system or manual deployment of software, using one of the following access mechanisms as defined in §Access Control:
Post-emergency Documentation: A GitHub PRODCM Tracking Repo artifact should be created within 24 hours following the emergency change. The artifact should contain all details related to the change, including:
Prevention and Improvement: The change must be fully reviewed by Security and Engineering Teams together with the person/team responsible for the change. Any process improvement and/or preventative measures should be documented and an implementation plan should be developed.