How do I manually create a Kubernetes network policy for continuous compliance?

By default, Container Security continuous compliance creates a Kubernetes network policy on your behalf. If you want to create the policy manually, follow the steps below:


  1. In your overrides.yaml file, change the value of cloudOne.oversight.enableNetworkPolicyCreation to false:
      cloudOne:
        oversight:
          enableNetworkPolicyCreation: false
  2. Create a network policy with matchLabels set to trendmicro-cloud-one: isolate in your desired namespaces:
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: trendmicro-oversight-isolate-policy
      spec:
        podSelector:
          matchLabels:
            trendmicro-cloud-one: isolate
        policyTypes:
        - Ingress
        - Egress
    The network policy with matchLabels trendmicro-cloud-one: isolate must exist in each application namespace for isolation mitigation to work properly.
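If you manage several application namespaces, the policy from step 2 can be stamped out per namespace. A minimal sketch, using the placeholder namespaces app-one and app-two (replace with your own), with the actual kubectl apply left commented out so the generated manifests can be reviewed first:

```shell
# Generate one isolate NetworkPolicy manifest per application namespace.
# "app-one" and "app-two" are placeholder names.
for ns in app-one app-two; do
  cat <<EOF > "isolate-policy-${ns}.yaml"
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: trendmicro-oversight-isolate-policy
  namespace: ${ns}
spec:
  podSelector:
    matchLabels:
      trendmicro-cloud-one: isolate
  policyTypes:
  - Ingress
  - Egress
EOF
  # kubectl apply -f "isolate-policy-${ns}.yaml"
done
```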
The following sections describe the available tasks you can perform using Helm commands.
Upgrade your Container Security deployment
To upgrade an existing installation in the default Kubernetes namespace to the latest version, where ${chart} is the chart location (for example, the chart archive from the Container Security Helm GitHub repository):
  helm upgrade \
    --values overrides.yaml \
    --namespace ${namespace} \
    trendmicro \
    ${chart}
The above command overrides or resets the deployed values with those in the overrides.yaml file. If you want to keep the values that you set previously, use the --reuse-values parameter during the Helm upgrade:
  helm upgrade \
    --namespace ${namespace} \
    --reuse-values \
    trendmicro \
    ${chart}
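For reference, the overrides.yaml passed with --values is a plain Helm values file. A minimal sketch, reusing key paths that appear elsewhere in this FAQ (cloudOne.apiKey, cloudOne.endpoint, cloudOne.oversight.enableNetworkPolicyCreation); treat the exact paths as chart-version dependent and confirm them against the chart's values.yaml:

```yaml
cloudOne:
  apiKey: <API_KEY>      # Trend Vision One API key
  endpoint: <ENDPOINT>   # Container Security endpoint
  oversight:
    enableNetworkPolicyCreation: true
```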
Enabling or disabling a specific component
Specific components of the Container Security helm chart can be enabled or disabled individually using an overrides file. For example, you can enable the runtime security component by including the following in your overrides.yaml file:
      cloudOne:
        runtimeSecurity:
          enabled: true
Enable runtime security on AWS bottlerocket
You can run runtime security on AWS Bottlerocket nodes by adding these configurations in your overrides.yaml file (under the chart's securityContext settings for the scout container):
      securityContext:
        scout:
          scout:
            allowPrivilegeEscalation: true
            privileged: true

How do I collect logs for troubleshooting purposes?

When troubleshooting an issue, you have several logs that you can use.

Access logs

Most issues can be investigated using the application logs. The logs can be accessed using kubectl. You can access the logs for the:
  • Admission controller using the following command:
    kubectl logs deployment/trendmicro-admission-controller --namespace ${namespace}
  • Runtime security component using the following command, where ${container} can be one of scout or falco:
     kubectl logs daemonset/trendmicro-scout --namespace ${namespace} -c ${container}
  • Oversight controller (Continuous Compliance policy enforcement) using the following command:
     kubectl logs deployment/trendmicro-oversight-controller -c [controller-manager | rbac-proxy] --namespace ${namespace}
  • Usage controller using the following command:
     kubectl logs deployment/trendmicro-usage-controller -c [controller-manager | rbac-proxy] --namespace ${namespace}

Collect support logs

When opening a support case, make sure to include a log package. The log package helps your support provider to debug issues, particularly those related to in-cluster components or communication. A log collection script is available for you to use from the Trend Micro Cloud One Container Security Helm GitHub repository.
Gather logs by running the log collection script from that repository.
The following environment variables are supported for log collection:
  • RELEASE — Helm release name. Default: trendmicro
  • NAMESPACE — The namespace that the helm chart is deployed in. Default: the current namespace declared in kubeconfig; if no namespace setting exists in kubeconfig, trendmicro-system is used.
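The variables above can be exported before running the collection script. The defaulting sketch below is an illustration of the fallback behavior described in the list, not the script itself, and it skips the kubeconfig-namespace lookup the real script performs first:

```shell
# RELEASE falls back to "trendmicro", NAMESPACE to "trendmicro-system"
# when neither variable is set in the environment.
RELEASE="${RELEASE:-trendmicro}"
NAMESPACE="${NAMESPACE:-trendmicro-system}"
echo "Collecting logs for release '${RELEASE}' in namespace '${NAMESPACE}'"
```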

Why am I getting a '401 Unauthorized' message on API calls?

This is usually because you haven't created an API key to authenticate your requests with Container Security. For information on creating and using a Trend Vision One API key, see Obtaining an API key.
Deprecated: For information on creating and using a legacy API key, see the Workload Security API key help.

Does Container Security require inbound network access to my Kubernetes cluster?

Container Security currently does not require any inbound network access and does not require any extra IP addresses to be added to inbound firewall rules. Communication from the admission controller is outbound-initiated only over HTTPS port 443.

Are regular expressions supported when creating policies?

In the first release, we support the keywords "contains" and "starts with" for the image registry, name, and tag. This provides a basic pattern-matching interface rather than full regular expression support.
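The semantics of the two keywords can be sketched with shell pattern matching as a stand-in for the policy engine. The image reference below is a made-up example; the actual evaluation happens in Container Security, not in your shell:

```shell
# Hypothetical image reference used for illustration only.
image="registry.example.com/team/nginx:1.25"

# "contains": the value appears anywhere in the field.
case "$image" in
  *nginx*) contains_match=yes ;;
  *)       contains_match=no ;;
esac

# "starts with": the field begins with the value.
case "$image" in
  registry.example.com/*) prefix_match=yes ;;
  *)                      prefix_match=no ;;
esac

echo "contains 'nginx': ${contains_match}; starts with 'registry.example.com/': ${prefix_match}"
```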

Does each Kubernetes cluster need its own admission controller?

Yes. Each Kubernetes cluster should have its own admission controller. If needed, you can scale the number of replicas; the default is 1.

Will the validation of admission control webhooks cause Container Security to change a container's configuration?

No. It only validates whether a deployment request is allowed or denied by the policy definition.

During the validating phase, when you run kubectl apply -f <...>, does the admission controller query Container Security? If so, is a local cache being used for each query?

Yes. The admission controller queries Container Security every time a review request happens in Kubernetes, whether triggered by kubectl create or kubectl apply.
No local cache is being used for queries or policies to ensure the policy is always up to date.
By default, review requests from the kube-system namespace are not forwarded to Container Security. For more information, see the admission controller yaml file.

What is the telemetry in Container Security used for? What kind of data is admission control sending?

For more information about data collection and telemetry, see Trend Vision One Container Security Data Collection Notice.

When should you increase the replica count for the admission controller?

Consider increasing the replica count for the admission controller in large environments where many admission requests may occur at the same time. Admission requests occur when a pod scales its replica count, when new deployments are created, and so on.
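As a hedged sketch, the replica count can typically be raised through your overrides file; the key path below is an assumption, so confirm it against the chart's values.yaml before use:

```yaml
replicas:
  admissionController: 3   # default is 1; key path assumed, verify in values.yaml
```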

How do you add pods with multiple containers to exceptions?

Pods with multiple containers must have exceptions for all of the containers inside them. Container Security only allows the admission request if every requested container either does not violate a policy rule or meets the exception criteria.

Why is my pod not being isolated from network access?

If you are using the "Isolate" action in your Continuous Compliance policy or Runtime rules, the Kubernetes cluster where the protected resources are running must have Kubernetes network policies enabled. To enable Kubernetes network policies, install a network plugin with NetworkPolicy support, following the guide provided in the helm chart README.

Why are vulnerabilities not showing up in the vulnerability view?

This section covers some commonly seen issues in Runtime Scanning, and how to address them.
Scanner pods are getting terminated with an OOMKilled status:
  • Scanner pod status can be observed through tools such as kubectl. In this situation, the following log entry might be observed by running
    kubectl describe nodes:
                         Memory cgroup out of memory: Killed process xxxxx (sbom-job)
  • During normal operations, every unique image deployed in your cluster triggers a scanner pod. This scan job generates a Software Bill of Materials (SBOM) for the deployed image, and the SBOM is sent to Trend Vision One for further analysis. If the generated SBOM is larger than the default maximum memory limit of the scan job, the pod is terminated with an OOMKilled status. Exceptionally large images (such as machine learning images) can lead to exceptionally large SBOMs. To remediate this issue, you can override the default maximum memory limit of the scan job in your helm overrides YAML file (usually overrides.yaml; the exact key path for the memory limit depends on your chart version, so check the chart's values.yaml):
        cloudOne:
          apiKey: <API_KEY>
          endpoint: <ENDPOINT>
          vulnerabilityScanning:
            enabled: true
            scanJob:
              memory: 1024Mi
  • To apply the new configuration, run the helm upgrade command. If you continue to encounter the same problem, consider increasing the scanner memory further (for example, to 2048Mi).
Discovered vulnerabilities are disappearing from the vulnerability view:
  • The runtime scanning vulnerability view is currently a live representation of vulnerabilities in your cluster. Once the vulnerable container is terminated, its vulnerabilities are immediately removed from the vulnerability view.

Can I have multiple scan tools installed in my cluster?

It is recommended to include only one scanning tool in each cluster, as multiple such tools running concurrently can cause unpredictable behavior where the tools continuously scan each other's pods. If this situation is not avoidable, you can exclude the other scan tool's namespace from Container Security scans by adding the following to your overrides file:
  cloudOne:
    exclusion:
      namespaces: [list, of, namespaces]
It is also recommended to exclude the namespace where you installed Container Security from getting scanned by the other scan tool.

When should I increase the maximum concurrency for the vulnerability scanner pods?

Large clusters could benefit from increasing the default maximum concurrency for the vulnerability scanner pods to drive faster scan results, by using more of your cluster's resources. The scanner pod concurrency limit is meant to constrain Container Security's resource usage within your cluster. For example, if the concurrency limit was set to 5, then a maximum of 5 unique images can be scanned at a time. Modifying the scanner pod concurrency limit can be done through your overrides file:
  scanManager:
    maxJobCount: 15
When increasing the concurrency limit for the vulnerability scanner pods, ensure your cluster has enough resources to handle the additional scanner pods. You can also change the default resource requirements for each scanner pod in the scanManager section of the helm chart.