Controlling traffic with network policies

Classic clusters

This network policy information is specific to classic clusters. For network policy information for VPC clusters, see Controlling traffic with security groups.

Every IBM Cloud® Kubernetes Service cluster comes with a network plug-in called Calico. Default network policies secure the public network interface of every worker node in the cluster.

You can use Calico and Kubernetes to create network policies for a cluster. With Kubernetes network policies, you can specify the network traffic that you want to allow or block to and from a pod within a cluster. To set more advanced network policies such as blocking inbound (ingress) traffic to network load balancer (NLB) services, use Calico network policies.

Kubernetes network policies
Kubernetes network policies specify how pods can communicate with other pods and with external endpoints. Both incoming and outgoing network traffic is allowed or blocked based on protocol, port, and source or destination IP addresses. Traffic can also be filtered based on pod and namespace labels. You can apply Kubernetes network policies by using kubectl commands or the Kubernetes APIs.
Calico network policies
Calico network policies are a superset of Kubernetes network policies. You can apply Calico policies by using the calicoctl command line. Calico policies add capabilities beyond Kubernetes network policies, such as applying rules to the worker nodes' host network interfaces through host endpoints and blocking inbound (ingress) traffic to network load balancer (NLB) or NodePort services.

Calico enforces these policies, including any Kubernetes network policies, by setting up iptables rules that serve as a firewall for the worker node. These rules define the characteristics that network traffic must meet to be forwarded to the targeted resource.
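
To illustrate the Kubernetes side of this comparison, the following minimal sketch shows a Kubernetes network policy that filters traffic by pod and namespace labels, as described above. The namespace, policy name, and labels (backend, app: api, team: frontend) are illustrative assumptions, not values from your cluster.

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: allow-from-frontend-namespace   # hypothetical policy name
      namespace: backend                    # hypothetical namespace
    spec:
      podSelector:
        matchLabels:
          app: api                          # hypothetical pod label
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              team: frontend                # allow traffic only from namespaces with this label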

Default Calico and Kubernetes network policies

When a cluster with a public VLAN is created, a HostEndpoint resource with the ibm.role: worker_public label is created automatically for each worker node and its public network interface. This HostEndpoint causes all traffic to or from the public network interface to be dropped unless it is specifically allowed by a Calico policy that selects the ibm.role: worker_public label.

A HostEndpoint resource with the ibm.role: worker_private label is also created automatically for each worker node and its private network interface. A default allow-all-private-default policy is created so that all traffic is allowed to and from the private network interface. This HostEndpoint makes it easy for cluster users to further restrict private network traffic by creating Calico policies that select ibm.role: worker_private and have a lower order number than the allow-all-private-default policy.
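
For example, a minimal sketch of a lower-order Calico policy that restricts traffic on the private interface might look like the following. The policy name, port, and order value are illustrative assumptions; check the order of the allow-all-private-default policy in your cluster and choose a lower number.

    apiVersion: projectcalico.org/v3
    kind: GlobalNetworkPolicy
    metadata:
      name: deny-private-port-example           # hypothetical policy name
    spec:
      selector: ibm.role == 'worker_private'    # select the private host endpoints
      order: 900                                # assumption: must be lower than the order of allow-all-private-default
      types:
      - Ingress
      ingress:
      - action: Deny
        protocol: TCP
        destination:
          ports:
          - 8080                                # hypothetical port to block on the private interface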

These default Calico host policies allow all public outbound network traffic and allow public inbound traffic to specific cluster components, such as Kubernetes NodePort, LoadBalancer, and Ingress services. All private traffic is allowed by default by the allow-all-private-default policy. Any other inbound network traffic from the internet to your worker nodes that isn't specified in the default policies gets blocked. The default policies don't affect pod to pod traffic.

Don't remove the default policies from your cluster, because they are recreated on the next cluster master refresh or master update. If you want to further restrict traffic, apply lower order Calico policies to block the traffic. Be sure you fully understand what you are blocking, and that the cluster components do not need the traffic you want to block.

Review the following default Calico host policies that are automatically applied to your cluster.

Default Calico host policies for each cluster

  • allow-all-outbound: Allows all outbound traffic on the public network.
  • allow-all-private-default: Allows all inbound and outbound traffic on the private network.
  • allow-bigfix-port: Allows incoming traffic on port 52311 to the BigFix app to allow necessary worker node updates.
  • allow-icmp: Allows incoming ICMP packets (pings).
  • allow-node-port-dnat: Allows incoming network load balancer (NLB), Ingress application load balancer (ALB), and NodePort service traffic to the pods that those services are exposing. Note: You don't need to specify the exposed ports because Kubernetes uses destination network address translation (DNAT) to forward the service requests to the correct pods. That forwarding takes place before the host endpoint policies are applied in iptables.
  • allow-sys-mgmt: Allows incoming connections for specific IBM Cloud infrastructure systems that are used to manage the worker nodes.
  • allow-vrrp: Allows VRRP packets, which monitor and move virtual IP addresses between worker nodes.

Default Kubernetes policies that limit access to the Kubernetes Dashboard are also created. Kubernetes network policies apply at the pod level rather than to host endpoints, and they apply to both classic and VPC clusters.

Default Kubernetes policies for each cluster

  • dashboard-metrics-scraper: Kubernetes 1.20 or later. Provided in the kube-system namespace. Blocks all pods from accessing the Kubernetes Dashboard metrics scraper. This policy doesn't prevent the Kubernetes Dashboard from accessing the dashboard metrics. Further, this policy doesn't impact accessing the dashboard metrics from the IBM Cloud console or from using kubectl proxy. If a pod requires access to the dashboard metrics scraper, deploy the pod in a namespace that has the dashboard-metrics-scraper-policy: allow label.
  • kubernetes-dashboard: Provided in the kube-system namespace. Blocks all pods from accessing the Kubernetes Dashboard. This policy doesn't impact accessing the dashboard from the IBM Cloud console or by using kubectl proxy. If a pod requires access to the dashboard, deploy the pod in a namespace that has the kubernetes-dashboard-policy: allow label.
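
For example, a pod that needs to reach the Kubernetes Dashboard can be deployed in a namespace that carries the allow label. A minimal sketch, where the namespace name is a hypothetical placeholder:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: dashboard-tools                   # hypothetical namespace name
      labels:
        kubernetes-dashboard-policy: allow    # label that the default policy checks for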

Installing and configuring the Calico CLI

To view, manage, and add Calico policies, install and configure the Calico CLI.

  1. Set the context for your cluster to run Calico commands.

    • Kubernetes version 1.19 and later:

      1. Download the kubeconfig configuration file for your cluster.
        ibmcloud ks cluster config --cluster <cluster_name_or_ID>
        
      2. Set the DATASTORE_TYPE environment variable to kubernetes.
        export DATASTORE_TYPE=kubernetes
        
  2. If corporate network policies use proxies or firewalls to prevent access from your local system to public endpoints, allow TCP access for Calico commands.

  3. Follow the steps to install the calicoctl command line tool.

    • Linux and OS X

      1. Download the version of the Calico CLI that matches your operating system. For OS X, you might need to manually allow the downloaded file to be opened and run by navigating to System Preferences > Security & Privacy > General.

      2. Move the file to the /usr/local/bin directory.

        mv <filepath>/<filename> /usr/local/bin/calicoctl
        
      3. Make the file an executable file.

        chmod +x /usr/local/bin/calicoctl
        
      4. Ensure there isn't an old Calico configuration file calicoctl.cfg in the /etc/calico directory. If the /etc/calico/calicoctl.cfg file exists, delete it.

    • Windows

      1. Download the Calico CLI. When you save the file, rename it to calicoctl.exe and save it in the same directory as the IBM Cloud CLI. This setup saves you some file path changes when you run commands later.

      2. Set the environment variable to the configuration file for your cluster.

        export KUBECONFIG=./.bluemix/plugins/container-service/clusters/<cluster_name>-<hash>/kube-config.yaml
        
  4. Verify that the Calico configuration is working correctly.

    calicoctl get nodes
    

    Example output

    NAME
    10.176.48.106
    10.176.48.107
    10.184.58.23
    10.184.58.42
    ...
    

Changing the Calico plug-in, components, or default Calico settings is not supported. For example, don't deploy a new Calico plug-in version, or modify the daemon sets or deployments for the Calico components, default IPPool resources, or Calico nodes. Instead, you can follow the documentation to change the Calico MTU or to disable the port map plug-in for the Calico CNI if necessary.

Viewing network policies

View the details for default and any added network policies that are applied to your cluster.

Before you begin, install and configure the Calico CLI, and set the context for your cluster to run Calico commands.

  1. View the Calico host endpoint.

    calicoctl get hostendpoint -o yaml
    
  2. View all the Calico network policies that were created for the cluster. This list includes policies that might not apply to any pods or hosts yet. For a Calico policy to be enforced, a Kubernetes pod or Calico HostEndpoint must exist that matches the selector in the Calico network policy.

    Network policies are scoped to specific namespaces:

    calicoctl get NetworkPolicy --all-namespaces -o wide
    

    Global network policies are not scoped to specific namespaces:

    calicoctl get GlobalNetworkPolicy -o wide
    
  3. View details for a network policy.

    calicoctl get NetworkPolicy -o yaml <policy_name> --namespace <policy_namespace>
    
  4. View the details of all global network policies for the cluster.

    calicoctl get GlobalNetworkPolicy -o yaml
    

Adding network policies

Usually, the default policies don't require changes. Only advanced scenarios might require changes. If you find that you must make changes, you can create your own network policies.

To create Kubernetes network policies, see the Kubernetes network policy documentation.

To create Calico policies, use the following steps. Before you begin, install and configure the Calico CLI, and set the context for your cluster to run Calico commands.

  1. Define your Calico network policy or global network policy by creating a configuration file (.yaml) with Calico v3 policy syntax. These configuration files include the selectors that describe the pods, namespaces, or hosts that the policies apply to. For a minimal example, see the sketch after these steps.

    For new clusters provisioned at version 1.29 or later, the short names for globalnetworkpolicies.crd.projectcalico.org (gnp) and hostendpoints.crd.projectcalico.org (hep) are not supported. However, if you upgrade a cluster to version 1.29 or later, the short names are still supported.

  2. Apply the policies to the cluster. If you have a Windows system, include the --config=<filepath>/calicoctl.cfg option.

    calicoctl apply -f policy.yaml [--config=<filepath>/calicoctl.cfg]
    

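A minimal sketch of what policy.yaml might contain, for illustration only: the policy name, namespace, pod label, source CIDR, and port are all hypothetical placeholders that you replace with your own values.

    apiVersion: projectcalico.org/v3
    kind: NetworkPolicy
    metadata:
      name: allow-office-to-api       # hypothetical policy name
      namespace: default              # namespace that the target pods run in
    spec:
      selector: app == 'api'          # hypothetical pod label
      types:
      - Ingress
      ingress:
      - action: Allow
        protocol: TCP
        source:
          nets:
          - 203.0.113.0/24            # hypothetical source CIDR to allow
        destination:
          ports:
          - 8080                      # hypothetical port
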
Note that Calico and Kubernetes network policies block only new connections; they don't interrupt connections that existed before the policy was applied. So, after you apply a new or changed policy, test that it works as intended and doesn't block more than it should:

  1. Restart any pods that might be affected by the policy. Better yet, restart all pods, in case the selector isn't quite right and the policy affects more pods than you expect.

  2. Run ibmcloud ks cluster master refresh -c CLUSTER-ID to restart your cluster master pods. This interrupts existing connections from kubelet and other components to the master and forces them to reconnect, which shows you whether the new or changed policies block any necessary connections to your master components.

  3. Try to connect to the Kubernetes dashboard to ensure the policy changes don't block the connections needed by those components.

Controlling inbound traffic to NLB or NodePort services

By default, Kubernetes NodePort and LoadBalancer services make your app available on all public and private cluster interfaces. However, you can use Calico policies to block incoming traffic to your services based on traffic source or destination.

Regular Kubernetes and Calico network policies are difficult to apply to Kubernetes NodePort and LoadBalancer services because of the DNAT iptables rules that are generated for these services. However, Calico pre-DNAT policies can prevent specified traffic from reaching your apps because their iptables rules are generated and applied before Kubernetes uses regular DNAT to forward traffic to pods.

Some common uses for Calico pre-DNAT network policies:

  • Block traffic to public node ports of a private network load balancer (NLB) service: An NLB service makes your app available over the NLB IP address and port, and also over the service's node ports. Node ports are accessible on every IP address (public and private) for every node within the cluster.
  • Block traffic to public node ports on clusters that are running edge worker nodes: Blocking node ports ensures that the edge worker nodes are the only worker nodes that handle incoming traffic.
  • Block traffic from certain source IP addresses or CIDRs
  • Allow traffic from only certain source IP addresses or CIDRs, and block all other traffic

To see how to allow or block source IP addresses, try the Using Calico network policies to block traffic tutorial.
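
The following sketch shows the general shape of a pre-DNAT policy that denies TCP traffic to the public node port range on all public host endpoints. The policy name, order value, and port range are illustrative assumptions; the tutorial above walks through complete, tested scenarios.

    apiVersion: projectcalico.org/v3
    kind: GlobalNetworkPolicy
    metadata:
      name: deny-public-nodeports-example   # hypothetical policy name
    spec:
      applyOnForward: true                  # required for pre-DNAT policies
      preDNAT: true                         # evaluate before Kubernetes DNAT rules
      order: 1100                           # illustrative order value
      selector: ibm.role == 'worker_public' # select the public host endpoints
      ingress:
      - action: Deny
        protocol: TCP
        destination:
          ports:
          - "30000:32767"                   # default Kubernetes node port range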

Example Calico policies to restrict public or private network traffic

We provide a set of example Calico public network policies that further restrict public network traffic on cluster workers. These policies allow the traffic that is necessary for the cluster to function and block certain other traffic.

These policies are not meant to block everything, nor do they necessarily meet any compliance requirements on their own. They are intended as a starting point and must be edited to meet your unique use cases. For more information, see the README.

Whenever new locations for IBM Cloud Kubernetes Service and other IBM Cloud services are enabled, the subnets for these locations are added to the Calico policies. Be sure to watch the GitHub repository for any updates to these policies.

Applying public network policies

Before you begin, install and configure the Calico CLI, and set the context for your cluster to run Calico commands.

  1. Clone the IBM-Cloud/kube-samples repository.

    git clone https://github.com/IBM-Cloud/kube-samples.git
    
  2. Change directories to the public policy directory for the region that your cluster is in. Example command for a cluster in US South:

    cd <filepath>/IBM-Cloud/kube-samples/calico-policies/public-network-isolation/us-south
    
  3. Review each policy for any changes you might need to make. For example, if you specified a custom subnet when you created your cluster that provides the private IP addresses for your pods, you must specify that CIDR instead of the 172.30.0.0/16 CIDR in the allow-ibm-ports-public.yaml policy. Also review these policies for any connections you might not want to allow. For example, if a certain port listed in the policy is described as only being needed for Istio, and you are not deploying Istio in your cluster, you can remove that port.

  4. Apply the public or private policies that you want to use.

    calicoctl apply -f allow-egress-pods-public.yaml
    calicoctl apply -f allow-ibm-ports-public.yaml
    calicoctl apply -f allow-public-service-endpoint.yaml
    calicoctl apply -f deny-all-outbound-public.yaml
    calicoctl apply -f allow-konnectivity.yaml
    calicoctl apply -f allow-k8s-master-to-dashboard.yaml
    
    
  5. Optional: To allow your worker nodes to access other IBM Cloud services over the public network, apply the allow-public-services.yaml and allow-public-services-pods.yaml policies. These policies allow access to the IP addresses for IBM Cloud Container Registry and, if the services are available in the region, IBM Log Analysis and IBM Cloud Monitoring. To access other IBM Cloud services, you must manually add the subnets for those services to this policy.

    calicoctl apply -f allow-public-services.yaml
    calicoctl apply -f allow-public-services-pods.yaml
    
  6. Verify that the network policies are applied.

    calicoctl get NetworkPolicies -o yaml -A
    
  7. Verify that the global network policies are applied.

    calicoctl get GlobalNetworkPolicies -o yaml
    
  8. Optional: If you must allow traffic that is not specified by these policies, create and apply Calico policies to allow this traffic. For example, if you use any in-cluster webhooks, you must add policies to ensure that the webhooks can make the required connections. You also must create policies for any non-local services that extend the Kubernetes API. You can find these services by running kubectl get apiservices.

Applying private network policies

We provide a set of example Calico private network policies that further restrict private network traffic on cluster workers. These policies allow the traffic that is necessary for the cluster to function and block certain other traffic.

These policies are not meant to block everything, nor do they necessarily meet any compliance requirements on their own. They are intended as a starting point and must be edited to meet your unique use cases. For more information, see the README.

Whenever new locations for IBM Cloud Kubernetes Service and other IBM Cloud services are enabled, the subnets for these locations are added to the Calico policies. Be sure to watch the GitHub repository for any updates to these policies.

When you apply the egress pod policies that are in this policy set, only network traffic to the subnets and ports that are specified in the pod policies is permitted. All traffic to any subnets or ports that are not specified in the policies is blocked for all pods in all namespaces. Because only the ports and subnets that are necessary for the pods to function in IBM Cloud Kubernetes Service are specified in these policies, your pods can't send network traffic over the private network until you add or change the Calico policy to allow them to.
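
For example, to let pods reach an additional private subnet and port, you might add an allow policy similar to the following sketch. The policy name, subnet, port, and order value are hypothetical placeholders; check the order values used by the policy set so that your allow policy is evaluated before its deny policy.

    apiVersion: projectcalico.org/v3
    kind: GlobalNetworkPolicy
    metadata:
      name: allow-egress-pods-custom                       # hypothetical policy name
    spec:
      selector: projectcalico.org/orchestrator == 'k8s'    # apply to all Kubernetes pods
      order: 1400                                          # assumption: lower than the deny policy's order
      types:
      - Egress
      egress:
      - action: Allow
        protocol: TCP
        destination:
          nets:
          - 10.186.0.0/24                                  # hypothetical private subnet that your pods must reach
          ports:
          - 443                                            # hypothetical port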

Before you begin, install and configure the Calico CLI, and set the context for your cluster to run Calico commands.

  1. Clone the IBM-Cloud/kube-samples repository.

    git clone https://github.com/IBM-Cloud/kube-samples.git
    
  2. Go to the private policy directory for the region that your cluster is in. Example command for a cluster in US South:

    cd <filepath>/IBM-Cloud/kube-samples/calico-policies/private-network-isolation/us-south
    
  3. Review each policy for any changes you might need to make. For example, if you specified a custom subnet when you created your cluster that provides the private IP addresses for your pods, you must specify that CIDR instead of the 172.30.0.0/16 CIDR in the allow-all-workers-private.yaml policy.

  4. Apply the policies.

    calicoctl apply -f allow-all-workers-private.yaml
    calicoctl apply -f allow-egress-pods-private.yaml
    calicoctl apply -f allow-ibm-ports-private.yaml
    calicoctl apply -f allow-icmp-private.yaml
    calicoctl apply -f allow-private-service-endpoint.yaml
    calicoctl apply -f allow-sys-mgmt-private.yaml
    calicoctl apply -f deny-all-private-default.yaml
    
  5. Optional: To allow your workers and pods to access IBM Cloud Container Registry over the private network, apply the allow-private-services.yaml and allow-private-services-pods.yaml policies. To access other IBM Cloud services that support private cloud service endpoints, you must manually add the subnets for those services to this policy.

    calicoctl apply -f allow-private-services.yaml
    calicoctl apply -f allow-private-services-pods.yaml
    
  6. Optional: To expose your apps with private network load balancers (NLBs) or Ingress application load balancers (ALBs), you must open the VRRP protocol by applying the allow-vrrp-private policy.

    calicoctl apply -f allow-vrrp-private.yaml
    

    You can further control access to networking services by creating Calico pre-DNAT policies. In the pre-DNAT policy, ensure that you use selector: ibm.role=='worker_private' to apply the policy to the workers' private host endpoints.

  7. Verify that the policies are applied.

    calicoctl get GlobalNetworkPolicies -o yaml
    
  8. Optional: If you must allow traffic that is not specified by these policies, create and apply Calico policies to allow this traffic. For example, if you use any in-cluster webhooks, you must add policies to ensure that the webhooks can make the required connections. You also must create policies for any non-local services that extend the Kubernetes API. You can find these services by running kubectl get apiservices.

Controlling traffic between pods

Kubernetes network policies control traffic between pods inside the cluster. You can create simple Kubernetes network policies to isolate app microservices from each other within a namespace or across namespaces.

By default, any pod has access to any other pod in the cluster. Additionally, any pod has access to any services that are exposed by the pod network, such as a metrics service, the cluster DNS, the API server, or any services that you manually create in your cluster.
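
For example, the following minimal sketch isolates a namespace so that its pods accept ingress traffic only from pods in the same namespace; the policy and namespace names are hypothetical placeholders.

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: isolate-namespace     # hypothetical policy name
      namespace: my-app           # hypothetical namespace
    spec:
      podSelector: {}             # applies to every pod in the namespace
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector: {}         # allow traffic only from pods in the same namespace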

Logging denied traffic

To log denied traffic requests to certain pods in your cluster, you can create a Calico log network policy.

When you set up network policies to limit traffic to app pods, traffic requests that are not permitted by these policies are denied and dropped. In some scenarios, you might want more information about denied traffic requests. For example, you might notice some unusual traffic that is continuously being denied by one of your network policies. To monitor the potential security threat, you can set up logging to record every time that the policy denies an attempted action on specified app pods.

This section shows you how to log traffic that is denied by a Kubernetes network policy. To log traffic that is denied by a Calico network policy, see Lesson 5 of the Calico network policy tutorial.

Before you begin, install and configure the Calico CLI, and set the context for your cluster to run Calico commands.

  1. Create or use an existing Kubernetes network policy that blocks or limits incoming traffic.

    1. Create a Kubernetes network policy. For example, to control traffic between pods, you might use the following example Kubernetes policy that is named access-nginx that limits access to an NGINX app. Incoming traffic to pods that are labeled "run=nginx" is allowed only from pods with the "run=access" label. All other incoming traffic to the "run=nginx" app pods is blocked.

      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: access-nginx
      spec:
        podSelector:
          matchLabels:
            run: nginx
        ingress:
        - from:
          - podSelector:
              matchLabels:
                run: access
      
    2. Apply the policy.

      kubectl apply -f <policy_name>.yaml
      
  2. To log all the traffic that is denied by the policy that you created in the previous step, create a Calico NetworkPolicy named log-denied-packets. The following Calico policy uses the same pod selector as the example access-nginx Kubernetes policy described in step 1, but the syntax is slightly different because it is a Calico NetworkPolicy instead of a Kubernetes NetworkPolicy. Also, because all Kubernetes NetworkPolicies are evaluated by Calico as order 1000, the order 3000 is set to ensure that this policy is evaluated after the Kubernetes NetworkPolicy. With these two policies in place, here is the result:

    • New connections coming in to the nginx pod are first evaluated against the Kubernetes NetworkPolicy (order 1000). Connections coming from a pod with the run=access label will be immediately accepted, meaning no other policies are evaluated.

    • If the connection is coming from a pod without the run=access label (or from anything that isn't a pod), the Kubernetes NetworkPolicy doesn't match it, and Calico next evaluates the log-denied-packets policy. This policy logs the packet to syslog on the worker node that the nginx pod runs on.

    • Calico then checks for any other policies to apply to the connection, and since it doesn't find any, the packet is dropped. This is because any traffic to a pod with a policy that isn't explicitly allowed is dropped.

      apiVersion: projectcalico.org/v3
      kind: NetworkPolicy
      metadata:
        name: log-denied-packets
      spec:
        types:
        - Ingress
        ingress:
        - action: Log
          destination: {}
          source: {}
        selector: projectcalico.org/orchestrator == 'k8s' && run == 'nginx'
        order: 3000
      
    • types: This policy applies to incoming traffic (Ingress). The value Ingress is a general term for all incoming traffic and does not refer only to traffic from the IBM Ingress ALB.
    • ingress.action: The Log action writes a log entry for any requests that match this policy to the /var/log/syslog path on the worker node.
    • ingress.destination: No destination is specified because the selector applies this policy to all pods with a certain label.
    • ingress.source: This policy applies to requests from any source.
    • selector: The selector targets the same traffic as the original access-nginx Kubernetes NetworkPolicy. Because this is a Calico policy, you must include projectcalico.org/orchestrator == 'k8s' to indicate that it applies to pods in the policy's namespace, in addition to the original run == 'nginx'.
    • order: Calico policies have orders that determine when they are applied to incoming request packets. Policies with lower orders, such as 1000, are applied first, and policies with higher orders are applied after them. Incoming request packets go through the iptables rules chain and try to match rules from lower-order policies first. If a packet matches any rule, the packet is accepted. If a packet doesn't match any rule, it arrives at the last rule in the iptables rules chain with the highest order. To make sure that this policy is evaluated after the policy that you created in step 1, use a much higher order, such as 3000. Kubernetes NetworkPolicies are applied as order 1000.
  3. Apply the policy. If you use a Windows machine, include the --config=<filepath>/calicoctl.cfg option.

    calicoctl apply -f log-denied-packets.yaml [--config=<filepath>/calicoctl.cfg]
    
  4. Generate log entries by sending requests that are not allowed by the policy that you created in step 1. For example, try to ping the pod that is protected by the network policy from a pod or an IP address that is not permitted.

  5. Check for log entries that are written to the /var/log/syslog path. The DST (destination) or SRC (source) IP addresses in the log entry might be different than expected due to proxies, Network Address Translation (NAT), and other networking processes. The log entry looks similar to the following.

    Sep 5 14:34:40 <worker_hostname> kernel: [158271.044316] calico-packet: IN=eth1 OUT= MAC=08:00:27:d5:4e:57:0a:00:27:00:00:00:08:00 SRC=192.XXX.XX.X DST=192.XXX.XX.XX LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=52866 DF PROTO=TCP SPT=42962 DPT=22 WINDOW=29200 RES=0x00 SYN URGP=0
    
  6. Optional: Forward the logs from /var/log/syslog to IBM® Log Analysis or to an external syslog server.