
Support applying network policies to multiple interfaces #432


Open
wants to merge 6 commits into main

Conversation

Pavani-Panakanti (Contributor)

Issue #, if available:
This PR adds support for attaching probes and applying policies to all interfaces of a pod that has multi-NIC enabled and the multi-NIC-attachment annotation. The same policies are applied to every interface of the pod, and the BPF programs are shared across all interfaces, but the probes need to be attached to each interface individually.

Description of changes:

  • New numInterfaces parameter added to the AttacheBPFProbes method
  • Interface count determination logic in getInterfaceCountForPod()
  • IPAM cache loading on startup for multi-NIC clusters
  • The same BPF programs are shared across all interfaces of a pod
  • Error handling to skip attaching probes when the interface count is unknown
  • Loop-based probe attachment for multiple interfaces with indexed interface names (see the sketch after this list)
  • Fallback to the IPAM cache when the interface count is unavailable during recovery
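
For illustration, the loop-based flow looks roughly like this (a minimal sketch: helper names such as attachIngressBPFProbe/attachEgressBPFProbe and the exact GetHostVethName signature are assumptions, not the PR's verbatim code):

// Illustrative sketch only -- helper names and signatures are assumptions.
func (l *bpfClient) AttacheBPFProbes(pod types.NamespacedName, podIdentifier string, numInterfaces int) error {
	// Resolve the interface count; falls back to the IPAM cache when the
	// caller passes the unknown-count constant (e.g., during recovery).
	count, err := l.getInterfaceCountForPod(pod, podIdentifier, numInterfaces)
	if err != nil {
		return err // multi-NIC enabled but count unknown: skip the attach
	}
	for i := 0; i < count; i++ {
		// Host veth names are derived per interface index; the same BPF
		// programs (keyed by podIdentifier) are shared by every interface.
		hostVethName := utils.GetHostVethName(pod.Name, pod.Namespace, i)
		if err := l.attachIngressBPFProbe(hostVethName, podIdentifier); err != nil {
			return err
		}
		if err := l.attachEgressBPFProbe(hostVethName, podIdentifier); err != nil {
			return err
		}
	}
	return nil
}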

Testing

On allowed port

dev-dsk-pavanipt-2a-0981017d % ./check-egress.sh multi-nic-pod-64b986c4bf-m9ttv multi-nic 443
🔧 Creating test-server pod in namespace multi-nic...
pod/test-server condition met
✅ Test server is ready at 192.168.72.116:443

🚀 Starting EGRESS test from each interface on multi-nic-pod-64b986c4bf-m9ttv
From 127.0.0.1 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.92.110 ... ✅ ALLOWED (HTTP 200)
From 192.168.78.132 ... ✅ ALLOWED (HTTP 200)
From 192.168.74.64 ... ✅ ALLOWED (HTTP 200)
From 192.168.81.237 ... ✅ ALLOWED (HTTP 200)
From 192.168.92.3 ... ✅ ALLOWED (HTTP 200)
From 192.168.82.186 ... ✅ ALLOWED (HTTP 200)
From 192.168.93.98 ... ✅ ALLOWED (HTTP 200)
From 192.168.69.117 ... ✅ ALLOWED (HTTP 200)
From 192.168.94.35 ... ✅ ALLOWED (HTTP 200)
From 192.168.69.172 ... ✅ ALLOWED (HTTP 200)
From 192.168.89.140 ... ✅ ALLOWED (HTTP 200)
From 192.168.68.48 ... ✅ ALLOWED (HTTP 200)
From 192.168.84.181 ... ✅ ALLOWED (HTTP 200)
From 192.168.72.204 ... ✅ ALLOWED (HTTP 200)
From 192.168.68.207 ... ✅ ALLOWED (HTTP 200)
From 192.168.95.243 ... ✅ ALLOWED (HTTP 200)
From 192.168.86.88 ... ✅ ALLOWED (HTTP 200)
From 192.168.90.86 ... ✅ ALLOWED (HTTP 200)
From 192.168.78.95 ... ✅ ALLOWED (HTTP 200)
From 192.168.77.81 ... ✅ ALLOWED (HTTP 200)
From 192.168.64.143 ... ✅ ALLOWED (HTTP 200)
From 192.168.66.206 ... ✅ ALLOWED (HTTP 200)
From 192.168.65.31 ... ✅ ALLOWED (HTTP 200)
From 192.168.94.236 ... ✅ ALLOWED (HTTP 200)
From 192.168.65.148 ... ✅ ALLOWED (HTTP 200)
From 192.168.86.11 ... ✅ ALLOWED (HTTP 200)
From 192.168.94.166 ... ✅ ALLOWED (HTTP 200)

🧹 Cleaning up test-server pod...

On a disallowed port

dev-dsk-pavanipt-2a-0981017d % ./check-egress.sh multi-nic-pod-64b986c4bf-m9ttv multi-nic 8080
🔧 Creating test-server pod in namespace multi-nic...
pod/test-server condition met
✅ Test server is ready at 192.168.71.27:8080

🚀 Starting EGRESS test from each interface on multi-nic-pod-64b986c4bf-m9ttv
From 127.0.0.1 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.92.110 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.78.132 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.74.64 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.81.237 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.92.3 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.82.186 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.93.98 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.69.117 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.94.35 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.69.172 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.89.140 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.68.48 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.84.181 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.72.204 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.68.207 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.95.243 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.86.88 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.90.86 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.78.95 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.77.81 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.64.143 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.66.206 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.65.31 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.94.236 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.65.148 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.86.11 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.94.166 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)

🧹 Cleaning up test-server pod...

Allow port 8080 now and verify that the policy update is applied

dev-dsk-pavanipt-2a-0981017d % ./check-egress.sh multi-nic-pod-64b986c4bf-m9ttv multi-nic 8080
🔧 Creating test-server pod in namespace multi-nic...
pod/test-server condition met
✅ Test server is ready at 192.168.64.43:8080

🚀 Starting EGRESS test from each interface on multi-nic-pod-64b986c4bf-m9ttv
From 127.0.0.1 ... command terminated with exit code 28
❌ BLOCKED or FAILED (code 000)
From 192.168.92.110 ... ✅ ALLOWED (HTTP 200)
From 192.168.78.132 ... ✅ ALLOWED (HTTP 200)
From 192.168.74.64 ... ✅ ALLOWED (HTTP 200)
From 192.168.81.237 ... ✅ ALLOWED (HTTP 200)
From 192.168.92.3 ... ✅ ALLOWED (HTTP 200)
From 192.168.82.186 ... ✅ ALLOWED (HTTP 200)
From 192.168.93.98 ... ✅ ALLOWED (HTTP 200)
From 192.168.69.117 ... ✅ ALLOWED (HTTP 200)
From 192.168.94.35 ... ✅ ALLOWED (HTTP 200)
From 192.168.69.172 ... ✅ ALLOWED (HTTP 200)
From 192.168.89.140 ... ✅ ALLOWED (HTTP 200)
From 192.168.68.48 ... ✅ ALLOWED (HTTP 200)
From 192.168.84.181 ... ✅ ALLOWED (HTTP 200)
From 192.168.72.204 ... ✅ ALLOWED (HTTP 200)
From 192.168.68.207 ... ✅ ALLOWED (HTTP 200)
From 192.168.95.243 ... ✅ ALLOWED (HTTP 200)
From 192.168.86.88 ... ✅ ALLOWED (HTTP 200)
From 192.168.90.86 ... ✅ ALLOWED (HTTP 200)
From 192.168.78.95 ... ✅ ALLOWED (HTTP 200)
From 192.168.77.81 ... ✅ ALLOWED (HTTP 200)
From 192.168.64.143 ... ✅ ALLOWED (HTTP 200)
From 192.168.66.206 ... ✅ ALLOWED (HTTP 200)
From 192.168.65.31 ... ✅ ALLOWED (HTTP 200)
From 192.168.94.236 ... ✅ ALLOWED (HTTP 200)
From 192.168.65.148 ... ✅ ALLOWED (HTTP 200)
From 192.168.86.11 ... ✅ ALLOWED (HTTP 200)
From 192.168.94.166 ... ✅ ALLOWED (HTTP 200)

🧹 Cleaning up test-server pod...

Normal pod probe attach

{"level":"info","ts":"2025-06-30T02:35:03.023Z","caller":"rpc/rpc.pb.go:1314","msg":"Received Enforce Network Policy Request for Pod: normal-pod-694465576f-d6m44 Namespace: multi-nic Mode: standard"}
{"level":"info","ts":"2025-06-30T02:35:13.023Z","caller":"rpc/rpc_handler.go:76","msg":"No map instance found"}{"level":"debug","ts":"2025-06-30T02:35:13.024Z","caller":"rpc/rpc_handler.go:77","msg":"Got the attachProbesLock for Pod: normal-pod-694465576f-d6m44, Namespace: multi-nic, PodIdentifier: normal-pod-694465576f-multi-nic"}
{"level":"info","ts":"2025-06-30T02:35:13.024Z","caller":"rpc/rpc_handler.go:77","msg":"AttacheBPFProbes for pod normal-pod-694465576f-d6m44 in namespace multi-nic with hostVethName eni3271e6da000 at interface 0"}
{"level":"info","ts":"2025-06-30T02:35:13.024Z","caller":"ebpf/bpf_client.go:779","msg":"Load the eBPF program"}
{"level":"info","ts":"2025-06-30T02:35:13.076Z","caller":"ebpf/bpf_client.go:779","msg":"Prog Load Succeeded for ingress, progFD: 20, pinpath: /sys/fs/bpf/globals/aws/programs/normal-pod-694465576f-multi-nic_handle_ingress"}{"level":"info","ts":"2025-06-30T02:35:13.076Z","caller":"ebpf/bpf_client.go:722","msg":"Attempting to do an Ingress Attach with progFD: 20"}
{"level":"info","ts":"2025-06-30T02:35:13.094Z","caller":"rpc/rpc_handler.go:77","msg":"Successfully attached Ingress TC probe for pod: normal-pod-694465576f-d6m44 in namespace multi-nic atinterface 0"}
{"level":"info","ts":"2025-06-30T02:35:13.094Z","caller":"ebpf/bpf_client.go:813","msg":"Load the eBPF program"}
{"level":"info","ts":"2025-06-30T02:35:13.144Z","caller":"ebpf/bpf_client.go:813","msg":"Prog Load Succeeded for egress, progFD: 23, pinpath: /sys/fs/bpf/globals/aws/programs/normal-pod-694465576f-multi-nic_handle_egress"}{"level":"info","ts":"2025-06-30T02:35:13.144Z","caller":"ebpf/bpf_client.go:735","msg":"Attempting to do an Egress Attach with progFD: 23"}
{"level":"info","ts":"2025-06-30T02:35:13.157Z","caller":"rpc/rpc_handler.go:77","msg":"Successfully attached Egress TC probe for pod: normal-pod-694465576f-d6m44 in namespace multi-nic at interface 0"}

Multi-NIC pod on restart

{"level":"info","ts":"2025-06-30T02:32:00.267Z","caller":"ebpf/bpf_client.go:301","msg":"Connected to ipamd grpc endpoint. NetworkPolicyMode: standard MultiNICEnabled: true"}
{"level":"debug","ts":"2025-06-30T02:32:00.268Z","caller":"ebpf/bpf_client.go:311","msg":"Cached interface count for pod multi-nic-pod-64b986c4bf-qqg8bmulti-nic: 27"}
{"level":"info","ts":"2025-06-30T02:32:00.268Z","caller":"ebpf/bpf_client.go:311","msg":"Loaded IPAM data from 2 allocations"}
{"level":"debug","ts":"2025-06-30T02:32:00.376Z","caller":"controllers/policyendpoints_controller.go:316","msg":"Got the attachProbesLock for Pod: multi-nic-pod-64b986c4bf-qqg8b, Namespace:multi-nic, PodIdentifier: multi-nic-pod-64b986c4bf-multi-nic"}
{"level":"info","ts":"2025-06-30T02:32:00.376Z","caller":"ebpf/bpf_client.go:702","msg":"Found interface count 27 from IPAM cache for pod multi-nic-pod-64b986c4bf-qqg8b"}
{"level":"info","ts":"2025-06-30T02:32:00.376Z","caller":"controllers/policyendpoints_controller.go:316","msg":"AttacheBPFProbes for pod multi-nic-pod-64b986c4bf-qqg8b in namespace multi-nic with hostVethName eni7a8ac004134 at interface 0"}
{"level":"info","ts":"2025-06-30T02:32:00.376Z","caller":"ebpf/bpf_client.go:722","msg":"Found an existing instance, let's derive the ingress context.."}
{"level":"info","ts":"2025-06-30T02:32:00.376Z","caller":"ebpf/bpf_client.go:722","msg":"Attempting to do an Ingress Attach with progFD: 16"}
{"level":"info","ts":"2025-06-30T02:32:00.391Z","caller":"controllers/policyendpoints_controller.go:316","msg":"Successfully attached Ingress TC probe for pod: multi-nic-pod-64b986c4bf-qqg8b in namespace multi-nic at interface 0"}
{"level":"info","ts":"2025-06-30T02:32:00.391Z","caller":"ebpf/bpf_client.go:735","msg":"Found an existing instance, let's derive the egress context.."}
{"level":"info","ts":"2025-06-30T02:32:00.391Z","caller":"ebpf/bpf_client.go:735","msg":"Attempting to do an Egress Attach with progFD: 15"}
{"level":"info","ts":"2025-06-30T02:32:00.404Z","caller":"controllers/policyendpoints_controller.go:316","msg":"Successfully attached Egress TC probe for pod: multi-nic-pod-64b986c4bf-qqg8bin namespace multi-nic at interface 0"}
{"level":"info","ts":"2025-06-30T02:32:00.404Z","caller":"controllers/policyendpoints_controller.go:316","msg":"AttacheBPFProbes for pod multi-nic-pod-64b986c4bf-qqg8b in namespace multi-nic with hostVethName eni6a26de29b06 at interface 1"}
{"level":"info","ts":"2025-06-30T02:32:00.404Z","caller":"ebpf/bpf_client.go:722","msg":"Found an existing instance, let's derive the ingress context.."}
{"level":"info","ts":"2025-06-30T02:32:00.404Z","caller":"ebpf/bpf_client.go:722","msg":"Attempting to do an Ingress Attach with progFD: 16"}
{"level":"info","ts":"2025-06-30T02:32:00.418Z","caller":"controllers/policyendpoints_controller.go:316","msg":"Successfully attached Ingress TC probe for pod: multi-nic-pod-64b986c4bf-qqg8b in namespace multi-nic at interface 1"}
{"level":"info","ts":"2025-06-30T02:32:00.418Z","caller":"ebpf/bpf_client.go:735","msg":"Found an existing instance, let's derive the egress context.."}
{"level":"info","ts":"2025-06-30T02:32:00.418Z","caller":"ebpf/bpf_client.go:735","msg":"Attempting to do an Egress Attach with progFD: 15"}
{"level":"info","ts":"2025-06-30T02:32:00.433Z","caller":"controllers/policyendpoints_controller.go:316","msg":"Successfully attached Egress TC probe for pod: multi-nic-pod-64b986c4bf-qqg8bin namespace multi-nic at interface 1"}

Normal pod on restart

{"level":"info","ts":"2025-06-30T02:37:40.615Z","caller":"ebpf/bpf_client.go:301","msg":"Connected to ipamd grpc endpoint. NetworkPolicyMode: standard MultiNICEnabled: true"}{"level":"debug","ts":"2025-06-30T02:37:40.616Z","caller":"ebpf/bpf_client.go:311","msg":"Cached interface count for pod normal-pod-694465576f-d6m44multi-nic: 1"}
{"level":"info","ts":"2025-06-30T02:37:40.616Z","caller":"ebpf/bpf_client.go:311","msg":"Loaded IPAM data from 5 allocations"}
{"level":"info","ts":"2025-06-30T02:37:40.727Z","caller":"ebpf/bpf_client.go:702","msg":"Found interface count 1 from IPAM cache for pod normal-pod-694465576f-d6m44"}
{"level":"info","ts":"2025-06-30T02:37:40.727Z","caller":"controllers/policyendpoints_controller.go:316","msg":"AttacheBPFProbes for pod normal-pod-694465576f-d6m44 in namespace multi-nic with hostVethName eni3271e6da000 at interface 0"}
{"level":"info","ts":"2025-06-30T02:37:40.727Z","caller":"ebpf/bpf_client.go:722","msg":"Found an existing instance, let's derive the ingress context.."}
{"level":"info","ts":"2025-06-30T02:37:40.727Z","caller":"ebpf/bpf_client.go:722","msg":"Attempting to do an Ingress Attach with progFD: 16"}
{"level":"info","ts":"2025-06-30T02:37:40.741Z","caller":"controllers/policyendpoints_controller.go:316","msg":"Successfully attached Ingress TC probe for pod: normal-pod-694465576f-d6m44 in namespace multi-nic at interface 0"}
{"level":"info","ts":"2025-06-30T02:37:40.741Z","caller":"ebpf/bpf_client.go:735","msg":"Found an existing instance, let's derive the egress context.."}
{"level":"info","ts":"2025-06-30T02:37:40.741Z","caller":"ebpf/bpf_client.go:735","msg":"Attempting to do an Egress Attach with progFD: 15"}
{"level":"info","ts":"2025-06-30T02:37:40.754Z","caller":"controllers/policyendpoints_controller.go:316","msg":"Successfully attached Egress TC probe for pod: normal-pod-694465576f-d6m44 innamespace multi-nic at interface 0"}

If a reconcile request comes first

{"level":"debug","ts":"2025-06-30T02:53:40.788Z","caller":"controllers/policyendpoints_controller.go:316","msg":"Got the attachProbesLock for Pod: multi-nic-pod-64b986c4bf-2h6r2, Namespace:multi-nic, PodIdentifier: multi-nic-pod-64b986c4bf-multi-nic"}
{"level":"error","ts":"2025-06-30T02:53:40.788Z","caller":"controllers/policyendpoints_controller.go:294","msg":"Failed to attach eBPF probes for pod multi-nic-pod-64b986c4bf-2h6r2: skipping probe attach: multiNIC enabled and probes not already attached"}
{"level":"error","ts":"2025-06-30T02:53:40.788Z","caller":"controllers/policyendpoints_controller.go:141","msg":"Error configuring eBPF Probes skipping probe attach: multiNIC enabled and probes not already attached"}

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.


Copilot AI left a comment


Pull Request Overview

This PR extends eBPF probe attachment to support multi-NIC pods by determining interface counts and iterating attachments across all interfaces.

  • Introduces a numInterfaces parameter and getInterfaceCountForPod logic backed by the IPAM cache (sketched below).
  • Loads IPAM allocations from JSON at startup when multi-NIC is enabled.
  • Updates probe attachment to loop over each interface and share BPF programs.
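
Roughly, that count-resolution path could look like this (a sketch only: the ipamInterfaceCache field name is hypothetical, while the signature and error string mirror the review snippets further below):

// Hypothetical sketch -- the cache field name is an assumption; the
// signature and error text mirror the review snippets shown below.
func (l *bpfClient) getInterfaceCountForPod(pod types.NamespacedName, podIdentifier string, providedCount int) (int, error) {
	if providedCount > 0 {
		return providedCount, nil // count supplied with the attach request
	}
	// Fallback: consult the IPAM allocations loaded at startup, keyed by
	// pod name + namespace (matching the cache log lines above).
	if count, ok := l.ipamInterfaceCache[pod.Name+pod.Namespace]; ok {
		return count, nil
	}
	return 0, errors.New("Skipping probe attach: multiNIC enabled and interface count is unknown")
}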

Reviewed Changes

Copilot reviewed 8 out of 9 changed files in this pull request and generated 2 comments.

Summary per file:

  • pkg/utils/utils.go: Modify GetHostVethName signature to accept interfaceIndex
  • pkg/rpc/rpc_handler.go: Pass InterfaceCount into the AttacheBPFProbes call
  • pkg/ebpf/bpf_client.go: Load IPAM JSON, determine interface count, and attach probes in a loop
  • pkg/ebpf/bpf_client_test.go: Add tests for multi-NIC scenarios and getInterfaceCountForPod
  • pkg/ebpf/bpf_client_mock.go: Update the mock client to the new AttacheBPFProbes signature and mode API
  • controllers/policyendpoints_controller.go: Call the updated AttacheBPFProbes with the unknown-count constant
  • go.mod: Bump various dependencies
Comments suppressed due to low confidence (2)

pkg/utils/utils.go:176

  • The variable name errors is ambiguous and shadows the standard library errors package; consider renaming it to err or aggErr for clarity.
	var errors error

pkg/ebpf/bpf_client.go:638

  • The podIdentifier parameter is not used in this function body; consider removing it to simplify the signature.
func (l *bpfClient) getInterfaceCountForPod(pod types.NamespacedName, podIdentifier string, providedCount int) (int, error) {

if err != nil {
log().Errorf("Attaching eBPF probe failed for pod %s namespace %s : %v", pod.Name, pod.Namespace, err)
log().Errorf("Failed to attach eBPF probes for pod %s: %v", pod.Name, err)
Copilot AI · Jun 30, 2025


This error log omits the pod's namespace, which can be useful for debugging; include both name and namespace, e.g., pod %s/%s.

Suggested change
log().Errorf("Failed to attach eBPF probes for pod %s: %v", pod.Name, err)
log().Errorf("Failed to attach eBPF probes for pod %s/%s: %v", pod.Namespace, pod.Name, err)


return 0, errors.New("Skipping probe attach: multiNIC enabled and interface count is unknown")
}

func (l *bpfClient) AttacheBPFProbes(pod types.NamespacedName, podIdentifier string, numInterfaces int) error {
Copilot AI · Jun 30, 2025

[nitpick] This method combines interface count logic and a looped attachment flow, making it lengthy; consider extracting helper functions (e.g., for count determination and per-interface attachment) to improve readability.
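
One possible shape for that extraction (a hypothetical sketch, not code from this PR): pull the per-interface work into a helper, so that AttacheBPFProbes reduces to count resolution plus a loop over this helper.

// Hypothetical refactor sketch responding to the nitpick above.
func (l *bpfClient) attachProbesToInterface(pod types.NamespacedName, podIdentifier string, ifIndex int) error {
	hostVethName := utils.GetHostVethName(pod.Name, pod.Namespace, ifIndex)
	if err := l.attachIngressBPFProbe(hostVethName, podIdentifier); err != nil {
		return err
	}
	return l.attachEgressBPFProbe(hostVethName, podIdentifier)
}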

