
Separate multi-tenant log data with management zones

PeterR
Helper

 

Description
Currently we separate tenants on our Kubernetes platform using management zones.
Each tenant can have multiple namespaces, which are prefixed with the tenant name.
For example: tenant1-namespace1, tenant1-namespace2, tenant2-namespace1.
A tenant represents a process group that is tied to a management zone, a policy, and one or more synced (SSO) SCIM Active Directory groups.
We're using container groups (process group naming) to create a process group tenant1 containing all of that tenant's workloads (tenant1-namespace1, tenant1-namespace2).
Each tenant (dt.entity.process_group_instance) has its own process group (tenant1) and its own management zone (tenant1).
This works perfectly.
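To sanity-check this setup in Grail, a query along these lines should list each tenant's process group instances. This is only a sketch: it assumes the entity table is queryable as dt.entity.process_group_instance and that the tenant name appears in entity.name, which may differ per environment.

```dql
// List process group instances whose name carries the tenant prefix
fetch dt.entity.process_group_instance
| filter matchesPhrase(entity.name, "tenant1")
| fields entity.name, id
```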

 

Now we also want to isolate the log data between tenants, so that each tenant only sees their own logs.
Tenants can still share data (like logs and dashboards) deliberately, but primarily they should only see the logs of their own workloads and of the underlying platform.

 

The problem is that the log viewer doesn't seem to be able to filter on management zones, on dt.entity.process_group_instance, or on any other field in a way that separates logs between tenants/users.
So a tenant either sees the logs of all other tenants on the whole platform, or no logs at all.
 
How can we tackle this?
Other platforms support this. The method of log ingestion is the API, Fluentd, or OneAgent itself, but that is not really relevant in this case: we have plenty of ways to identify the tenant, using fields like dt.entity.process_group_instance, Kubernetes labels, or the payload itself.
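For API ingestion, the tenant can be carried on each record as a custom attribute. A minimal payload sketch for the generic log ingest endpoint (POST /api/v2/logs/ingest); the attribute values here are hypothetical placeholders:

```json
[
  {
    "content": "Order 4711 processed",
    "log.source": "tenant1-namespace1/orders",
    "k8s.namespace.name": "tenant1-namespace1",
    "tenant": "tenant1"
  }
]
```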

 

Our approach:
We now have the following policy for logging on each tenant that looks like this:
ALLOW storage:buckets:read;
ALLOW storage:system:read,
storage:events:read,
storage:logs:read,
storage:metrics:read,
storage:entities:read,
storage:bizevents:read,
storage:spans:read;
ALLOW environment:roles:viewer WHERE environment:management-zone = "tenant1-mgmtzone";
ALLOW environment:roles:logviewer WHERE environment:management-zone = "tenant1-mgmtzone";

 

What we want is something like this, but that doesn't seem to work:
ALLOW storage:system:read WHERE environment:management-zone = "tenant1-mgmtzone";
ALLOW storage:events:read WHERE environment:management-zone = "tenant1-mgmtzone";
ALLOW storage:buckets:read WHERE storage:table-name = "tenant1-mgmtzone";
ALLOW storage:metrics:read WHERE environment:management-zone = "tenant1-mgmtzone";
ALLOW storage:entities:read WHERE environment:management-zone = "tenant1-mgmtzone";
ALLOW storage:bizevents:read WHERE environment:management-zone = "tenant1-mgmtzone";
ALLOW storage:spans:read WHERE environment:management-zone = "tenant1-mgmtzone";
ALLOW environment:roles:viewer WHERE environment:management-zone = "tenant1-mgmtzone";

 

Links
Logs and events, advanced mode using Grail

Hi, it's more about data ownership: the people in one management zone should only be able to read their own data (log data), regardless of the form of the data. In our case this is JSON and syslog-compatible logging, but that only matters for processing.

PeterR
Helper

We've made a feature request and we are investigating some new possibilities:
https://community.dynatrace.com/t5/Product-ideas/opt-in-logging-using-kubernetes-labels/idi-p/219533

Vakho
Helper

Hello PeterR, 
 Is this something you are looking for? 
(screenshot attached: 213213123.PNG)

Hi, unfortunately no. In Kubernetes, tenants are separated by data ownership at the application level (namespaces, labels, and annotations), not by underlying hosts; pods and containers share the underlying hosts. It's an abstraction layer, so to speak. But thanks for your reply.

Theodore_x86
Helper

Hello all.

I think this should be a new idea. Filter logs based on Management Zone.

PeterR
Helper

Hi, we have the following workaround to achieve this.
It took a long time to figure this out.
For the record: we use Capsule in Kubernetes to separate teams.

Step 1: 
Process groups are one of the few elements that support filtering on an annotation or label.
Create a process group based on an annotation or label.
 
Example:
Process group Rule name: 
Tenant:YOURTEAM
Process group name format
        {ProcessGroup:DetectedName} Tenant:YOURTEAM
Process group Rule:
Process groups on Kubernetes namespace where capsule.clastix.io/tenant equals 'YOURTEAM'
 
 
Step 2:
Create a security context rule that matches this process group to tag the logs
 
Example:
Rule name:
YOURTEAM-context
Matcher:
matchesPhrase(dt.process.name, "*Tenant:YOURTEAM")
Value:
YOURTEAM
 
 
Step 3:
Optional: create bucket assignment rules that keep DEBUG logs for 7 days and all other logs for 30 days
 
Rule-name:
YOURTEAM-log-storage-7-rule
Matcher DQL:
matchesPhrase(dt.kubernetes.cluster.name, "YOURCLUSTER") AND ( matchesValue(loglevel, "DEBUG") OR matchesValue(loglevel, "NONE"))
Bucket:
YOUR_TEAM-bucket-log-storage-7
 
Rule-name:
YOURTEAM-log-storage-30-rule
Matcher DQL:
matchesPhrase(dt.kubernetes.cluster.name, "YOURCLUSTER") AND ( matchesValue(loglevel, "EMERGENCY") OR matchesValue(loglevel, "ERROR") OR matchesValue(loglevel, "ALERT") OR matchesValue(loglevel, "CRITICAL") OR matchesValue(loglevel, "SEVERE") OR matchesValue(loglevel, "WARN") OR matchesValue(loglevel, "NOTICE") OR matchesValue(loglevel, "INFO"))
Bucket:
YOUR_TEAM-bucket-log-storage-30
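If the assignment rules work, each bucket can be queried directly to check what landed in it. A sketch, assuming the record field dt.system.bucket carries the bucket name (bucket names as defined above):

```dql
// The 7-day bucket should only contain DEBUG/NONE log levels
fetch logs
| filter dt.system.bucket == "YOUR_TEAM-bucket-log-storage-7"
| summarize count(), by:{loglevel}
```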
 
Step 4 Testing with DQL
Now you can query your logs with the following DQL. Each log line will have a dt.process.name of the form "<process group> Tenant:YOURTEAM".

fetch logs //, scanLimitGBytes: 500, samplingRatio: 1000
| sort timestamp desc
| filter matchesValue(dt.security_context, "YOURTEAM")
| filter matchesValue(loglevel, "INFO")
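To verify the isolation end to end, a quick breakdown per security context (a sketch) is useful: once the IAM policy below is applied, a team member running it should only see their own YOURTEAM value.

```dql
// Breakdown of visible log lines per security context
fetch logs
| summarize count(), by:{dt.security_context}
```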

Step 5 Opt-in logs based on annotation
Because you now have tenant-specific process groups, you can create a Settings > Log Monitoring > Log ingest rule
that imports logs based on Kubernetes annotations/labels in combination with namespace wildcards!
Create the following rule:
Rule name:
  YOURTEAM
Condition:
  Condition attribute:
  K8S namespace name is any of: 
  Value:
  YOURTEAM-*
Condition:
  Matcher attribute: Process group
  Value: *Tenant:YOURTEAM
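Once the ingest rule is active, a breakdown per namespace (a sketch) makes it easy to confirm that only the opted-in YOURTEAM-* namespaces are being ingested:

```dql
// Only namespaces matching YOURTEAM-* should appear here
fetch logs
| filter matchesValue(dt.security_context, "YOURTEAM")
| summarize count(), by:{k8s.namespace.name}
```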
  

Step 6 IAM policy 
Each team needs to have the following IAM policy, named YOURTEAM-policy. Note that you only need two buckets; security contexts handle the multi-tenancy (the per-YOURTEAM split).
ALLOW environment:roles:viewer WHERE environment:management-zone = "YOURTEAM";
ALLOW storage:buckets:read WHERE storage:bucket-name = "YOUR_ORGANISATION_log-storage-7";
ALLOW storage:buckets:read WHERE storage:bucket-name = "YOUR_ORGANISATION_log-storage-30";
ALLOW storage:metrics:read WHERE storage:k8s.namespace.name STARTSWITH "YOURTEAM";
ALLOW storage:logs:read WHERE storage:dt.security_context = "YOURTEAM";
ALLOW platform-management:tenants:write;
ALLOW storage:buckets:read WHERE storage:bucket-name = "default_logs";
ALLOW storage:buckets:read WHERE storage:table-name = "metrics";
ALLOW app-engine:apps:run, app-engine:functions:run;
ALLOW automation:workflows:read, automation:rules:read, automation:calendars:read;
ALLOW document:documents:read, document:documents:write, document:documents:delete, document:environment-shares:read, document:environment-shares:write;
ALLOW document:environment-shares:claim, document:environment-shares:delete, document:direct-shares:read, document:direct-shares:write, document:direct-shares:delete;
ALLOW state:app-states:read, state:app-states:write, state:app-states:delete, state:user-app-states:read;
ALLOW state:user-app-states:write,state:user-app-states:delete, app-settings:objects:read;
ALLOW hub:catalog:read;
ALLOW environment:roles:manage-settings WHERE environment:management-zone = "YOURTEAM";
ALLOW environment:roles:logviewer WHERE environment:management-zone = "YOURTEAM";
ALLOW storage:entities:read;

Step 7 Terraform everything
Optional but recommended! Export these settings for YOURTEAM to Terraform code with terraform --export (see the Dynatrace docs) and create them for 50+ teams to realize full multi-tenancy for Kubernetes.
