Here are my findings about LinkerD Service Mesh and its automatic mutual TLS (mTLS) feature, which encrypts communications between meshed Pods without requiring any modification to them.
We'll start from a slightly (ok, severely) outdated K8S cluster to test the appropriate update procedure, too.
This article is meant as a work-in-progress.
Cert-Manager installation
[test0|default] ferdi@DESKTOP-NL6I2OD:~$ helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.10.1 \
--set installCRDs=true
NAME: cert-manager
LAST DEPLOYED: Fri Dec 9 09:39:17 2022
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
cert-manager v1.10.1 has been deployed successfully!
In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
More information on the different types of issuers and how to configure them
can be found in our documentation:
https://cert-manager.io/docs/configuration/
For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:
https://cert-manager.io/docs/usage/ingress/
Root CA certificate
We need a long-lived top-level certificate to serve as the trust anchor for our mTLS certificates, so we'll create one ourselves with the aid of Step CLI.
Creation
[test0|default] ferdi@DESKTOP-NL6I2OD:~$ step certificate create root.linkerd.cluster.local ca.crt ca.key \
--profile root-ca \
--no-password \
--insecure \
--not-after=87600h
Your certificate has been saved in ca.crt.
Your private key has been saved in ca.key.
We're creating it with a 10-year validity, plenty of time to experiment!
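If you want to double-check the validity window, Step CLI can also inspect the freshly created certificate (purely an optional sanity check; the exact output layout may vary between versions):

step certificate inspect ca.crt --short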
Loading it into K8S
[test0|default] ferdi@DESKTOP-NL6I2OD:~$ kubectl create namespace linkerd
namespace/linkerd created
[test0|default] ferdi@DESKTOP-NL6I2OD:~$ kubectl create secret tls \
linkerd-trust-anchor \
--cert=ca.crt \
--key=ca.key \
--namespace=linkerd
secret/linkerd-trust-anchor created
We created the linkerd namespace in advance so that we can instruct Linkerd's Helm Chart to use the new certificate.
Hooking it up to Cert-Manager
We're going to need an Issuer resource referencing our Secret:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: linkerd-trust-anchor
  namespace: linkerd
spec:
  ca:
    secretName: linkerd-trust-anchor
And a Certificate generated by this Issuer:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: linkerd-identity-issuer
  namespace: linkerd
spec:
  secretName: linkerd-identity-issuer
  duration: 48h
  renewBefore: 25h
  issuerRef:
    name: linkerd-trust-anchor
    kind: Issuer
  commonName: identity.linkerd.cluster.local
  dnsNames:
  - identity.linkerd.cluster.local
  isCA: true
  privateKey:
    algorithm: ECDSA
  usages:
  - cert sign
  - crl sign
  - server auth
  - client auth
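Assuming both manifests are saved to a single file (linkerd-identity.yml is just a name of my choosing), applying them is a one-liner:

kubectl apply -f linkerd-identity.yml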
After applying these Custom Resources, the situation will be:
[test0|linkerd] ferdi@DESKTOP-NL6I2OD:~$ kubectl get secrets,issuers,certificates
NAME TYPE DATA AGE
secret/default-token-pbgcg kubernetes.io/service-account-token 3 8m10s
secret/linkerd-identity-issuer kubernetes.io/tls 3 29s
secret/linkerd-trust-anchor kubernetes.io/tls 2 7m33s
NAME READY AGE
issuer.cert-manager.io/linkerd-trust-anchor True 41s
NAME READY SECRET AGE
certificate.cert-manager.io/linkerd-identity-issuer True linkerd-identity-issuer 29s
LinkerD
First of all, we'll need the CRDs
First installation
[test0|linkerd] ferdi@DESKTOP-NL6I2OD:~$ helm install linkerd-crds -n linkerd --create-namespace linkerd/linkerd-crds
NAME: linkerd-crds
LAST DEPLOYED: Fri Dec 9 10:08:26 2022
NAMESPACE: linkerd
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The linkerd-crds chart was successfully installed 🎉
To complete the linkerd core installation, please now proceed to install the
linkerd-control-plane chart in the linkerd namespace.
Looking for more? Visit https://linkerd.io/2/getting-started/
Then LinkerD itself, within the already existing Namespace:
[test0|linkerd] ferdi@DESKTOP-NL6I2OD:~$ helm install linkerd-control-plane -n linkerd \
--set-file identityTrustAnchorsPEM=ca.crt \
--set identity.issuer.scheme=kubernetes.io/tls \
linkerd/linkerd-control-plane
NAME: linkerd-control-plane
LAST DEPLOYED: Fri Dec 9 10:09:58 2022
NAMESPACE: linkerd
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Linkerd control plane was successfully installed 🎉
To help you manage your Linkerd service mesh you can install the Linkerd CLI by running:
curl -sL https://run.linkerd.io/install | sh
Alternatively, you can download the CLI directly via the Linkerd releases page:
https://github.com/linkerd/linkerd2/releases/
To make sure everything works as expected, run the following:
linkerd check
The viz extension can be installed by running:
helm install linkerd-viz linkerd/linkerd-viz
Looking for more? Visit https://linkerd.io/2/getting-started/
Course correction
My single-host K8S test cluster is running on top of Ubuntu 22.04, using the RKE1 Kubernetes distribution from Rancher (now part of SUSE).
This simple fact led me to the dire situation below, after installing the Helm Chart as shown above.
[test0|linkerd] ferdi@DESKTOP-NL6I2OD:~$ kubectl get po
NAME READY STATUS RESTARTS AGE
linkerd-destination-54bc79696d-xcw7f 0/4 Init:CrashLoopBackOff 7 14m
linkerd-identity-76578bd5b9-68vrc 0/2 Init:CrashLoopBackOff 7 14m
linkerd-proxy-injector-746646f874-j45l4 0/2 Init:CrashLoopBackOff 7 14m
Let's check what's wrong with all of them!
[test0|linkerd] ferdi@DESKTOP-NL6I2OD:~$ kubectl describe po linkerd-identity-76578bd5b9-68vrc|grep Message
Message: 2-12-09T09:21:28Z" level=info msg="iptables-save v1.8.8 (legacy): Cannot initialize: Permission denied (you must be root)\n\n"
Type Reason Age From Message
[test0|linkerd] ferdi@DESKTOP-NL6I2OD:~$ kubectl describe po linkerd-destination-54bc79696d-xcw7f|grep Message
Message: 2-12-09T09:21:15Z" level=info msg="iptables-save v1.8.8 (legacy): Cannot initialize: Permission denied (you must be root)\n\n"
Type Reason Age From Message
There's a reference to this error; at the bottom of the thread there is a suggestion about a Helm Chart parameter that could save the day.
[test0|linkerd] ferdi@DESKTOP-NL6I2OD:~$ helm upgrade --install linkerd-control-plane -n linkerd \
--set-file identityTrustAnchorsPEM=ca.crt \
--set identity.issuer.scheme=kubernetes.io/tls \
--set "proxyInit.iptablesMode=nft" \
linkerd/linkerd-control-plane
Release "linkerd-control-plane" has been upgraded. Happy Helming!
NAME: linkerd-control-plane
LAST DEPLOYED: Fri Dec 9 10:33:13 2022
NAMESPACE: linkerd
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
The Linkerd control plane was successfully installed 🎉
To help you manage your Linkerd service mesh you can install the Linkerd CLI by running:
curl -sL https://run.linkerd.io/install | sh
Alternatively, you can download the CLI directly via the Linkerd releases page:
https://github.com/linkerd/linkerd2/releases/
To make sure everything works as expected, run the following:
linkerd check
The viz extension can be installed by running:
helm install linkerd-viz linkerd/linkerd-viz
Looking for more? Visit https://linkerd.io/2/getting-started/
Unfortunately, it didn't help much; below you'll find the relevant excerpt of kubectl describe pod.
Reason: Error
Message: ave v1.8.8 (nf_tables): Could not fetch rule set generation id: Permission denied (you must be root)\n\n"
time="2022-12-09T09:43:09Z" level=error msg="aborting firewall configuration"
Error: exit status 4
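For context, Ubuntu 22.04 ships with the nf_tables backend for iptables by default. A quick way to see which variant a host is actually using (purely a diagnostic aside, not part of the fix) is:

iptables --version
update-alternatives --display iptables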
Serious troubleshooting here
Ok, let's check the default values of the Helm Chart to find out something that could help us.
[test0|linkerd] ferdi@DESKTOP-NL6I2OD:~$ helm show values linkerd/linkerd-control-plane > linkerd-values.yml
[test0|linkerd] ferdi@DESKTOP-NL6I2OD:~$
Here's the relevant section:
# proxy-init configuration
proxyInit:
  # -- Variant of iptables that will be used to configure routing. Currently,
  # proxy-init can be run either in 'nft' or in 'legacy' mode. The mode will
  # control which utility binary will be called. The host must support
  # whichever mode will be used
  iptablesMode: "legacy"
  # -- Default set of inbound ports to skip via iptables
  # - Galera (4567,4568)
  ignoreInboundPorts: "4567,4568"
  # -- Default set of outbound ports to skip via iptables
  # - Galera (4567,4568)
  ignoreOutboundPorts: "4567,4568"
  # -- Comma-separated list of subnets in valid CIDR format that should be skipped by the proxy
  skipSubnets: ""
  # -- Log level for the proxy-init
  # @default -- info
  logLevel: ""
  # -- Log format (`plain` or `json`) for the proxy-init
  # @default -- plain
  logFormat: ""
  image:
    # -- Docker image for the proxy-init container
    name: cr.l5d.io/linkerd/proxy-init
    # -- Pull policy for the proxy-init container Docker image
    # @default -- imagePullPolicy
    pullPolicy: ""
    # -- Tag for the proxy-init container Docker image
    version: v2.0.0
  resources:
    cpu:
      # -- Maximum amount of CPU units that the proxy-init container can use
      limit: 100m
      # -- Amount of CPU units that the proxy-init container requests
      request: 100m
    memory:
      # -- Maximum amount of memory that the proxy-init container can use
      limit: 20Mi
      # -- Amount of memory that the proxy-init container requests
      request: 20Mi
    ephemeral-storage:
      # -- Maximum amount of ephemeral storage that the proxy-init container can use
      limit: ""
      # -- Amount of ephemeral storage that the proxy-init container requests
      request: ""
  closeWaitTimeoutSecs: 0
  # -- Allow overriding the runAsNonRoot behaviour (<https://github.com/linkerd/linkerd2/issues/7308>)
  runAsRoot: false
  # -- This value is used only if runAsRoot is false; otherwise runAsUser will be 0
  runAsUser: 65534
  xtMountPath:
    mountPath: /run
    name: linkerd-proxy-init-xtables-lock
Let's go with the plain old brute-force approach (in this case, running it as the root user).
[test0|linkerd] ferdi@DESKTOP-NL6I2OD:~$ helm upgrade --install linkerd-control-plane -n linkerd \
--set-file identityTrustAnchorsPEM=ca.crt \
--set identity.issuer.scheme=kubernetes.io/tls \
--set "proxyInit.iptablesMode=nft" \
--set "proxyInit.runAsRoot=true" \
linkerd/linkerd-control-plane
Release "linkerd-control-plane" does not exist. Installing it now.
Aaand... we did it! Everything's running!
[test0|linkerd] ferdi@DESKTOP-NL6I2OD:~$ kubectl get po
NAME READY STATUS RESTARTS AGE
linkerd-destination-7f5857f549-xsdjf 4/4 Running 0 79s
linkerd-identity-b7d47f674-2mrsm 2/2 Running 0 79s
linkerd-proxy-injector-5c5f849755-4xrz7 2/2 Running 0 79s
[test0|linkerd] ferdi@DESKTOP-NL6I2OD:~$
[test0|linkerd] ferdi@DESKTOP-NL6I2OD:~$
[test0|linkerd] ferdi@DESKTOP-NL6I2OD:~$ linkerd check
Linkerd core checks
===================
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API
kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version
linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ control plane pods are ready
√ cluster networks contains all node podCIDRs
√ cluster networks contains all pods
√ cluster networks contains all services
linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ proxy-init container runs as root user if docker container runtime is used
linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
‼ issuer cert is valid for at least 60 days
    issuer certificate will expire on 2022-12-11T09:04:05Z
    see https://linkerd.io/2.12/checks/#l5d-identity-issuer-cert-not-expiring-soon for hints
√ issuer cert is issued by the trust anchor
linkerd-webhooks-and-apisvc-tls
-------------------------------
√ proxy-injector webhook has valid cert
√ proxy-injector cert is valid for at least 60 days
√ sp-validator webhook has valid cert
√ sp-validator cert is valid for at least 60 days
√ policy-validator webhook has valid cert
√ policy-validator cert is valid for at least 60 days
linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date
control-plane-version
---------------------
√ can retrieve the control plane version
√ control plane is up-to-date
√ control plane and cli versions match
linkerd-control-plane-proxy
---------------------------
√ control plane proxies are healthy
√ control plane proxies are up-to-date
√ control plane proxies and cli versions match
Status check results are √
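The single warning is expected rather than worrying: we deliberately gave the issuer certificate a 48h lifetime, and cert-manager will keep re-issuing it (thanks to renewBefore: 25h) well before it expires. You can watch the renewal happen by checking the Certificate resource from time to time:

kubectl get certificate linkerd-identity-issuer -n linkerd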
Experiments with LinkerD
Protocols
Here you'll find a detailed explanation of LinkerD's proxying capabilities; I'm quoting some excerpts:
Linkerd is capable of proxying all TCP traffic, including TLS connections, WebSockets, and HTTP tunneling.
In most cases, Linkerd can do this without configuration. To accomplish this, Linkerd performs protocol detection to determine whether traffic is HTTP or HTTP/2 (including gRPC). If Linkerd detects that a connection is HTTP or HTTP/2, Linkerd automatically provides HTTP-level metrics and routing.
If Linkerd cannot determine that a connection is using HTTP or HTTP/2, Linkerd will proxy the connection as a plain TCP connection, applying mTLS and providing byte-level metrics as usual.
(Note that HTTPS calls to or from meshed pods are treated as TCP, not as HTTP. Because the client initiates the TLS connection, Linkerd is not able to decrypt the connection to observe the HTTP transactions.)
Linkerd maintains a default list of opaque ports that corresponds to the standard ports used by protocols that interact poorly with protocol detection. As of the 2.12 release, that list is: 25 (SMTP), 587 (SMTP), 3306 (MySQL), 4444 (Galera), 5432 (Postgres), 6379 (Redis), 9300 (ElasticSearch), and 11211 (Memcache).
The following table contains common protocols that may require additional configuration.
Protocol | Standard port(s) | In default list? | Notes
SMTP | 25, 587 | Yes |
MySQL | 3306 | Yes |
PostgreSQL | 5432 | Yes |
Redis | 6379 | Yes |
ElasticSearch | 9300 | Yes |
Memcache | 11211 | Yes |
MySQL with Galera | 3306, 4444, 4567, 4568 | Partially | Ports 4567 and 4568 are not in Linkerd's default set of opaque ports
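As the table suggests, ports outside the default list can be declared explicitly. One way (among others) is the config.linkerd.io/opaque-ports annotation; here's a minimal sketch for a hypothetical Galera Service (the Service name and port names are made up for illustration):

apiVersion: v1
kind: Service
metadata:
  name: galera
  namespace: default
  annotations:
    # mark the extra Galera ports as opaque so protocol detection is skipped
    config.linkerd.io/opaque-ports: "4444,4567,4568"
spec:
  selector:
    app: galera
  ports:
  - name: sst
    port: 4444
  - name: replication
    port: 4567
  - name: ist
    port: 4568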
Using LinkerD to secure the traffic
Here, as usual, there are some references.
Enabling default-deny firewall policy
Let's upgrade our LinkerD Helm Chart configuration once more:
[test0|linkerd] ferdi@DESKTOP-NL6I2OD:~$ helm upgrade --install linkerd-control-plane \
--namespace linkerd \
--set-file identityTrustAnchorsPEM=ca.crt \
--set identity.issuer.scheme=kubernetes.io/tls \
--set "proxyInit.iptablesMode=nft" \
--set "proxyInit.runAsRoot=true" \
--set "proxy.defaultInboundPolicy=deny" \
linkerd/linkerd-control-plane
This way, we ensure that LinkerD-meshed Pods can only be contacted by explicitly authorized clients.
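As a side note, the same default can also be applied more selectively (per namespace or per workload) with the config.linkerd.io/default-inbound-policy annotation instead of cluster-wide; a minimal sketch on a hypothetical namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: secured-apps
  annotations:
    # Pods injected in this namespace default to denying inbound traffic
    config.linkerd.io/default-inbound-policy: deny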
Test deployments
We're going to use a dedicated ServiceAccount for each Deployment (LinkerD identifies the client side of a connection by its ServiceAccount), and we'll show two different ways to select the server side of the allowed connection.
# File: whoami.yml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: whoami1
  namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: whoami2
  namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: whoami3
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: whoami1
  name: whoami1
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami1
  template:
    metadata:
      labels:
        app: whoami1
    spec:
      automountServiceAccountToken: false
      containers:
      - image: containous/whoami
        imagePullPolicy: Always
        name: whoami
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
      - args:
        - -c
        - sleep 999999999
        command:
        - /bin/bash
        image: nicolaka/netshoot
        imagePullPolicy: Always
        name: netshoot
        resources: {}
      serviceAccount: whoami1
      serviceAccountName: whoami1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: whoami2
  name: whoami2
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami2
  template:
    metadata:
      labels:
        app: whoami2
    spec:
      automountServiceAccountToken: false
      containers:
      - image: containous/whoami
        imagePullPolicy: Always
        name: whoami
        ports:
        - containerPort: 80
          protocol: TCP
      - args:
        - -c
        - sleep 999999999
        command:
        - /bin/bash
        image: nicolaka/netshoot
        imagePullPolicy: Always
        name: netshoot
      serviceAccount: whoami2
      serviceAccountName: whoami2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: whoami3
  name: whoami3
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami3
  template:
    metadata:
      labels:
        app: whoami3
    spec:
      automountServiceAccountToken: false
      containers:
      - image: containous/whoami
        imagePullPolicy: Always
        name: whoami
        ports:
        - containerPort: 80
          protocol: TCP
      - args:
        - -c
        - sleep 999999999
        command:
        - /bin/bash
        image: nicolaka/netshoot
        imagePullPolicy: Always
        name: netshoot
      serviceAccount: whoami3
      serviceAccountName: whoami3
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: whoami1
  name: whoami1
  namespace: default
spec:
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: whoami1
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: whoami2
  name: whoami2
  namespace: default
spec:
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: whoami2
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: whoami3
  name: whoami3
  namespace: default
spec:
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: whoami3
  sessionAffinity: None
  type: ClusterIP
LinkerD-izing deployments
The deployments defined above haven't been LinkerD-ized yet! Let's use the linkerd CLI tool to produce an edited version of the same YAML.
[test0|default] ferdi@DESKTOP-NL6I2OD:~/linkerdtest$ cat whoami.yml | linkerd inject - | tee whoami-injected.yml
Below there's an excerpt of the new YAML, detailing one of the updated Deployments:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: whoami1
  name: whoami1
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami1
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
      labels:
        app: whoami1
    spec:
      automountServiceAccountToken: false
      containers:
      - image: containous/whoami
        imagePullPolicy: Always
        name: whoami
        ports:
        - containerPort: 80
          protocol: TCP
      - args:
        - -c
        - sleep 999999999
        command:
        - /bin/bash
        image: nicolaka/netshoot
        imagePullPolicy: Always
        name: netshoot
      serviceAccount: whoami1
      serviceAccountName: whoami1
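The injected manifest can then be applied like any other, using the file produced by the tee above:

kubectl apply -f whoami-injected.yml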
Defining rules
We're going to define the firewall rules (LinkerD calls them ServerAuthorizations) first, and only after that the corresponding targets (Servers, in LinkerD terms).
This way, we're sure that our rules take effect as soon as the corresponding Server custom resources are created.
Identifying the destination (Server) by resource name
---
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: default
  name: whoami2-whoami1-auth
  labels:
    app.kubernetes.io/part-of: whoami
    app.kubernetes.io/name: whoami2-whoami1
spec:
  # the Server whoami1 can be contacted by Pods running with the whoami2 ServiceAccount
  server:
    name: whoami1-server
  client:
    meshTLS:
      serviceAccounts:
      - name: whoami2
Identifying the destination (Server) by resource label(s)
---
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: default
  name: whoami3-whoami1-auth
  labels:
    app.kubernetes.io/part-of: whoami
    app.kubernetes.io/name: whoami3-whoami1
spec:
  # the Server labeled app=whoami1 can be contacted by Pods running with the whoami3 ServiceAccount
  server:
    selector:
      matchLabels:
        app: whoami1
  client:
    meshTLS:
      serviceAccounts:
      - name: whoami3
Defining the Server (connection destination)
---
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: default
  name: whoami1-server
  labels:
    app: whoami1
spec:
  podSelector:
    matchLabels:
      app: whoami1
  port: 80
  proxyProtocol: HTTP/1
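Assuming the three manifests above are saved together (whoami-policy.yml is just a name of my choosing), a single apply brings the whole policy to life:

kubectl apply -f whoami-policy.yml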
Testing the ServerAuthorizations/Server combination
These are my currently running Pods, for reference:
[test0|default] ferdi@DESKTOP-NL6I2OD:~/linkerdtest$ kubectl get po
NAME READY STATUS RESTARTS AGE
whoami1-6cccb5f954-mjmgg 3/3 Running 0 3h
whoami2-689d97cf87-7p5s7 3/3 Running 0 3h
whoami3-7689d5fbc6-7n7l9 3/3 Running 0 3h
From whoami2's netshoot container
[test0|default] ferdi@DESKTOP-NL6I2OD:~/linkerdtest$ kubectl exec -ti whoami2-689d97cf87-7p5s7 -c netshoot -- bash
bash-5.2# curl -v http://whoami1/
* Trying 10.43.253.230:80...
* Connected to whoami1 (10.43.253.230) port 80 (#0)
> GET / HTTP/1.1
> Host: whoami1
> User-Agent: curl/7.86.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< date: Sat, 10 Dec 2022 14:45:01 GMT
< content-length: 299
< content-type: text/plain; charset=utf-8
<
Hostname: whoami1-6cccb5f954-mjmgg
IP: 127.0.0.1
IP: 10.42.0.31
RemoteAddr: 10.42.0.31:57388
GET / HTTP/1.1
Host: whoami1
User-Agent: curl/7.86.0
Accept: */*
L5d-Client-Id: whoami2.default.serviceaccount.identity.linkerd.cluster.local
L5d-Dst-Canonical: whoami1.default.svc.cluster.local:80
* Connection #0 to host whoami1 left intact
bash-5.2#
Yeah! It works!
From whoami3's netshoot container
[test0|default] ferdi@DESKTOP-NL6I2OD:~/linkerdtest$ kubectl exec -ti whoami3-7689d5fbc6-7n7l9 -c netshoot -- bash
bash-5.2# curl -v http://whoami1
* Trying 10.43.253.230:80...
* Connected to whoami1 (10.43.253.230) port 80 (#0)
> GET / HTTP/1.1
> Host: whoami1
> User-Agent: curl/7.86.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< date: Sat, 10 Dec 2022 14:46:24 GMT
< content-length: 299
< content-type: text/plain; charset=utf-8
<
Hostname: whoami1-6cccb5f954-mjmgg
IP: 127.0.0.1
IP: 10.42.0.31
RemoteAddr: 10.42.0.31:51432
GET / HTTP/1.1
Host: whoami1
User-Agent: curl/7.86.0
Accept: */*
L5d-Client-Id: whoami3.default.serviceaccount.identity.linkerd.cluster.local
L5d-Dst-Canonical: whoami1.default.svc.cluster.local:80
* Connection #0 to host whoami1 left intact
bash-5.2#
This one works, too!
A connection from an unauthorized resource, within the cluster
Whoo-hoo, I'm a bad guy trying to reach our precious whoami1
[test0|default] ferdi@DESKTOP-NL6I2OD:~/linkerdtest$ kubectl run badguy -ti --rm --image=nicolaka/netshoot -- bash
If you don't see a command prompt, try pressing enter.
bash-5.2# curl -v http://whoami1
* Trying 10.43.253.230:80...
* Connected to whoami1 (10.43.253.230) port 80 (#0)
> GET / HTTP/1.1
> Host: whoami1
> User-Agent: curl/7.86.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 403 Forbidden
< content-length: 0
< date: Sat, 10 Dec 2022 14:49:24 GMT
<
* Connection #0 to host whoami1 left intact
bash-5.2#
... and I miserably fail to connect!
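As a final sanity check, the denied request also shows up on the server side; peeking at whoami1's Linkerd proxy logs (the container name is the one added by the injector) is one way to look for it, assuming the Deployment name used above:

kubectl logs deploy/whoami1 -c linkerd-proxy --tail=20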