Commit 09fb3cb

CP 6.0 update

1 parent 8feadfb commit 09fb3cb
File tree

131 files changed

+13885
-38
lines changed


‎.DS_Store

2 KB
Binary file not shown.

‎infrastructure/.DS_Store

2 KB
Binary file not shown.

‎infrastructure/README.md

Lines changed: 5 additions & 4 deletions
@@ -18,14 +18,14 @@ The following components are required on your laptop to provision and install the

 * [jq](https://stedolan.github.io/jq/): Lightweight and flexible command-line JSON processor, e.g. `brew install jq`
 * [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/): Kubernetes CLI to deploy applications, inspect and manage cluster resources, e.g. `brew install kubernetes-cli` (tested with 1.16.0)
-* [Helm](https://helm.sh/): Helps you manage Kubernetes applications - Helm Charts help you define, install, and upgrade even the most complex Kubernetes application, e.g. `brew install kubernetes-helm` (tested with 3.0.1). Please note that we already use Helm 3 (no Tiller!) instead of the painful Helm 2.x with Tiller.
+* [Helm](https://helm.sh/): Helps you manage Kubernetes applications - Helm Charts help you define, install, and upgrade even the most complex Kubernetes application, e.g. `brew install kubernetes-helm` (tested with 3.0.2). Please note that we already use Helm 3 (no Tiller!) instead of the painful Helm 2.x with Tiller.
 * [terraform (0.12)](https://www.terraform.io/downloads.html): Enables you to safely and predictably create, change, and improve infrastructure (infrastructure independent, but currently only implemented GCP setup), e.g. `brew install terraform`
 * [gcloud](https://cloud.google.com/sdk/docs/quickstart-macos): Tool that provides the primary CLI to Google Cloud Platform, e.g. (always run `gcloud init` first)
 * `wget`

 Make sure to have up-to-date versions (see the tested versions above). For instance, an older version of helm or kubectl CLI did not work well and threw (sometimes confusing) exceptions.

-The setup is tested on Mac OS X. We used HiveMQ 4.2.2 and Confluent Platform 5.4 (with Apache Kafka 2.4).
+The setup is tested on Mac OS X. We used HiveMQ 4.2.2 and Confluent Platform 5.4, 5.5.1 and 6.0.0.

## Configure GCP Account and Project

@@ -50,16 +50,17 @@ The setup is tested on Mac OS X. We used HiveMQ 4.2.2 and Confluent Platform 5.4
 3. Monitoring and interactive queries
 * Go to [confluent](confluent) directory
 * Use the hints to connect Confluent Control Center, Grafana, Prometheus for monitoring or working with KSQL CLI for interactive queries
-4. You can also connect to Grafana and observe Cluster and application-level metrics for HiveMQ and the Device Simulator: `kubectl port-forward -n monitoring service/prom-grafana 3000:service`
+4. You can also connect to Grafana and observe cluster and application-level metrics for HiveMQ and the Device Simulator: `kubectl -n monitoring port-forward service/prometheus-grafana 3000:80`
 5. Streaming Model Training and Inference with Kafka and TensorFlow IO: This includes separate steps which are explained here: [python-scripts/README.md](../python-scripts/README.md).

 For more details about the demo, UIs, customization of the setup, monitoring, etc., please go to the subfolders of the components: [terraform-gcp](terraform-gcp), [confluent](confluent), [hivemq](hivemq), [test-generator](test-generator), [tensorflow-io](python-scripts/README.md).

 ## Deletion of Demo Infrastructure

 When done with the demo, go to `terraform-gcp` directory and run `terraform destroy` to stop and remove the created Kubernetes infrastructure.
+If terraform asks you to run `terraform destroy` again, please do it. The second `terraform destroy` deletes all remaining resources; terraform may output `Error: googleapi: Error 404: Not found: projects/xperryment/locations/europe-west1/clusters/car-demo-cluster., notFound`, but this is okay and expected.

-`Doublecheck the 'disks' in your GCP console`. If you had some errors, the script might not be able to delete all SDDs!
+Double-check the disks in your GCP console with `gcloud compute disks list`. If you had some errors, the script might not be able to delete all SSDs!

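Editor's note: the teardown flow described above can be sketched as follows (assumptions: it is run from the `terraform-gcp` directory, and `is_expected_404` is a hypothetical helper, not part of the repo):

```bash
#!/bin/sh
# The second destroy may report a googleapi 404 for the already-deleted
# cluster; that specific error is expected and can be treated as success.
is_expected_404() {
 printf '%s' "1ドル" | grep -q 'googleapi: Error 404: Not found'
}

# Intended usage (commented out so the sketch has no cloud dependencies):
# terraform destroy # first pass
# out=$(terraform destroy -auto-approve 2>&1) # second pass
# is_expected_404 "$out" && echo "cluster already gone (expected)"
# gcloud compute disks list # double-check for leftover SSDs
```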
### Open Source and License Requirements

‎infrastructure/confluent/01_installConfluentPlatform.sh

Lines changed: 7 additions & 3 deletions
@@ -32,9 +32,13 @@ else
 #tar -xvf confluent-operator-20190912-v0.65.1.tar.gz
 #rm confluent-operator-20190912-v0.65.1.tar.gz
 # CP 5.4
-wget https://platform-ops-bin.s3-us-west-1.amazonaws.com/operator/confluent-operator-20200115-v0.142.1.tar.gz
-tar -xvf confluent-operator-20200115-v0.142.1.tar.gz
-rm confluent-operator-20200115-v0.142.1.tar.gz
+#wget https://platform-ops-bin.s3-us-west-1.amazonaws.com/operator/confluent-operator-20200115-v0.142.1.tar.gz
+#tar -xvf confluent-operator-20200115-v0.142.1.tar.gz
+#rm confluent-operator-20200115-v0.142.1.tar.gz
+# CP 6.0
+wget https://platform-ops-bin.s3-us-west-1.amazonaws.com/operator/confluent-operator-1.6.0-for-confluent-platform-6.0.0.tar.gz
+tar -xvf confluent-operator-1.6.0-for-confluent-platform-6.0.0.tar.gz
+rm confluent-operator-1.6.0-for-confluent-platform-6.0.0.tar.gz

 cp ${MYDIR}/gcp.yaml helm/providers/
 fi

‎infrastructure/confluent/README.md

Lines changed: 66 additions & 3 deletions
@@ -81,7 +81,11 @@ security.protocol=SASL_PLAINTEXT
 Query the bootstrap server:

 ```bash
+# get the version of the Confluent Platform
+kafka-topics --version
 kafka-broker-api-versions --command-config kafka.properties --bootstrap-server kafka:9071
+# list topics: internal and created topics
+kafka-topics --list --command-config kafka.properties --bootstrap-server kafka:9071
 ```

#### Test KSQL (Data Analysis and Processing)
@@ -221,7 +225,20 @@ kubefwd is generating for all k8s services an Port forwarding and add in /etc/hos

 ### Test Control Center (Monitoring) with external access

-Use your browser and go to [http://controlcenter:9021](http://controlcenter:9021) enter the Username=admin and Password=Developer1.
+Please be aware that we now have a couple of services exposed on port 80:
+```bash
+kubectl get services -n operator | grep LoadBalancer
+connect-bootstrap-lb LoadBalancer 10.31.251.131 34.78.14.84 80:30780/TCP 40m
+controlcenter-bootstrap-lb LoadBalancer 10.31.251.69 34.77.215.100 80:30240/TCP 37m
+kafka-0-lb LoadBalancer 10.31.240.64 34.76.110.6 9092:30800/TCP 42m
+kafka-1-lb LoadBalancer 10.31.254.180 34.78.57.254 9092:30587/TCP 42m
+kafka-2-lb LoadBalancer 10.31.240.145 34.78.44.127 9092:32690/TCP 42m
+kafka-bootstrap-lb LoadBalancer 10.31.251.160 34.76.43.198 9092:31051/TCP 42m
+ksql-bootstrap-lb LoadBalancer 10.31.243.225 34.78.203.230 80:30109/TCP 38m
+schemaregistry-bootstrap-lb LoadBalancer 10.31.254.192 34.77.237.172 80:30828/TCP 41m
+```
+
+Use your browser, go to [http://controlcenter:80](http://controlcenter:80) and enter Username=admin and Password=Developer1.

 Please note that https (the default in most web browsers) is not configured; explicitly type http://URL:port.
 (Here you can use also KSQL)

@@ -232,13 +249,57 @@ Please follow the Confluent documentation [External Access](https://docs.conflue

 ## Kubernetes Dashboard

+* Create a service account:
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: admin-user
+ namespace: kubernetes-dashboard
+EOF
+```
+* Create a ClusterRoleBinding:
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: admin-user
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: cluster-admin
+subjects:
+- kind: ServiceAccount
+ name: admin-user
+ namespace: kubernetes-dashboard
+EOF
+```
+* Create a token: `kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')`
+* Login to the K8s dashboard using the token from `gcloud config config-helper --format=json | jq -r '.credential.access_token'`
 * Run `kubectl proxy &`
 * Go to [K8s dashboard](http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/)
-* Login to K8s dashboard using The token from `gcloud config config-helper --format=json | jq -r '.credential.access_token'`
+* Login to the K8s dashboard using the token from `kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')`
+
+Note: with Chrome I got the message after logging in with the token: `Mark cross-site cookies as Secure to allow setting them in cross-site contexts`. With Safari I did not have this problem.
+Please also check, after you are finished with your demo, whether `kubectl proxy` is still running: `ps -ef | grep proxy`. If yes, kill the process: `kill -9 <process id>`

 ## Grafana Dashboards

-* Forward local port to Grafana service: `kubectl -n monitoring port-forward service/prom-grafana 3000:service`
+Get the services from the monitoring namespace:
+```bash
+kubectl get services -n monitoring
+alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 59m
+prometheus-grafana ClusterIP 10.31.240.63 <none> 80/TCP 59m
+prometheus-kube-state-metrics ClusterIP 10.31.240.46 <none> 8080/TCP 59m
+prometheus-operated ClusterIP None <none> 9090/TCP 59m
+prometheus-prometheus-node-exporter ClusterIP 10.31.245.236 <none> 9100/TCP 59m
+prometheus-prometheus-oper-alertmanager ClusterIP 10.31.248.114 <none> 9093/TCP 59m
+prometheus-prometheus-oper-operator ClusterIP 10.31.250.49 <none> 8080/TCP,443/TCP 59m
+prometheus-prometheus-oper-prometheus ClusterIP 10.31.248.245 <none> 9090/TCP 59m
+```
+
+* Forward local port to Grafana service: `kubectl -n monitoring port-forward service/prometheus-grafana 3000:80`
 * Go to [localhost:3000](http://localhost:3000) (login: admin, prom-operator)
 * Dashboards will be deployed automatically (if they are not visible, bounce the deployment by deleting the current Grafana pod. It will reload the ConfigMaps after it restarts.)

@@ -248,6 +309,8 @@ For more details, follow the examples of how to use and play with Confluent Plat

 ## Destroy Confluent Platform from GKE

+Please use the following script:
+
 * Run the script 02_deleteConfluentPlatform.sh to delete the Confluent Platform from GKE

 ```bash

‎infrastructure/terraform-gcp/.gitignore

Lines changed: 3 additions & 1 deletion
@@ -5,4 +5,6 @@
 *.tfstate
 *.tfstate.*

-account.json
+account.json
+
+.DS_Store
Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+Copyright 2018 Confluent, Inc.
Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
+connect: 6.0.0.0
+controlcenter: 6.0.0.0
+externaldns: v0.5.14
+kafka: 6.0.0.0
+ksql: 6.0.0.0
+operator: 0.419.0
+replicator: 6.0.0.0
+schemaregistry: 6.0.0.0
+zookeeper: 6.0.0.0
+init-container: 6.0.0.0
Lines changed: 68 additions & 0 deletions
@@ -0,0 +1,68 @@
+# Monitoring
+
+All Confluent Platform (CP) components deployed through the Confluent Operator expose metrics that can be scraped by
+Prometheus. This folder contains an example Grafana metrics dashboard for all components except Confluent Control Center. For
+production environments, you may need to modify the example dashboard to meet your needs. Follow best practices for
+managing your Prometheus and Grafana deployments. Completing the following instructions will help you understand what the example
+dashboard can display for you.
+
+These instructions were last verified with:
+
+* Helm v3.0.2
+* Prometheus Helm chart v9.7.2 (app version 2.13.1)
+* Grafana Helm chart v4.2.2 (app version 6.5.2)
+
+## Install Prometheus
+
+ helm install demo-test stable/prometheus \
+ --set alertmanager.persistentVolume.enabled=false \
+ --set server.persistentVolume.enabled=false \
+ --namespace default
+
+## Install Grafana
+
+ helm install grafana stable/grafana --namespace default
+
+## Open Grafana in your Browser
+
+Start port-forwarding so you can access Grafana in your browser with a `localhost` address:
+
+ kubectl port-forward \
+ $(kubectl get pods -n default -l app=grafana,release=grafana -o jsonpath={.items[0].metadata.name}) \
+ 3000 \
+ --namespace default
+
+Get your 'admin' user password:
+
+ kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode
+
+Visit http://localhost:3000 in your browser, and login as the `admin` user with the decoded password.
+
+## Configure Grafana with a Prometheus Data Source
+
+Follow the in-browser instructions to configure a Prometheus data source for Grafana, or consult the
+[online documentation](https://prometheus.io/docs/visualization/grafana/#creating-a-prometheus-data-source). You will be asked
+to provide a URL. Enter the URL as shown below:
+
+ http://demo-test-prometheus-server.default.svc.cluster.local
+
+Click "Save & Test". You should see a green alert at the bottom of the page saying "Data source
+is working".
+
+## Import Grafana Dashboard Configuration
+
+Follow the in-browser instructions to import a dashboard JSON configuration, or consult the
+[online documentation](https://grafana.com/docs/grafana/latest/reference/export_import/#importing-a-dashboard). Select the
+`grafana-dashboard.json` file located in this folder, and then select the previously-configured Prometheus data source.
+
+## Explore Grafana Dashboard
+
+The following five CP component rows should be displayed:
+
+* Confluent Kafka (16 panels)
+* Confluent Kafka Connect/Replicator (9 panels)
+* Confluent KSQL Server (6 panels)
+* Confluent Schema Registry (4 panels)
+* Confluent Zookeeper (10 panels)
+
+You can expand these rows to see a variety of metrics for each CP component.
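Editor's note: the password step above pipes the secret value through `base64 --decode`; a quick round-trip illustration with a dummy value (your cluster's secret will differ):

```bash
#!/bin/sh
# Encode a dummy password the way Kubernetes stores secret data, then
# decode it the way the kubectl pipeline above does.
encoded=$(printf 'prom-operator' | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded" # prints: prom-operator
```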

0 commit comments
