# Tractus-X Provisioning Agent

A Knowledge-Agents compatible data binding layer.

**Homepage:** <https://github.com/eclipse-tractusx/knowledge-agents/>
## TL;DR

```console
$ helm repo add eclipse-tractusx https://eclipse-tractusx.github.io/charts/dev
$ helm install my-release eclipse-tractusx/provisioning-agent --version 1.9.6-SNAPSHOT
```
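The chart ships a single sample binding (`dtc`) backed by an embedded H2 database; all binding fields used below (`path`, `port`, `ontology`, `settings`, `mapping`) are documented in the Values table. The following values.yaml is a minimal sketch of replacing the sample with your own binding: the binding name `telemetry`, the PostgreSQL coordinates, and the mapping are placeholder assumptions, and the sketch presumes a PostgreSQL JDBC driver is available in the image. Prefer secret references (see `envSecretName`) over inline passwords.

```yaml
# values.yaml -- illustrative sketch only, not a chart default
bindings:
  # disable the bundled DTC sample binding
  dtc: {}
  # hypothetical binding against an external PostgreSQL backend
  telemetry:
    port: 8080
    path: "(/|$)(.*)"
    ontology: cx-ontology.xml
    settings:
      jdbc.driver: org.postgresql.Driver                          # assumes the driver ships with the image
      jdbc.url: "jdbc:postgresql://telemetry-db:5432/telemetry"   # placeholder host and database
      ontop.cardinalityMode: LOOSE
    mapping: |
      [PrefixDeclaration]
      cx:   https://w3id.org/catenax/ontology#
      xsd:  http://www.w3.org/2001/XMLSchema#

      [MappingDeclaration] @collection [[
      mappingId  telemetry-meta
      target     cx:BusinessPartner/{bpnl} rdf:type cx:BusinessPartner ; cx:BPNL {bpnl}^^xsd:string .
      source     SELECT DISTINCT "bpnl" FROM "telemetry"."meta"
      ]]
```

```console
$ helm install my-release eclipse-tractusx/provisioning-agent --version 1.9.6-SNAPSHOT -f values.yaml
```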
## Maintainers

| Name | Url |
|------|-----|
| Tractus-X Knowledge Agents Team | |
## Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | Affinity constrains which nodes the Pod can be scheduled on, based on node labels. |
| automountServiceAccountToken | bool | `false` | Whether to automount Kubernetes API credentials into the pod |
| autoscaling.enabled | bool | `false` | Enables horizontal pod autoscaling |
| autoscaling.maxReplicas | int | `100` | Maximum number of replicas if resource consumption exceeds the resource thresholds |
| autoscaling.minReplicas | int | `1` | Minimum number of replicas if resource consumption falls below the resource thresholds |
| autoscaling.targetCPUUtilizationPercentage | int | `80` | targetAverageUtilization of CPU provided to a pod |
| autoscaling.targetMemoryUtilizationPercentage | int | `80` | targetAverageUtilization of memory provided to a pod |
| bindings.dtc | object | `{"mapping":"[PrefixDeclaration]\ncx:\t\t\thttps://w3id.org/catenax/ontology#\ncx-diag:\thttps://w3id.org/catenax/ontology/diagnosis#\nowl:\t\thttp://www.w3.org/2002/07/owl#\nrdf:\t\thttp://www.w3.org/1999/02/22-rdf-syntax-ns#\nxml:\t\thttp://www.w3.org/XML/1998/namespace\nxsd:\t\thttp://www.w3.org/2001/XMLSchema#\nobda:\t\thttps://w3id.org/obda/vocabulary#\nrdfs:\t\thttp://www.w3.org/2000/01/rdf-schema#\n\n[MappingDeclaration] @collection [[\nmappingId\tdtc-meta\ntarget\t\tcx:BusinessPartner/{bpnl} rdf:type cx:BusinessPartner ; cx:BPNL {bpnl}^^xsd:string . \nsource\t\tSELECT distinct \"bpnl\" FROM \"dtc\".\"meta\"\n\nmappingId\tdtc-content\ntarget\t\tcx-diag:DTC/{id} rdf:type cx-diag:DTC ; cx-diag:Code {code}^^xsd:string ; cx-diag:Description {description}^^xsd:string ; cx-diag:PossibleCauses {possible_causes}^^xsd:string ; cx-diag:Version {lock_version}^^xsd:long . \nsource\t\tSELECT * FROM \"dtc\".\"content\"\n\nmappingId\tdtc-meta-content\ntarget\t\tcx-diag:DTC/{id} cx:provisionedBy cx:BusinessPartner/{bpnl}. \nsource\t\tSELECT \"bpnl\",\"id\" FROM \"dtc\".\"content\"\n\nmappingId\tdtc-part\ntarget\t\tcx-diag:DiagnosedPart/{entityGuid} rdf:type cx-diag:DTCPart ; cx-diag:EnDenomination {enDenomination}^^xsd:string ; cx-diag:Classification {classification}^^xsd:string ; cx-diag:Category {category}^^xsd:string.\nsource\t\tSELECT * FROM \"dtc\".\"part\"\n\nmappingId\tdtc-part-content\ntarget\t\tcx-diag:DTC/{dtc_id} cx-diag:affects cx-diag:DiagnosedPart/{part_entityGuid}. \nsource\t\tSELECT \"part_entityGuid\",\"dtc_id\" FROM \"dtc\".\"content_part\"\n\nmappingId\tdtc-meta-part\ntarget\t\tcx-diag:DiagnosedPart/{entityGuid} cx:provisionedBy cx:BusinessPartner/{bpnl}. \nsource\t\tSELECT \"bpnl\",\"entityGuid\" FROM \"dtc\".\"part\"\n]]","ontology":"cx-ontology.xml","path":"(/\|$)(.*)","port":8080,"settings":{"jdbc.driver":"org.h2.Driver","jdbc.url":"jdbc:h2:file:/opt/ontop/database/db;INIT=RUNSCRIPT FROM '/opt/ontop/data/dtc.sql'","ontop.cardinalityMode":"LOOSE"}}` | Diagnostic trouble code (DTC) sample endpoint/binding; to disable it, simply set `dtc: {}` in your values.yaml (see the example under TL;DR) |
| bindings.dtc.path | string | `"(/\|$)(.*)"` | Potential ingress path |
| bindings.dtc.port | int | `8080` | Exposed service port for the binding |
| bindings.dtc.settings | object | `{"jdbc.driver":"org.h2.Driver","jdbc.url":"jdbc:h2:file:/opt/ontop/database/db;INIT=RUNSCRIPT FROM '/opt/ontop/data/dtc.sql'","ontop.cardinalityMode":"LOOSE"}` | Settings for the binding, including JDBC backend connections and metadata directives; use secret references instead of putting plain-text passwords here |
| customLabels | object | `{}` | Additional custom labels to add |
| env | object | `{}` | Container environment variables, e.g. for configuring JAVA_TOOL_OPTIONS. Example: `JAVA_TOOL_OPTIONS: > -Dhttp.proxyHost=proxy -Dhttp.proxyPort=80 -Dhttp.nonProxyHosts="localhost\|127.*\|[::1]" -Dhttps.proxyHost=proxy -Dhttps.proxyPort=443` |
| envSecretName | string | `nil` | Name of the Kubernetes Secret resource to load environment variables from |
| fullnameOverride | string | `""` | Overrides the release's full name |
| image.digest | string | `""` | Overrides the image digest |
| image.pullPolicy | string | `"IfNotPresent"` | Image pull policy |
| image.pullSecrets | list | `[]` | Image pull secrets for accessing private registries |
| image.registry | string | `"docker.io"` | Target registry |
| image.repository | string | `"tractusx/provisioning-agent"` | Which derivative of the agent to use |
| image.tag | string | `""` | Overrides the image tag whose default is the chart appVersion |
| ingresses[0].annotations | string | `nil` | Additional ingress annotations to add; for example, when implementing more complex routings you may set `{ nginx.ingress.kubernetes.io/rewrite-target: /$2, nginx.ingress.kubernetes.io/use-regex: "true" }` |
| ingresses[0].certManager.clusterIssuer | string | `""` | If present, enables certificate generation via a cert-manager cluster-wide issuer |
| ingresses[0].certManager.issuer | string | `""` | If present, enables certificate generation via a cert-manager namespace-scoped issuer |
| ingresses[0].className | string | `""` | Defines the ingress class to use |
| ingresses[0].enabled | bool | `false` | Whether this ingress resource is enabled (see the example below this table) |
| ingresses[0].endpoints | list | `["dtc"]` | Agent endpoints exposed by this ingress resource |
| ingresses[0].hostname | string | `"provisioning-agent.local"` | The hostname used to map incoming traffic onto the underlying network service |
| ingresses[0].prefix | string | `""` | Optional prefix prepended to the paths of the endpoints |
| ingresses[0].tls | object | `{"enabled":false,"secretName":""}` | TLS configuration applied to the ingress resource |
| ingresses[0].tls.enabled | bool | `false` | Enables TLS on the ingress resource |
| ingresses[0].tls.secretName | string | `""` | If present, overwrites the default secret name |
| livenessProbe.enabled | bool | `true` | Whether to enable the Kubernetes liveness probe |
| livenessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the probe to be considered failed after having succeeded |
| livenessProbe.periodSeconds | int | `60` | Number of seconds each probe period lasts |
| livenessProbe.timeoutSeconds | int | `5` | Number of seconds until a timeout is assumed |
| nameOverride | string | `""` | Overrides the chart's name |
| nodeSelector | object | `{}` | Node selector to constrain the Pod to nodes with specific labels |
| ontologies | object | `{"cx-ontology.ttl":"resources/cx-ontology.ttl","cx-ontology.xml":"resources/cx-ontology.xml"}` | Ontologies to be included |
| podAnnotations | object | `{}` | Annotations added to deployed pods |
| podSecurityContext.fsGroup | int | `30000` | Volumes and any files created within them will be owned by this group id (gid) |
| podSecurityContext.runAsGroup | int | `30000` | Processes within a pod will belong to this group id (gid) |
| podSecurityContext.runAsUser | int | `10001` | Runs all processes within a pod with this user id (uid) |
| podSecurityContext.seccompProfile.type | string | `"RuntimeDefault"` | Restricts the container's syscalls with seccomp |
| readinessProbe.enabled | bool | `true` | Whether to enable the Kubernetes readiness probe |
| readinessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the probe to be considered failed after having succeeded |
| readinessProbe.periodSeconds | int | `300` | Number of seconds each probe period lasts |
| readinessProbe.timeoutSeconds | int | `5` | Number of seconds until a timeout is assumed |
| replicaCount | int | `1` | Specifies how many replicas of a deployed pod shall be created during the deployment. Note: if horizontal pod autoscaling is enabled, this setting has no effect |
| resources | object | `{"limits":{"cpu":"900m","memory":"512Mi"},"requests":{"cpu":"500m","memory":"512Mi"}}` | Resource management applied to the deployed pod. We recommend 0.5 CPU and 0.5Gi of memory per exposed endpoint |
| securityContext.allowPrivilegeEscalation | bool | `false` | Controls privilege escalation, i.e. whether setuid binaries may change the effective user ID |
| securityContext.capabilities.add | list | `["NET_BIND_SERVICE"]` | Capabilities to add in order to issue specialized syscalls |
| securityContext.capabilities.drop | list | `["ALL"]` | Capabilities to drop to reduce the syscall attack surface |
| securityContext.readOnlyRootFilesystem | bool | `true` | Whether the root filesystem is mounted in read-only mode |
| securityContext.runAsGroup | int | `30000` | The container's process will run with the specified group id (gid) |
| securityContext.runAsNonRoot | bool | `true` | Requires the container to run without root privileges |
| securityContext.runAsUser | int | `10001` | The container's process will run with the specified user id (uid) |
| service.type | string | `"ClusterIP"` | Service type used to expose the running application on a set of pods as a network service |
| serviceAccount.annotations | object | `{}` | Annotations to add to the service account |
| serviceAccount.create | bool | `true` | Specifies whether a service account should be created per release |
| serviceAccount.name | string | `""` | The name of the service account to use. If not set and create is true, a name is generated using the release's fullname template |
| startupProbe.enabled | bool | `true` | Whether to enable the Kubernetes startup probe |
| startupProbe.failureThreshold | int | `18` | Minimum consecutive failures for the probe to be considered failed after having succeeded |
| startupProbe.initialDelaySeconds | int | `60` | Number of seconds after the container has started before startup probes are initiated |
| startupProbe.periodSeconds | int | `30` | Number of seconds each probe period lasts |
| startupProbe.timeoutSeconds | int | `5` | Number of seconds until a timeout is assumed |
| tolerations | list | `[]` | Tolerations are applied to pods to schedule them onto nodes with matching taints |
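For reference, a hedged sketch of exposing the sample `dtc` endpoint through an ingress, using only the `ingresses[0].*` keys documented above; the hostname, ingress class, and issuer are placeholders and assume an NGINX ingress controller plus cert-manager in the cluster:

```yaml
ingresses:
  - enabled: true
    hostname: provisioning-agent.example.com    # placeholder hostname
    endpoints:
      - dtc
    className: nginx                            # assumes an nginx ingress controller
    certManager:
      clusterIssuer: my-cluster-issuer          # placeholder cert-manager issuer
    tls:
      enabled: true
      secretName: ""                            # leave empty to use the default secret name
```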
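To smoke-test a deployed binding without an ingress, you can port-forward its service and query it directly. This sketch assumes the Ontop-based binding answers SPARQL requests under `/sparql` on the configured port and that the service is named after the release; adjust the service name and path to your setup:

```console
$ kubectl port-forward svc/my-release-provisioning-agent-dtc 8080:8080 &
$ curl -X POST http://localhost:8080/sparql \
    -H "Content-Type: application/sparql-query" \
    -H "Accept: application/sparql-results+json" \
    --data 'SELECT * WHERE { ?s ?p ?o } LIMIT 5'
```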
*Autogenerated from chart metadata using helm-docs v1.11.0*