Container Optimized Workflow for Tectonic by CoreOS (Now Red Hat)

Following our recent blog on Artifactory's integration with OpenShift, you can now deploy binaries hosted in Artifactory to Tectonic, another major enterprise-ready Kubernetes platform that specializes in running containerized microservices more securely.
Artifactory integrates with Tectonic to support end-to-end binary management, overcoming the complexity of working with different software package management systems such as Docker, npm, Conan (C++), and RPM, and providing consistency to your CI/CD workflow. In this blog, we will show you how to deploy binaries hosted in Artifactory to the Tectonic platform.
With the increase of polyglot programming in enterprises, a substantial amount of valuable metadata is emitted during the CI process and captured by Artifactory. One of the major advantages of leveraging this metadata (build info) is traceability from the Docker manifest all the way down to the application tier. For example, from the Docker build object (docker-manifest) it is easy to trace the CI job responsible for producing the application tier (such as a WAR file) that is part of the Docker image layer referenced by the manifest. Artifactory achieves this by using Docker build info. Artifactory also supports multi-site replication.
Deploying Your Docker Images on Artifactory to Tectonic
Prerequisite
Validate that the Tectonic instance is active.
The following steps show how to deploy Docker images hosted on JFrog Artifactory, used as your Kubernetes Docker registry, to Tectonic. In this example, we have created an example-deployment.yaml file to deploy a Docker image from Artifactory's Docker virtual repository to Tectonic.
- Create a secret with a Docker config to hold your Artifactory authentication token in your Docker registry project, by running the following command:
kubectl create secret docker-registry rt-registrykey --docker-server=docker-virtual.artifactory.com --docker-username=${USERNAME} --docker-password=${PASSWORD} --docker-email=test@test
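Under the hood, this command stores the credentials as a .dockerconfigjson payload inside the secret. A minimal sketch of that payload, with placeholder credentials, can be built locally to see what kubelet will present to the registry (the `auth` field is simply base64 of `username:password`):

```shell
# Sketch of the .dockerconfigjson payload that `kubectl create secret docker-registry`
# generates. USERNAME/PASSWORD are placeholders, not real credentials.
USERNAME=admin
PASSWORD=secret
AUTH=$(printf '%s:%s' "$USERNAME" "$PASSWORD" | base64)
cat <<EOF
{"auths":{"docker-virtual.artifactory.com":{"username":"$USERNAME","password":"$PASSWORD","email":"test@test","auth":"$AUTH"}}}
EOF
```

The registry host key in `auths` must match the `--docker-server` value, since kubelet looks credentials up by registry host.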
- Deploy the containerized microservice to Tectonic using the example-deployment.yaml file, by running the following command:
kubectl create -f example-deployment.yaml
The following example-deployment.yaml file is a customized template that can be edited according to your needs. Note that we have added the following references:
– The Deployment object points to the Docker image that is part of Artifactory's Docker virtual repository.
image: docker-virtual.artifactory.com/nginx:1.12
– The imagePullSecrets is set as follows:
imagePullSecrets:
  - name: rt-registrykey

example-deployment.yaml sample:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: example-deployment
  namespace: default
  labels:
    k8s-app: example
spec:
  replicas: 3
  template:
    metadata:
      labels:
        k8s-app: example
    spec:
      containers:
        - name: nginx
          image: docker-virtual.artifactory.com/nginx:1.12
          ports:
            - name: http
              containerPort: 80
      imagePullSecrets:
        - name: rt-registrykey
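Note that the registry host portion of the `image:` reference is what gets matched against the pull secret's `--docker-server`. A quick shell sketch of how the reference decomposes:

```shell
# Decompose an image reference into registry host, repository, and tag.
# The host part must match the pull secret's --docker-server for auth to apply.
IMAGE="docker-virtual.artifactory.com/nginx:1.12"
REGISTRY=${IMAGE%%/*}   # registry host
REST=${IMAGE#*/}        # repository:tag
REPO=${REST%:*}         # repository
TAG=${REST##*:}         # tag
echo "$REGISTRY $REPO $TAG"
```

If the hosts differ (for example, a secret created for a different Artifactory repository endpoint), the pull fails with an authentication error.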
Following successful authentication, the image is pulled from the Artifactory Docker virtual repository.
- Log in to Tectonic UI > Deployments to view the Docker deployment status.

- Create service and ingress objects to access the nginx service using the following example:
kind: Service
apiVersion: v1
metadata:
  name: example-service
  namespace: default
spec:
  selector:
    k8s-app: example
  ports:
    - protocol: TCP
      port: 80
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "tectonic"
    ingress.kubernetes.io/rewrite-target: /
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/use-port-in-redirects: "true"
spec:
  rules:
    - host: console.tectonicsandbox.com
      http:
        paths:
          - path: /example-deployment
            backend:
              serviceName: example-service
              servicePort: 80
- Access the Tectonic console and run the kubectl get all command to view the status of the service and ingress objects.
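Once the objects are up, you can also verify the deployment from outside the cluster by polling the ingress endpoint. A minimal sketch, using the host and path from the sample manifests (adjust both to your environment):

```shell
# Poll a URL until it returns HTTP 200 or the retry budget is exhausted.
wait_for_http() {
  url=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
    [ "$code" = "200" ] && return 0
    i=$((i + 1))
    sleep 2
  done
  return 1
}

# Example (host/path taken from the sample ingress):
# wait_for_http "https://console.tectonicsandbox.com/example-deployment"
```

A 200 response confirms the full chain: ingress routing, the service selector, and the nginx pods pulled from Artifactory.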



