
· One min read
Fog Dong

ChatGPT is taking the tech industry by storm, thanks to its unparalleled natural language processing capabilities. As a powerful AI language model, it has the ability to understand and generate human-like responses, revolutionizing communication in various industries. From streamlining customer service chatbots to enabling seamless language translation tools, ChatGPT has already proved its mettle in creating innovative solutions that improve efficiency and user experience.

Now the question is, can we leverage ChatGPT to transform the way we deliver applications? With the integration of ChatGPT into DevOps workflows, we are witnessing the possible emergence of a new era of automation called PromptOps. This advancement in AIOps technology is revolutionizing the way businesses operate, allowing for faster and more efficient application delivery.

In this article, we will explore how to integrate ChatGPT into your DevOps workflow to deliver applications.

Integrate ChatGPT into Your DevOps Workflow

When it comes to integrating ChatGPT into DevOps workflows, many developers face the challenge of managing extra resources and writing complicated shell scripts. However, there is a better way: KubeVela Workflow. This open-source cloud-native workflow project offers a streamlined solution that eliminates the need for pods or complex scripting.

In KubeVela Workflow, every step has a type that can be abstracted and reused. Step types are programmed in the CUE language, so atomic capabilities can be customized and invoked in every step like function calls. With atomic capabilities such as HTTP requests already available, integrating ChatGPT takes only about five minutes of writing a new step type.

Check out the Installation Guide to get started with KubeVela Workflow. The complete code of the chat-gpt step type is available on GitHub.

Now that we have chosen the right tool, let's see what ChatGPT can do in application delivery.

Case 1: Diagnose the resources

It's quite common in the DevOps world to encounter problems like "I don't know why the pod is not running" or "I don't know why the service is not available". In this case, we can use ChatGPT to diagnose the resource.

For example, in our workflow, we can apply a Deployment with an invalid image in the first step. Since the deployment will never become ready, we add a timeout to the step so the workflow does not get stuck there. Then we pass the unhealthy resource deployed in the first step to the second step, where the chat-gpt step type diagnoses it to determine the issue. Note that the second step is only executed if the first one fails.

The process of diagnosing the resource in the workflow

The complete workflow is shown below:

apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: chat-gpt-diagnose
  namespace: default
spec:
  workflowSpec:
    steps:
      # Apply an invalid deployment with a timeout
      - name: apply
        type: apply-deployment
        timeout: 3s
        properties:
          image: invalid
        # output the resource to the next step
        outputs:
          - name: resource
            valueFrom: output.value

      # Use chat-gpt to diagnose the resource
      - name: chat-diagnose
        # only execute this step if the `apply` step fails
        if: status.apply.failed
        type: chat-gpt
        # use the resource as inputs and pass it to prompt.content
        inputs:
          - from: resource
            parameterKey: prompt.content
        properties:
          token:
            value: <your token>
          prompt:
            type: diagnose

Apply this Workflow and check the result: the first step fails because of the timeout, then the second step is executed and ChatGPT's answer is shown in the log:

vela workflow logs chat-gpt-diagnose

The logs of the diagnose step

Visualize in the dashboard

If you want to visualize the process and the result in the dashboard, it's time to enable the [velaux](https://kubevela.io/docs/reference/addons/velaux#install) addon.

vela addon enable velaux

Copy all the steps in the YAML above to create a pipeline.

Create the pipeline in VelaUX

Run this pipeline, and you can check out the failed reason analyzed by ChatGPT in the logs of the second step.

Run the pipeline in VelaUX

Write the chat-gpt step from scratch

How is this chat-gpt step type written? Is it hard to write a step type like this yourself? Let's walk through it from scratch.

First, define what this step type needs from the user: the user's token for ChatGPT and the resource to diagnose. For other parameters such as the model or the request timeout, we can set default values with `*` as below:

parameter: {
  token: value: string
  // +usage=the model name
  model: *"gpt-3.5-turbo" | string
  // +usage=the prompt to use
  prompt: {
    type:    *"diagnose" | string
    lang:    *"English" | string
    content: {...}
  }
  timeout: *"30s" | string
}

Let's complete this step type by writing its logic. We first import the vela/op package, whose op.#HTTPDo capability sends a request to the ChatGPT API. If the request fails, the step should fail with op.#Fail. We can also set the step's log data to ChatGPT's answer. The complete step type is shown below:

// import packages
import (
  "vela/op"
  "encoding/json"
)

// this is the name of the step type
"chat-gpt": {
  description: "Send request to chat-gpt"
  type:        "workflow-step"
}

// this is the logic of the step type
template: {
  // send http request to chat gpt
  http: op.#HTTPDo & {
    method: "POST"
    url:    "https://api.openai.com/v1/chat/completions"
    request: {
      timeout: parameter.timeout
      body: json.Marshal({
        model: parameter.model
        messages: [{
          if parameter.prompt.type == "diagnose" {
            content: """
              You are a professional kubernetes administrator.
              Carefully read the provided information, being certain to spell out the diagnosis & reasoning, and don't skip any steps.
              Answer in \(parameter.prompt.lang).
              ---
              \(json.Marshal(parameter.prompt.content))
              ---
              What is wrong with this object and how to fix it?
              """
          }
          role: "user"
        }]
      })
      header: {
        "Content-Type":  "application/json"
        "Authorization": "Bearer \(parameter.token.value)"
      }
    }
  }

  response: json.Unmarshal(http.response.body)

  fail: op.#Steps & {
    if http.response.statusCode >= 400 {
      requestFail: op.#Fail & {
        message: "\(http.response.statusCode): failed to request: \(response.error.message)"
      }
    }
  }
  result: response.choices[0].message.content
  log: op.#Log & {
    data: result
  }
  parameter: {
    token: value: string
    // +usage=the model name
    model: *"gpt-3.5-turbo" | string
    // +usage=the prompt to use
    prompt: {
      type:    *"diagnose" | string
      lang:    *"English" | string
      content: {...}
    }
    timeout: *"30s" | string
  }
}

That's it! Apply this step type with the command below, and we can use it in our Workflow as shown above:

vela def apply chat-gpt.cue

Case 2: Audit the resource

Now ChatGPT is our Kubernetes expert and can diagnose resources. Can it also give us security advice on a resource? Definitely! It's just a matter of prompts. Let's modify the step type from the previous case to add an audit feature: we add a new prompt type, audit, and pass the resource to the prompt, as sketched below. You can check out the whole step type on GitHub.
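A minimal sketch of that new branch inside the step type's messages list; the audit prompt wording below is our illustration, not necessarily the exact text in the published definition on GitHub:

// next to the "diagnose" branch in template.http.request.body;
// the audit prompt wording here is illustrative
if parameter.prompt.type == "audit" {
  content: """
    You are a professional kubernetes security expert.
    Carefully review the provided object below.
    Answer in \(parameter.prompt.lang).
    ---
    \(json.Marshal(parameter.prompt.content))
    ---
    What security issues does this object have and how to fix them?
    """
}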

In the Workflow, we can apply a Deployment with the nginx image and pass it to the second step, which uses the audit prompt to audit the resource.

The process of auditing the resource in the workflow

The complete Workflow is shown below:

apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: chat-gpt-audit
  namespace: default
spec:
  workflowSpec:
    steps:
      - name: apply
        type: apply-deployment
        # output the resource to the next step
        outputs:
          - name: resource
            valueFrom: output.value
        properties:
          image: nginx

      - name: chat-audit
        type: chat-gpt
        # use the resource as inputs and pass it to prompt.content
        inputs:
          - from: resource
            parameterKey: prompt.content
        properties:
          token:
            value: <your token>
          prompt:
            type: audit


Use Diagnose & Audit in one Workflow

Now that we have the capability to diagnose and audit the resource, we can use both in one Workflow and use the if condition to control the execution of the steps: if the apply step fails, diagnose the resource; if it succeeds, audit it.

Use diagnose &amp; audit in one workflow

The complete Workflow is shown below:

apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: chat-gpt
  namespace: default
spec:
  workflowSpec:
    steps:
      - name: apply
        type: apply-deployment
        outputs:
          - name: resource
            valueFrom: output.value
        properties:
          image: nginx

      # if the apply step fails, then diagnose the resource
      - name: chat-diagnose
        if: status.apply.failed
        type: chat-gpt
        inputs:
          - from: resource
            parameterKey: prompt.content
        properties:
          token:
            value: <your token>
          prompt:
            type: diagnose

      # if the apply step succeeds, then audit the resource
      - name: chat-audit
        if: status.apply.succeeded
        type: chat-gpt
        inputs:
          - from: resource
            parameterKey: prompt.content
        properties:
          token:
            value: <your token>
          prompt:
            type: audit

Case 3: Use ChatGPT as a quality gate

Before applying resources to a production environment, can we let ChatGPT rate their quality first, and apply them to production only if the score is high enough? Absolutely!

Note that to make the score evaluated by chat-gpt more convincing, it's better to pass metrics rather than the raw resource in this case.

Let's write our Workflow. KubeVela Workflow can apply resources to multiple clusters. The first step applies the Deployment to the test environment; the second uses ChatGPT to rate the quality of the resource; if the quality is high enough, the third applies the resource to the production environment. The extra prompt branch that makes this possible is sketched below.
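Inside the step type, this needs one more prompt branch that forces a bare numeric answer, so that strconv.Atoi in the step's output can parse it. A minimal sketch, with prompt wording of our own:

// a "quality-gate" branch next to the diagnose/audit ones; the wording
// is an assumption, but the answer must be a bare number so that
// strconv.Atoi(result) in the workflow output can parse it
if parameter.prompt.type == "quality-gate" {
  content: """
    You are a professional kubernetes administrator.
    Rate the quality of the following object on a scale of 0 to 100.
    ---
    \(json.Marshal(parameter.prompt.content))
    ---
    Answer with the number only, without any other words.
    """
}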

The process of using a quality gate in the workflow

The complete Workflow is shown below:

apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: chat-gpt-quality-gate
  namespace: default
spec:
  workflowSpec:
    steps:
      # apply the resource to the test environment
      - name: apply
        type: apply-deployment
        # output the resource to the next step
        outputs:
          - name: resource
            valueFrom: output.value
        properties:
          image: nginx
          cluster: test

      - name: chat-quality-check
        # this step will always be executed
        if: always
        type: chat-gpt
        # get the inputs from resource and pass it to the prompt.content
        inputs:
          - from: resource
            parameterKey: prompt.content
        # output the score of ChatGPT and use strconv.Atoi to convert the score string to int
        outputs:
          - name: chat-result
            valueFrom: |
              import "strconv"
              strconv.Atoi(result)
        properties:
          token:
            value: <your token>
          prompt:
            type: quality-gate

      # if the score is higher than 60, then apply the resource to the production environment
      - name: apply-production
        type: apply-deployment
        # get the score from chat-result
        inputs:
          - from: chat-result
        # check if the score is higher than 60
        if: inputs["chat-result"] > 60
        properties:
          image: nginx
          cluster: prod

Apply this Workflow, and we can see that if the score is higher than 60, the resource is applied to the production environment.
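As in the diagnose case, ChatGPT's score and the gate decision can be inspected in the step logs:

vela workflow logs chat-gpt-quality-gate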

In the End

ChatGPT brings imagination to the world of Kubernetes. Diagnosing, auditing, and rating are just the beginning. In the new AI era, the most precious thing is the idea. What do you want to do with ChatGPT? Share your insights with us in the KubeVela Community.

· One min read
孙健波

KubeVela 1.7 has been officially released for some time, during which KubeVela was formally promoted to a CNCF incubating project, marking a new milestone. Version 1.7 is itself a turning point: because KubeVela has focused on an extensible architecture from the very beginning, the requirements on the controller's core functionality have gradually converged, freeing us to focus more on user experience, usability, and performance. In this article, we highlight key features of the 1.7 release such as workload takeover and performance optimization.

Take Over Your Workloads

Taking over existing workloads has long been a highly requested feature in the community, and the scenario is clear: existing workloads can be naturally migrated into the OAM standard system, managed uniformly by KubeVela's application delivery control plane, and reuse VelaUX's UI console features, including the community's rich set of traits, workflow steps, and the addon ecosystem. In version 1.7, we officially released this feature. Before looking at how to use it, let's first get a basic understanding of how it works.

"Read-only" and "take-over" modes

To fit different scenarios, KubeVela provides two modes for unified management. One is read-only mode, suitable for organizations that already run an in-house platform which keeps primary control over existing workloads; the new KubeVela-based platform can then observe these applications in a unified, read-only way. The other is take-over mode, for users who want to migrate directly: existing workloads are automatically taken over into the KubeVela system and fully managed in a unified way.
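As an illustration, the read-only mode can be expressed as an Application policy like the sketch below, based on the 1.7 read-only/take-over policies; the component and selector are placeholders, and switching the policy type to take-over yields the second mode:

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: read-only-demo
spec:
  components:
    # placeholder component; the existing workload it matches is observed
    - name: nginx
      type: webservice
      properties:
        image: nginx
  policies:
    # observe matching resources without mutating them;
    # use type: take-over instead for full unified management
    - type: read-only
      name: read-only
      properties:
        rules:
          - selector:
              resourceTypes: ["Deployment"]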

· One min read
CNCF

Reposted from CNCF

The CNCF TOC (Technical Oversight Committee) has voted to accept KubeVela as a CNCF incubating project.

KubeVela is an application delivery engine, built as a Kubernetes-based extension, that makes delivering your applications in today's popular hybrid and multi-cloud environments simpler, more efficient, and more reliable. KubeVela can orchestrate, deploy, and operate workloads and cloud resources through a workflow-based application delivery model. KubeVela's application delivery abstraction is powered by OAM (Open Application Model).


The KubeVela project evolved from the oam-kubernetes-runtime project, which was launched in the community by developers from eight different organizations, including Alibaba Cloud, Microsoft, and Upbound. It was officially open sourced in November 2020, released v1.0 in April 2021, and joined CNCF as a sandbox project in June 2021. The project now has more than 260 contributors from around the world, including China Merchants Bank, DiDi, JD.com, JiHu GitLab, SHEIN, and others.

· One min read
Da Yin

Since the Open Application Model was invented in 2020, KubeVela has gone through dozens of version changes and evolved advanced features for modern application delivery. Recently, KubeVela proposed to become a CNCF incubating project and delivered several public talks in the community. As a memorandum, this article looks back at its starting points and gives a comprehensive introduction to the state of KubeVela in 2022.

What is KubeVela?

KubeVela is a modern software platform that makes delivering and operating applications across today's hybrid, multi-cloud environments easier, faster and more reliable. It has three main features:

  • Infrastructure agnostic: KubeVela is able to deploy your cloud-native application into various destinations, such as Kubernetes multi-clusters, cloud provider runtimes (like Alibaba Cloud, AWS or Azure) and edge devices.
  • Programmable: KubeVela has abstraction layers for modeling applications and the delivery process. The abstraction layers allow users to use programmable ways to build higher-level reusable modules for application delivery and integrate arbitrary third-party projects (like FluxCD, Crossplane, Istio, Prometheus) into the KubeVela system.
  • Application-centric: There are rich tools and ecosystems designed around KubeVela applications, which add extra capabilities for delivering and operating the applications, including CLI, UI, GitOps, Observability, etc.

KubeVela cares about the whole lifecycle of applications, covering both the Day-1 Delivery and the Day-2 Operating stages. It is able to connect with a wide range of Continuous Integration tools, like Jenkins or GitLab CI, and help users deliver and operate applications across hybrid environments.
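To make "application-centric" concrete, a minimal KubeVela Application looks roughly like the sketch below; the application name, component name, and image are placeholders:

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: hello-app
spec:
  components:
    # a single web service component; name and image are placeholders
    - name: hello
      type: webservice
      properties:
        image: nginx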

· One min read
Daniel Higuero

Application Delivery on Kubernetes

The cloud-native landscape is formed by a fast-growing ecosystem of tools with the aim of improving the development of modern applications in a cloud environment. Kubernetes has become the de facto standard to deploy enterprise workloads by improving development speed, and accommodating the needs of a dynamic environment.

Kubernetes offers a comprehensive set of entities that enables any potential application to be deployed onto it, independent of its complexity. This, however, has a significant impact on its adoption: Kubernetes is becoming as complex as it is powerful, and that translates into a steep learning curve for newcomers to the ecosystem. This has generated a new trend focused on providing developers with tools that improve their day-to-day activities without losing the capabilities of the underlying system.

· One min read
孙健波

Today at the Apsara Conference, Ding Yu, General Manager of Alibaba Cloud's Cloud-Native Application Platform, announced a major upgrade of KubeVela! This upgrade is a qualitative leap formed by KubeVela's continuous evolution from application delivery to application management, and it pioneers an industry-first unified delivery-and-management application platform built on an extensible model.

· One min read
曾庆国

KubeVela 1.5 was officially released recently. This version brings the community more out-of-the-box application delivery capabilities, including new system observability; a new Cloud Shell terminal that brings the Vela CLI into the browser; enhanced canary releases; and optimized multi-environment application delivery workflows. It further polishes KubeVela's highly extensible experience as an application delivery platform. The community has also formally begun pushing the project's promotion to the CNCF Incubation stage, and several benchmark users shared their practices in community meetings, evidence of the community's healthy growth. The project has reached milestones in both maturity and adoption, thanks to the contributions of more than 200 developers in the community.

· One min read

As you may have learned from this blog post, we can use vela with the terraform addon to manage cloud resources such as S3 buckets and AWS EIPs. We can create an application containing some cloud resource components; the application provisions those cloud resources, and we can then manage them with vela.
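For instance, provisioning an S3 bucket can be as simple as the sketch below, assuming the aws-s3 component type provided by the terraform addon; the bucket name is a placeholder:

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: provision-cloud-resource-sample
spec:
  components:
    # a cloud resource component backed by Terraform
    - name: sample-s3
      type: aws-s3
      properties:
        bucket: vela-website-20230101
        acl: private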

Sometimes we already have Terraform cloud resources that were created and are managed by the Terraform binary or other programs. To get the benefits of managing cloud resources with KubeVela, or simply for consistency in how cloud resources are managed, we may want to import these existing Terraform cloud resources into KubeVela and manage them with vela. If we just create an application describing those cloud resources, they will be recreated and may cause errors. To solve this problem, we built a simple backup_restore tool. This blog post shows you how to use the backup_restore tool to import existing Terraform cloud resources into KubeVela.

· One min read

If you are looking for something to glue the Terraform ecosystem together with the Kubernetes world, congratulations! You will find the answer you want in this blog post.

As major cloud providers expand their product portfolios, basic computing infrastructure, middleware services, big data/AI services, and application operation services can all be consumed directly by enterprises and developers. We have noticed that quite a few enterprises build their own infrastructure platforms on top of services from different cloud providers. To manage cloud services more efficiently and uniformly, the IaC (Infrastructure as Code) approach has flourished in recent years, and Terraform in particular has been adopted and supported by almost every cloud provider, so that an IaC ecosystem for cloud services centered on the Terraform model has taken shape. Now that Kubernetes is prevalent, IaC invites an even broader imagination: bringing Terraform's IaC capabilities and ecosystem into the Kubernetes world is, we believe, a combination of strengths.

· One min read
孙健波, 曾庆国

KubeVela is a modern software delivery control plane that aims to make application deployment and operation simpler, more agile, and more reliable in today's hybrid, multi-cloud environments. Since the 1.1 release, KubeVela's architecture has naturally addressed the delivery challenges enterprises face in hybrid and multi-cloud environments, and it offers rich extensibility around the OAM model. This has won the affection of many enterprise developers and has kept accelerating KubeVela's pace of iteration.

In version 1.2, we released an out-of-the-box visual console that lets end users release and manage diverse workloads through a UI. The 1.3 release completed the extension system centered on the OAM model, provided rich addon capabilities, brought users a large number of enterprise-grade features including LDAP authentication, and made enterprise integration much easier. To date, you can get more than 30 addons from the KubeVela community's addon center, including not only well-known CNCF projects such as argocd, istio, and traefik, but also databases and middleware such as flink and mysql, plus hundreds of cloud provider resources ready for direct use.

In the 1.4 release, we focused on three themes: making application delivery safer, easier to get started with, and more transparent. We added core features including multi-cluster authentication and authorization, complex resource topology display, and one-click installation of the control plane, comprehensively hardening delivery security in multi-tenant scenarios, improving the consistency of the application development and delivery experience, and making the delivery process more transparent.