
· 1 min read
Fog Dong

ChatGPT is taking the tech industry by storm, thanks to its unparalleled natural language processing capabilities. As a powerful AI language model, it has the ability to understand and generate human-like responses, revolutionizing communication in various industries. From streamlining customer service chatbots to enabling seamless language translation tools, ChatGPT has already proved its mettle in creating innovative solutions that improve efficiency and user experience.

Now the question is, can we leverage ChatGPT to transform the way we deliver applications? With the integration of ChatGPT into DevOps workflows, we are witnessing the possible emergence of a new era of automation called PromptOps. This advancement in AIOps technology is revolutionizing the way businesses operate, allowing for faster and more efficient application delivery.

In this article, we will explore how to integrate ChatGPT into your DevOps workflow to deliver applications.

Integrate ChatGPT into Your DevOps Workflow

When it comes to integrating ChatGPT into DevOps workflows, many developers face the challenge of managing extra resources and writing complicated shell scripts. However, there is a better way: KubeVela Workflow. This open-source cloud-native workflow project offers a streamlined solution that eliminates the need for extra pods or complex scripting.

In KubeVela Workflow, every step has a type that can be easily abstracted and reused. Step types are programmed in the CUE language, making it easy to customize them and to use atomic capabilities like function calls in every step. Notably, with atomic capabilities such as HTTP requests already available, you can integrate ChatGPT in just five minutes by writing a new step type.

Check out the Installation Guide to get started with KubeVela Workflow. The complete code of this chat-gpt step type is available at GitHub.

Now that we've chosen the right tool, let's see what ChatGPT can do in application delivery.

Case 1: Diagnose the resources

It's quite common in the DevOps world to encounter problems like "I don't know why the pod is not running" or "I don't know why the service is not available". In this case, we can use ChatGPT to diagnose the resource.

For example, in our workflow we can apply a Deployment with an invalid image in the first step. Since the deployment will never become ready, we add a timeout to the step so that the workflow does not get stuck there. Then, by passing the unhealthy resource deployed in the first step to the second step, we can use the chat-gpt step type to diagnose it and determine the issue. Note that the second step is only executed if the first one fails.

The process of diagnosing the resource in the workflow

The complete workflow is shown below:

apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: chat-gpt-diagnose
  namespace: default
spec:
  workflowSpec:
    steps:
      # Apply an invalid deployment with a timeout
      - name: apply
        type: apply-deployment
        timeout: 3s
        properties:
          image: invalid
        # output the resource to the next step
        outputs:
          - name: resource
            valueFrom: output.value

      # Use chat-gpt to diagnose the resource
      - name: chat-diagnose
        # only execute this step if the `apply` step fails
        if: status.apply.failed
        type: chat-gpt
        # use the resource as inputs and pass it to prompt.content
        inputs:
          - from: resource
            parameterKey: prompt.content
        properties:
          token:
            value: <your token>
          prompt:
            type: diagnose

Apply this Workflow and check the result: the first step fails because of the timeout, then the second step executes and ChatGPT's analysis is shown in the log:

vela workflow logs chat-gpt-diagnose

The logs of diagnose step

Visualize in the dashboard

If you want to visualize the process and the result in the dashboard, it's time to enable the [velaux](https://kubevela.io/docs/reference/addons/velaux#install) addon.

vela addon enable velaux

Copy all the steps in the above YAML to create a pipeline.

Create the pipeline in VelaUX

Run this pipeline, and you can check out the failed reason analyzed by ChatGPT in the logs of the second step.

Run the pipeline in VelaUX

Write the chat-gpt step from scratch

How is this chat-gpt step type written? Is it hard to write a step type like this yourself? Let's see how to build it.

We can first define what this step type needs from the user: the user's ChatGPT token and the resource to diagnose. For other parameters, such as the model or the request timeout, we can set default values with * as shown below:

parameter: {
  token: value: string
  // +usage=the model name
  model: *"gpt-3.5-turbo" | string
  // +usage=the prompt to use
  prompt: {
    type:    *"diagnose" | string
    lang:    *"English" | string
    content: {...}
  }
  timeout: *"30s" | string
}

Let's complete the step type by writing its logic. We first import the vela/op package, which provides the op.#HTTPDo capability to send a request to the ChatGPT API. If the request fails, the step should fail with op.#Fail. We can also set the step's log data to ChatGPT's answer. The complete step type is shown below:

// import packages
import (
  "vela/op"
  "encoding/json"
)

// this is the name of the step type
"chat-gpt": {
  description: "Send request to chat-gpt"
  type:        "workflow-step"
}

// this is the logic of the step type
template: {
  // send http request to chat gpt
  http: op.#HTTPDo & {
    method: "POST"
    url:    "https://api.openai.com/v1/chat/completions"
    request: {
      timeout: parameter.timeout
      body: json.Marshal({
        model: parameter.model
        messages: [{
          if parameter.prompt.type == "diagnose" {
            content: """
              You are a professional kubernetes administrator.
              Carefully read the provided information, being certain to spell out the diagnosis & reasoning, and don't skip any steps.
              Answer in \(parameter.prompt.lang).
              ---
              \(json.Marshal(parameter.prompt.content))
              ---
              What is wrong with this object and how to fix it?
              """
          }
          role: "user"
        }]
      })
      header: {
        "Content-Type":  "application/json"
        "Authorization": "Bearer \(parameter.token.value)"
      }
    }
  }

  response: json.Unmarshal(http.response.body)

  fail: op.#Steps & {
    if http.response.statusCode >= 400 {
      requestFail: op.#Fail & {
        message: "\(http.response.statusCode): failed to request: \(response.error.message)"
      }
    }
  }
  result: response.choices[0].message.content
  log: op.#Log & {
    data: result
  }
  parameter: {
    token: value: string
    // +usage=the model name
    model: *"gpt-3.5-turbo" | string
    // +usage=the prompt to use
    prompt: {
      type:    *"diagnose" | string
      lang:    *"English" | string
      content: {...}
    }
    timeout: *"30s" | string
  }
}

That's it! Apply this step type and we can use it in our Workflow as shown above.

vela def apply chat-gpt.cue
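For reference, the request this step assembles can be sketched in plain Python. This is a minimal sketch, not part of KubeVela: build_chat_request is a hypothetical helper, but the payload shape (a model plus a messages list) and the diagnose prompt mirror the CUE template above.

```python
import json

# Hypothetical helper mirroring what the chat-gpt step's CUE template
# builds with json.Marshal: the ChatGPT request headers and body.
def build_chat_request(token, content, model="gpt-3.5-turbo", lang="English"):
    prompt = (
        "You are a professional kubernetes administrator.\n"
        "Carefully read the provided information, being certain to spell out "
        "the diagnosis & reasoning, and don't skip any steps.\n"
        f"Answer in {lang}.\n"
        "---\n"
        f"{json.dumps(content)}\n"
        "---\n"
        "What is wrong with this object and how to fix it?"
    )
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body
```

POSTing this body with these headers to https://api.openai.com/v1/chat/completions is what op.#HTTPDo does in the step type; the answer is then read from choices[0].message.content.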

Case 2: Audit the resource

Now ChatGPT is our Kubernetes expert and can diagnose resources. Can it also give us security advice for a resource? Definitely! It's just a matter of prompts. Let's modify the step type from the previous case to add an audit feature: add a new prompt type, audit, and pass the resource to the prompt. You can check out the whole step type on GitHub.
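The change amounts to selecting a different prompt by type, just like the CUE if blocks in the step definition. A rough Python sketch, where the audit wording is our own assumption (the real prompt text lives in the step type on GitHub):

```python
# Sketch of prompt selection by type, mirroring the `if parameter.prompt.type`
# blocks in the CUE step definition. The audit question is illustrative only.
def select_prompt(prompt_type, resource_json, lang="English"):
    if prompt_type == "diagnose":
        question = "What is wrong with this object and how to fix it?"
    elif prompt_type == "audit":
        question = "Are there any security issues with this object and how to improve it?"
    else:
        raise ValueError(f"unknown prompt type: {prompt_type}")
    return (
        "You are a professional kubernetes administrator.\n"
        f"Answer in {lang}.\n"
        f"---\n{resource_json}\n---\n{question}"
    )
```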

In the Workflow, we apply a Deployment with the nginx image and pass it to the second step, which uses the audit prompt to audit the resource.

The process of auditing the resource in the workflow

The complete Workflow is shown below:

apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: chat-gpt-audit
  namespace: default
spec:
  workflowSpec:
    steps:
      - name: apply
        type: apply-deployment
        # output the resource to the next step
        outputs:
          - name: resource
            valueFrom: output.value
        properties:
          image: nginx

      - name: chat-audit
        type: chat-gpt
        # use the resource as inputs and pass it to prompt.content
        inputs:
          - from: resource
            parameterKey: prompt.content
        properties:
          token:
            value: <your token>
          prompt:
            type: audit


Use Diagnose & Audit in one Workflow

Now that we can both diagnose and audit resources, we can use the two steps in one Workflow and use if conditions to control their execution: if the apply step fails, diagnose the resource; if it succeeds, audit it.
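The control flow here is tiny; as a sketch, with step names matching the workflow in this section:

```python
# The if-conditions (status.apply.failed / status.apply.succeeded) encode
# this simple branch: a failed apply triggers diagnosis, a successful one
# triggers an audit.
def next_step(apply_succeeded):
    return "chat-audit" if apply_succeeded else "chat-diagnose"
```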

Use diagnose & audit in one workflow

The complete Workflow is shown below:

apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: chat-gpt
  namespace: default
spec:
  workflowSpec:
    steps:
      - name: apply
        type: apply-deployment
        outputs:
          - name: resource
            valueFrom: output.value
        properties:
          image: nginx

      # if the apply step fails, then diagnose the resource
      - name: chat-diagnose
        if: status.apply.failed
        type: chat-gpt
        inputs:
          - from: resource
            parameterKey: prompt.content
        properties:
          token:
            value: <your token>
          prompt:
            type: diagnose

      # if the apply step succeeds, then audit the resource
      - name: chat-audit
        if: status.apply.succeeded
        type: chat-gpt
        inputs:
          - from: resource
            parameterKey: prompt.content
        properties:
          token:
            value: <your token>
          prompt:
            type: audit

Case 3: Use ChatGPT as a quality gate

If we want to apply resources to a production environment, can we let ChatGPT rate their quality first, and apply them to production only if the quality is high enough? Absolutely!

Note that to make the score evaluated by chat-gpt more convincing, it's better to pass metrics than the resource in this case.

Let's write our Workflow. KubeVela Workflow can apply resources to multiple clusters. The first step applies the Deployment to the test environment. The second step uses ChatGPT to rate the quality of the resource. If the quality is high enough, the third step applies the resource to the production environment.

The process of using quality gate in workflow

The complete Workflow is shown below:

apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: chat-gpt-quality-gate
  namespace: default
spec:
  workflowSpec:
    steps:
      # apply the resource to the test environment
      - name: apply
        type: apply-deployment
        # output the resource to the next step
        outputs:
          - name: resource
            valueFrom: output.value
        properties:
          image: nginx
          cluster: test

      - name: chat-quality-check
        # this step will always be executed
        if: always
        type: chat-gpt
        # get the inputs from resource and pass it to the prompt.content
        inputs:
          - from: resource
            parameterKey: prompt.content
        # output the score of ChatGPT and use strconv.Atoi to convert the score string to int
        outputs:
          - name: chat-result
            valueFrom: |
              import "strconv"
              strconv.Atoi(result)
        properties:
          token:
            value: <your token>
          prompt:
            type: quality-gate

      # if the score is higher than 60, then apply the resource to the production environment
      - name: apply-production
        type: apply-deployment
        # get the score from chat-result
        inputs:
          - from: chat-result
        # check if the score is higher than 60
        if: inputs["chat-result"] > 60
        properties:
          image: nginx
          cluster: prod

Apply this Workflow and we can see that if the score is higher than 60, then the resource will be applied to the production environment.
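The gate logic itself is tiny. A Python sketch, where int() plays the role of CUE's strconv.Atoi and the threshold of 60 comes from the workflow above (treating an unparsable answer as a failed gate is our own defensive assumption):

```python
# Sketch of the quality gate: parse ChatGPT's score string (the chat-result
# output) and promote only when it beats the workflow's threshold.
def should_promote(score_text, threshold=60):
    try:
        score = int(score_text.strip())
    except ValueError:
        # If ChatGPT answers with anything but a number, do not promote.
        return False
    return score > threshold
```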

In the End

ChatGPT brings imagination to the world of Kubernetes. Diagnosing, auditing, and rating are just the beginning. In the new AI era, the most precious thing is an idea. What do you want to do with ChatGPT? Share your insights with us in the KubeVela Community.

· 1 min read
孙健波

KubeVela 1.7 has been officially released for some time now. During this period, KubeVela was promoted to a CNCF incubating project, marking a new milestone. Version 1.7 is itself a turning point: because KubeVela has focused on an extensible architecture from the very beginning, the demand for new core controller features has gradually converged, freeing us to concentrate on user experience, ease of use, and performance. In this article, we highlight features of the 1.7 release such as workload adoption and performance optimization.

Take Over Your Workloads

Taking over existing workloads has long been a highly requested feature in the community, and its use case is clear: existing workloads can be naturally migrated into the OAM standard system, managed uniformly by KubeVela's application delivery control plane, and reuse the VelaUX UI console along with the community's rich set of operation traits, workflow steps, and the addon ecosystem. In version 1.7 we officially released this feature. Before looking at how to use it, let's first get a basic understanding of how it works.

"Read-only" and "Take-over" Modes

To suit different scenarios, KubeVela provides two modes for unified management. One is read-only mode, for organizations that already run an in-house platform which keeps primary control over existing workloads; a new KubeVela-based platform can then observe these applications in a unified, read-only way. The other is take-over mode, for users who want a direct migration: existing workloads are automatically taken over by KubeVela and fully managed in a unified way.

· 1 min read
CNCF

Reposted from CNCF

The CNCF TOC (Technical Oversight Committee) has voted to accept KubeVela as a CNCF incubating project.

KubeVela is an application delivery engine built as a Kubernetes add-on that makes delivering applications in today's popular hybrid and multi-cloud environments simpler, faster, and more reliable. KubeVela can orchestrate, deploy, and operate workloads and cloud resources through a workflow-based application delivery model. Its application delivery abstraction is powered by OAM (Open Application Model).


The KubeVela project evolved from oam-kubernetes-runtime, which was initiated in the community by developers from eight different organizations, including Alibaba Cloud, Microsoft, and Upbound. It was open-sourced in November 2020, released v1.0 in April 2021, and joined the CNCF as a sandbox project in June 2021. The project now has more than 260 contributors from around the world, including China Merchants Bank, Didi, JD.com, JiHu GitLab, and SHEIN.

· 1 min read
董天欣

Serverless App Engine (SAE) is a cloud product built on Kubernetes that combines Serverless and microservice architectures. As a continuously iterating cloud product, it has met many challenges during its rapid growth. How can these challenges be solved in the booming cloud-native era, with a reliable and fast cloud architecture upgrade? The SAE team and the KubeVela community worked closely together on these challenges and produced an open-source, reproducible cloud-native solution: KubeVela Workflow.

This article describes in detail how SAE used KubeVela Workflow to upgrade its architecture and walks through several practical scenarios.

Challenges in the Serverless Era

Serverless App Engine (SAE) is a one-stop application hosting platform for business and microservice application architectures; it is a cloud product built on Kubernetes that combines Serverless and microservice architectures.


As the architecture diagram above shows, SAE users can host many different types of business applications on SAE. Under the hood, a Java business layer handles the business logic and interacts with Kubernetes resources, all running on top of an elastic resource pool that is highly available, maintenance-free, and pay-as-you-go.

In this architecture, SAE relies mainly on its Java business layer to provide features to users. While this lets users deploy applications with one click, it also brings quite a few challenges.

· 1 min read
乔中沛

Background

As IoT scenarios become widespread and the computing power of edge devices keeps growing, extending cloud-native technology to devices and the edge, and leveraging the advantages of cloud computing to serve complex and diverse edge application scenarios, has become a new technical challenge; "cloud-edge collaboration" is gradually becoming a new technical focus. This article introduces a cloud-edge collaboration solution built around two CNCF open-source projects, KubeVela and OpenYurt, using a real Helm application delivery scenario.

OpenYurt focuses on extending Kubernetes to edge computing in a non-intrusive way. Relying on the container orchestration and scheduling capabilities of native Kubernetes, OpenYurt brings edge computing power under unified management within the Kubernetes infrastructure, providing capabilities such as edge autonomy, efficient operation channels, unitized edge management, edge traffic topology, secure containers, edge Serverless/FaaS, and heterogeneous resource support. It builds a unified infrastructure for cloud-edge collaboration in a Kubernetes-native way.

KubeVela was incubated from the OAM model and focuses on helping enterprises build unified application delivery and management capabilities. It shields business developers from the complexity of the underlying infrastructure, offers flexible extensibility, and provides out-of-the-box features such as microservice container management, cloud resource management, versioned and canary releases, scaling, observability, resource dependency orchestration and data passing, multi-cluster delivery, CI integration, and GitOps. It maximizes developer productivity for self-service application management while meeting the platform's long-term need for extensibility.

· 1 min read
Gokhan Karadas

This document aims to explain the integration of KubeVela and ArgoCD. There are two approaches to integrating this flow, and this doc explains the pros and cons of each. Before diving deep into details, let's describe KubeVela and ArgoCD.

KubeVela is a modern software delivery platform that makes deploying and operating applications across multi environments easier, faster, and more reliable.

KubeVela is infrastructure agnostic and application-centric. It allows you to build robust software and deliver it anywhere! KubeVela provides an Open Application Model (OAM) based abstraction approach to ship applications and any resource across multiple environments.

Open Application Model (OAM) is a set of standard yet higher-level abstractions for modeling cloud-native applications on top of today’s hybrid and multi-cloud environments. You can find more conceptual details here.

· 1 min read
Da Yin

Since the Open Application Model was invented in 2020, KubeVela has gone through dozens of version changes and evolved advanced features for modern application delivery. Recently, KubeVela was proposed as a CNCF incubation project and delivered several public talks in the community. As a memorandum, this article looks back at the starting points and gives a comprehensive introduction to the state of KubeVela in 2022.

What is KubeVela?

KubeVela is a modern software platform that makes delivering and operating applications across today's hybrid, multi-cloud environments easier, faster and more reliable. It has three main features:

  • Infrastructure agnostic: KubeVela is able to deploy your cloud-native application into various destinations, such as Kubernetes multi-clusters, cloud provider runtimes (like Alibaba Cloud, AWS or Azure) and edge devices.
  • Programmable: KubeVela has abstraction layers for modeling applications and delivery process. The abstraction layers allow users to use programmable ways to build higher level reusable modules for application delivery and integrate arbitrary third-party projects (like FluxCD, Crossplane, Istio, Prometheus) in the KubeVela system.
  • Application-centric: There are rich tools and eco-systems designed around the KubeVela applications, which add extra capabilities for delivering and operating the applications, including CLI, UI, GitOps, Observability, etc.

KubeVela cares about the whole lifecycle of applications, including both the Day-1 Delivery and Day-2 Operating stages. It is able to connect with a wide range of Continuous Integration tools, like Jenkins or GitLab CI, and help users deliver and operate applications across hybrid environments.

· 1 min read
Daniel Higuero

Application Delivery on Kubernetes

The cloud-native landscape is formed by a fast-growing ecosystem of tools with the aim of improving the development of modern applications in a cloud environment. Kubernetes has become the de facto standard to deploy enterprise workloads by improving development speed, and accommodating the needs of a dynamic environment.

Kubernetes offers a comprehensive set of entities that enables any potential application to be deployed into it, independent of its complexity. This, however, has a significant impact on its adoption: Kubernetes is becoming as complex as it is powerful, which translates into a steep learning curve for newcomers to the ecosystem. This has generated a new trend focused on providing developers with tools that improve their day-to-day activities without losing the capabilities of the underlying system.

· 1 min read
孙健波

Today at the Apsara Conference, Ding Yu, General Manager of Alibaba Cloud's Cloud-Native Application Platform, announced a major upgrade to KubeVela! This upgrade is a qualitative leap formed by KubeVela's steady evolution from application delivery to application management, and it pioneers a unified delivery-and-management application platform built on an extensible model.

· 1 min read
姜洪烨

KubeVela addons are a convenient way to extend KubeVela's capabilities. As we know, KubeVela is a highly extensible platform with a microkernel design: users can extend the system's capabilities through Definitions, and addons are the core mechanism for packaging and distributing these custom extensions together with their dependencies. Moreover, the community addon catalog keeps growing and now contains more than 50 addons, covering observability, microservices, FinOps, cloud resources, security, and many other scenarios.

This blog post gives a complete overview of the core mechanism of KubeVela addons and teaches you how to write a custom addon. At the end, we show the end user's experience with addons and how addons fit into the KubeVela platform to provide a consistent experience.