We are excited to introduce llvmbpf, a new project aimed at empowering developers with a high-performance, multi-architecture eBPF virtual machine (VM) that leverages the LLVM framework for Just-In-Time (JIT) and Ahead-Of-Time (AOT) compilation.
This component is part of the bpftime project but focuses solely on the core VM. It operates as a standalone eBPF VM library or a compiler tool. This library is optimized for performance, flexibility, and minimal dependencies, making it easy to integrate into various environments without unnecessary overhead.
Why llvmbpf?
Although there are several userspace eBPF runtimes available, we built llvmbpf to address specific needs that existing solutions may not fully satisfy:
AOT Compiler: The ability to compile eBPF bytecode into native ELF object files allows developers to deploy pre-compiled eBPF programs, ensuring high performance and efficiency, especially in resource-constrained environments. It also lets you experiment with different optimization techniques based on LLVM IR, providing more flexibility and control over the compilation process.
Standalone Deployment: With llvmbpf, you can build eBPF programs into standalone binaries that don’t require external dependencies. This feature is particularly useful for deploying eBPF programs on embedded systems, microcontrollers, or other environments where installing additional software is impractical. Compared to native C development, this approach keeps the eBPF part verifiable: the bytecode can still be checked by a verifier before it is compiled into the binary.
All-Architecture Support: llvmbpf is designed to be compatible across multiple architectures, making it versatile for a wide range of hardware platforms.
Maps and Relocation Support: Unlike many other userspace eBPF solutions, llvmbpf provides robust support for maps, data relocation, and lddw helper functions, allowing for the creation of more complex and powerful eBPF programs.
Extensible Optimization Approaches: Leveraging LLVM’s powerful optimization capabilities, llvmbpf allows for advanced optimizations such as inlining maps and helper functions, as well as using original LLVM IR for enhanced performance.
In this blog, we’ll walk through some practical examples of how to use llvmbpf, highlighting its core features and capabilities.
For a comprehensive userspace eBPF runtime that includes support for maps, helpers, and seamless execution of Uprobe, syscall trace, XDP, and other eBPF programs—similar to kernel functionality but in userspace—please refer to the bpftime project.
Getting Started with llvmbpf
Using llvmbpf as a Library
llvmbpf can be used as a library within your application to load and execute eBPF programs. Here’s a basic example:
```cpp
void run_ebpf_prog(const void *code, size_t code_len)
{
	uint64_t res = 0;
	llvmbpf_vm vm;

	res = vm.load_code(code, code_len);
	if (res) {
		return;
	}
	vm.register_external_function(2, "print", (void *)ffi_print_func);
	auto func = vm.compile();
	if (!func) {
		return;
	}
	int err = vm.exec(&bpf_mem, sizeof(bpf_mem), res);
	if (err != 0) {
		return;
	}
	printf("res = %" PRIu64 "\n", res);
}
```
This snippet shows how you can load eBPF bytecode, register external functions, and execute the program within the VM.
Using llvmbpf as an AOT Compiler
One of the most powerful features of llvmbpf is its ability to function as an AOT compiler, converting eBPF bytecode into native ELF object files. This approach not only boosts performance but also simplifies the deployment of eBPF programs.
You can use the CLI to compile eBPF bytecode into LLVM IR and then into a native ELF object file.
The resulting ELF object file can be linked with other object files or loaded directly into the llvmbpf runtime, making it highly versatile for different use cases.
Loading eBPF Bytecode from ELF Files
llvmbpf supports loading eBPF bytecode directly from ELF files, which is a common format for storing compiled eBPF programs. This feature is particularly useful when working with existing eBPF toolchains.
However, llvmbpf itself does not perform map creation or data relocation when loading a bpf.o ELF file. We recommend using bpftime to load and relocate the eBPF bytecode from an ELF file. This includes:
Writing a loader similar to the kernel eBPF loader to load the eBPF bytecode (see an example here).
Using libbpf, which supports:
Relocation for maps, where the map ID is allocated by the loader and bpftime. You can use the map ID to access maps through the helpers.
Accessing data through the lddw helper function.
After loading the eBPF bytecode and completing relocation, you can use the bpftimetool to dump the map information and eBPF bytecode.
Maps and Data Relocation Support
llvmbpf offers extensive support for maps and data relocation, allowing developers to write more complex eBPF programs that interact with different data sources. For instance, you can use helper functions to access maps or define maps as global variables in your eBPF programs.
An eBPF program can work with maps in two ways:
Using helper functions to access the maps, like bpf_map_lookup_elem, bpf_map_update_elem, etc.
Using maps as global variables in the eBPF program and accessing the maps directly.
Compiling eBPF Programs into Standalone Binaries
One of the standout features of llvmbpf is the ability to compile eBPF programs into standalone binaries. This makes it possible to deploy eBPF applications in environments where installing dependencies is not feasible, such as microcontrollers or other embedded systems.
You can build the eBPF program into a standalone binary that does not rely on any external libraries and can be executed like normal C code with helper and map support.
This approach offers several benefits:
Easily deploy the eBPF program to any machine without needing to install dependencies.
Avoid the overhead of loading the eBPF bytecode and maps at runtime.
Make it suitable for microcontrollers or embedded systems that do not have an OS.
```c
int main()
{
	printf("The value of cntrs_array[0] is %" PRIu64 "\n", cntrs_array[0]);
	printf("calling ebpf program...\n");
	bpf_main(bpf_mem, sizeof(bpf_mem));
	printf("The value of cntrs_array[0] is %" PRIu64 "\n", cntrs_array[0]);
	return 0;
}
```
Compile the C code with the LLVM IR:
```sh
clang -g main.c xdp-counter.ll -o standalone
```
You can then run the standalone eBPF program directly. Compared to native C development, this keeps the eBPF part verifiable: the bytecode can still be checked by a verifier before being built into the binary.
Optimization Techniques
llvmbpf provides several optimization techniques to enhance the performance of eBPF programs. Two notable methods include:
Inlining Maps and Helper Functions
By inlining maps and helper functions, llvmbpf reduces the overhead of function calls, enabling more efficient execution of eBPF programs.
Using Original LLVM IR from C Code
Instead of relying solely on eBPF instructions, llvmbpf allows developers to use original LLVM IR generated from C code. This flexibility opens the door for more advanced optimizations and higher performance.
eBPF is an instruction set designed for verification, but it may not be the best for performance. llvmbpf also supports using the original LLVM IR from C code. See example/load-llvm-ir for an example. You can:
Compile the C code to eBPF for verification.
Compile the C code to LLVM IR and native code for execution in the VM.
Conclusion
llvmbpf is a powerful tool for developers looking to leverage eBPF outside the kernel. With features like AOT compilation, standalone deployment, and extensive support for maps and relocation, it offers a flexible and high-performance solution for a wide range of use cases. Whether you’re working on networking, security, or performance monitoring applications, llvmbpf provides the tools you need to build efficient and portable eBPF programs.
Kernel programming can be intimidating, requiring deep knowledge of operating system internals and programming constraints. Our latest tool, Kgent, aims to change that by making it easier than ever to create extended Berkeley Packet Filters (eBPF) programs. Kgent leverages the power of large language models (LLMs) to translate natural language prompts into eBPF code, opening up kernel programming to a wider audience.
Our paper, “Kgent: Kernel Extensions Large Language Model Agent,” was recently presented at eBPF ‘24: Proceedings of the ACM SIGCOMM 2024 Workshop on eBPF and Kernel Extensions. Let’s dive into what makes Kgent a game-changer for kernel programming.
The Key Idea Behind Kgent
Kgent simplifies the traditionally complex process of writing eBPF programs. By translating user prompts in natural language to eBPF code, it eliminates the need for deep OS kernel knowledge. This tool combines program comprehension, symbolic execution, and feedback loops to ensure the synthesized program is accurate and aligns with the user’s intent.
Highlights
Natural Language to eBPF: Kgent can take user prompts in plain English and convert them into functional eBPF programs.
Combination of Techniques: It employs a mix of program comprehension, symbolic execution, and feedback loops to ensure high accuracy.
Evaluation: Our tests show that Kgent achieves a 2.67x improvement over GPT-4 in producing correct eBPF programs, with a high accuracy rate and minimal false positives.
Potential Use Cases
Kgent can be utilized in various scenarios to facilitate kernel development and management:
System Administrators: Helps junior sys admins create and maintain eBPF programs without needing extensive OS kernel knowledge.
DevOps Personnel: Assists in writing and deploying kernel extensions for monitoring and tracing applications, enhancing system performance and security.
Patch Makers: Simplifies the creation of patches by translating natural language descriptions of issues and fixes into eBPF programs.
Kernel Developers: Speeds up the prototyping and validation of kernel extensions, saving time and reducing errors.
Educational Purposes: Serves as a learning aid for students and new developers to understand eBPF programming through natural language interactions.
Research and Experimentation: Provides a platform for researchers to explore new eBPF applications and test hypotheses without diving into complex coding.
Network Tools Development: Eases the creation of custom network monitoring, security, and performance analysis tools by translating high-level requirements into efficient eBPF programs.
Why do we need Kgent instead of just asking GPT?
While large language models (LLMs) like GPT-4 can suggest code, they often recommend incorrect helpers or non-existent APIs—a phenomenon known as hallucination. Given the small and limited set of helpers and kfuncs in eBPF, these issues can be fixed relatively easily. Another common issue is incorrect attach points. In eBPF, programs must attach to specific kernel events, such as kprobes, tracepoints, and perf events. Incorrect attach events can either be rejected by the kernel or, worse, pass the verifier and load incorrectly, leading to wrong results.
The eBPF verifier adds another layer of complexity. For instance, loop code often cannot pass the verifier due to safety checks. Although the verifier prevents harmful code, it cannot always prevent incorrect code. For example, when asked to write a program to trace TCP connect events, GPT-4’s generated code failed to read the port number correctly and didn’t consider IPv6.
To help the LLM learn about new knowledge like eBPF, common approaches include fine-tuning or Retrieval-Augmented Generation (RAG). However, publicly available examples of eBPF are insufficient, and eBPF abilities can change across kernel versions. RAG is a promising solution, as it allows the model to retrieve the most up-to-date and relevant information from external sources. This method combines language model generation with relevant information retrieval from a vector database.
The LLM Agent Framework
To address these issues, we built an LLM agent with three core components: planning, tools, and memory.
Plan Component The agent follows a predefined workflow:
Prompter: Retrieves related examples, attach points, and specs based on user input.
Synthesis Engine: Creates eBPF candidates from the prompt.
Comprehension Engine: Annotates the eBPF candidate, adding necessary assumptions and assertions for verification.
Symbolic Verifier: Verifies the candidate’s behavior. If invalid, the process iterates until a valid program is produced, forming a feedback loop. For some cases, it can also use ReAct mode for decision-making.
Tools Component The agent can use various tools like clang to compile eBPF programs, Seahorn for verification, and bpftrace for obtaining attach points and running eBPF programs.
Memory Component The agent uses short-term in-context memory to remember past actions, errors, and decisions, ensuring the feedback loop is successful.
Example Workflow Let’s take a simple bpftrace program as an example. Suppose a user requests: “Trace tcp_connect events for both IPv4 and IPv6 connection attempts, and display the source and destination IP addresses.” The agent forms a prompt based on a predefined template and asks the LLM to generate the program. We use in-context learning and few-shot techniques, including examples in the template’s context. The examples vector database contains samples from BCC, bpftrace, and our own collection. The agent searches for similar examples based on user input and includes these examples in the prompt.
We also built a pipeline to generate specifications and descriptions for each hook point and helper function from the kernel source code. For instance, when building the spec database, we generate the spec for the tcp_connect_init function in the kernel using the LLM. During the synthesis step, the agent can search for related function specs with user input in the vector database.
Limitations and Future Work
While Kgent is a significant step forward, it has some limitations. Currently, our implementation focuses on small programs under 100 lines due to the LLM’s context window limit. Additionally, our eBPF program dataset is relatively small, which restricts the tool’s ability to handle more complex and varied tasks. Right now, Kgent’s use cases are mostly limited to simple trace programs and network functions.
We are exploring ways to extend Kgent’s capabilities. For example, we know that tools like ChatGPT can handle many tasks using its Python code interpreter. This raises exciting possibilities: can we automate larger tasks like auto-monitoring and auto-performance tuning? Could an LLM help analyze results from different tools and even find these tools automatically? Could it play a role in rapidly developing solutions for urgent problems?
To tackle these challenges, we are considering splitting larger tasks into smaller, manageable parts, similar to the approach used by AutoGPT. This would allow the LLM to plan the overall structure of the program, generate each component, and then merge them together. Additionally, involving users in the iteration process could provide interactive feedback, improving the quality of the generated programs.
We also acknowledge that writing correct Hoare contracts is challenging for LLMs, and current verification methods may not cover all behaviors of the generated eBPF programs. To improve this, we need better background descriptions and more robust Hoare expressions. Incorporating more software engineering practices, such as counterexample generation and test-driven development, could help ensure comprehensive verification.
Another critical concern is security. Since eBPF runs in the kernel, any flaws could lead to significant issues. We plan to involve users more in the review process to mitigate these risks and ensure the safety of the generated programs.
Conclusion
Kgent is revolutionizing the way we approach kernel programming by making eBPF program creation accessible to a broader audience. By translating natural language into functional eBPF code, it opens up kernel extension development to system administrators, DevOps personnel, patch makers, and more. Our paper, presented at eBPF ‘24, highlights the potential of this tool to democratize kernel programming and foster innovation.
We invite you to explore Kgent and see how it can transform your approach to kernel development. For more details, check out our eBPF’24 paper and visit our GitHub repository. For additional details, refer to the earlier Arxiv version: KEN: Kernel Extensions using Natural Language. For a more usable and simplified tool, check out GPTtrace. You can also try the GPTtrace simplified web demo here.
By lowering the barrier to entry for writing eBPF programs, Kgent is promoting innovation and enhancing system capabilities, one natural language prompt at a time.
In today’s technology landscape, with the rise of microservices, cloud-native applications, and complex distributed systems, observability of systems has become a crucial factor in ensuring their health, performance, and security. Especially in a microservices architecture, application components may be distributed across multiple containers and servers, making traditional monitoring methods often insufficient to provide the depth and breadth needed to fully understand the behavior of the system. This is where observing seven-layer protocols such as HTTP, gRPC, MQTT, and more becomes particularly important.
Seven-layer protocols provide detailed insights into how applications interact with other services and components. In a microservices environment, understanding these interactions is vital, as they often serve as the root causes of performance bottlenecks, failures, and security issues. However, monitoring these protocols is not a straightforward task. Traditional network monitoring tools like tcpdump, while effective at capturing network traffic, often fall short when dealing with the complexity and dynamism of seven-layer protocols.
This is where eBPF (extended Berkeley Packet Filter) technology comes into play. eBPF allows developers and operators to delve deep into the kernel layer, observing and analyzing system behavior in real-time without the need to modify or insert instrumentation into application code. This presents a unique opportunity to handle application layer traffic more simply and efficiently, particularly in microservices environments.
In this tutorial, we will delve into the following:
Tracking seven-layer protocols such as HTTP and the challenges associated with them.
eBPF’s socket filter and syscall tracing: How these two technologies assist in tracing HTTP network request data at different kernel layers, and the advantages and limitations of each.
eBPF practical tutorial: How to develop an eBPF program and utilize eBPF socket filter or syscall tracing to capture and analyze HTTP traffic.
As network traffic increases and applications grow in complexity, gaining a deeper understanding of seven-layer protocols becomes increasingly important. Through this tutorial, you will acquire the necessary knowledge and tools to more effectively monitor and analyze your network traffic, ultimately enhancing the performance of your applications and servers.
This article is part of the eBPF Developer Tutorial, and for more detailed content, you can visit here. The source code is available on the GitHub repository.
Challenges in Tracking HTTP, HTTP/2, and Other Seven-Layer Protocols
In the modern networking environment, seven-layer protocols extend beyond just HTTP. In fact, there are many seven-layer protocols such as HTTP/2, gRPC, MQTT, WebSocket, AMQP, and SMTP, each serving critical roles in various application scenarios. These protocols provide detailed insights into how applications interact with other services and components. However, tracking these protocols is not a simple task, especially within complex distributed systems.
Diversity and Complexity: Each seven-layer protocol has its specific design and workings. For example, gRPC utilizes HTTP/2 as its transport protocol and supports multiple languages, while MQTT is a lightweight publish/subscribe messaging transport protocol designed for low-bandwidth and unreliable networks.
Dynamism: Many seven-layer protocols are dynamic, meaning their behavior can change based on network conditions, application requirements, or other factors.
Encryption and Security: With increased security awareness, many seven-layer protocols employ encryption technologies such as TLS/SSL. This introduces additional challenges for tracking and analysis, as decrypting traffic is required for in-depth examination.
High-Performance Requirements: In high-traffic production environments, capturing and analyzing traffic for seven-layer protocols can impact system performance. Traditional network monitoring tools may struggle to handle a large number of concurrent sessions.
Data Completeness and Continuity: Unlike tools like tcpdump, which capture individual packets, tracking seven-layer protocols requires capturing complete sessions, which may involve multiple packets. This necessitates tools capable of correctly reassembling and parsing these packets to provide a continuous session view.
Code Intrusiveness: To gain deeper insights into the behavior of seven-layer protocols, developers may need to modify application code to add monitoring functionalities. This not only increases development and maintenance complexity but can also impact application performance.
As mentioned earlier, eBPF provides a powerful solution, allowing us to capture and analyze seven-layer protocol traffic in the kernel layer without modifying application code. This approach not only offers insights into system behavior but also ensures optimal performance and efficiency. This is why eBPF has become the preferred technology for modern observability tools, especially in production environments that demand high performance and low latency.
eBPF Socket Filter vs. Syscall Tracing: In-Depth Analysis and Comparison
eBPF Socket Filter
What Is It? eBPF socket filter is an extension of the classic Berkeley Packet Filter (BPF) that allows for more advanced packet filtering directly within the kernel. It operates at the socket layer, enabling fine-grained control over which packets are processed by user-space applications.
Key Features:
Performance: By handling packets directly within the kernel, eBPF socket filters reduce the overhead of context switches between user and kernel spaces.
Flexibility: eBPF socket filters can be attached to any socket, providing a universal packet filtering mechanism for various protocols and socket types.
Programmability: Developers can write custom eBPF programs to define complex filtering logic beyond simple packet matching.
Use Cases:
Traffic Control: Restrict or prioritize traffic based on custom conditions.
Security: Discard malicious packets before they reach user-space applications.
Monitoring: Capture specific packets for analysis without affecting other traffic.
eBPF Syscall Tracing
What Is It? System call tracing using eBPF allows monitoring and manipulation of system calls made by applications. System calls are the primary mechanism through which user-space applications interact with the kernel, making tracing them a valuable way to understand application behavior.
Key Features:
Granularity: eBPF allows tracing specific system calls, even specific parameters within those system calls.
Low Overhead: Compared to other tracing methods, eBPF syscall tracing is designed to have minimal performance impact.
Security: Kernel validates eBPF programs to ensure they do not compromise system stability.
How It Works: eBPF syscall tracing typically involves attaching eBPF programs to tracepoints or kprobes related to the system calls being traced. When the traced system call is invoked, the eBPF program is executed, allowing data collection or even modification of system call parameters.
Comparison of eBPF Socket Filter and Syscall Tracing
| Aspect | eBPF Socket Filter | eBPF Syscall Tracing |
| --- | --- | --- |
| Operational Layer | Socket layer, primarily dealing with network packets received from or sent to sockets. | System call layer, monitoring and potentially altering the behavior of system calls made by applications. |
| Primary Use Cases | Mainly used for filtering, monitoring, and manipulation of network packets. | Used for performance analysis, security monitoring, and debugging of interactions with the network. |
| Granularity | Focuses on individual network packets. | Can monitor a wide range of system activities, including those unrelated to networking. |
| Tracking HTTP Traffic | Can be used to filter and capture HTTP packets passed through sockets. | Can trace system calls associated with networking operations, which may include HTTP traffic. |
In summary, both eBPF socket filters and syscall tracing can be used to trace HTTP traffic, but socket filters are more direct and suitable for this purpose. However, if you are interested in the broader context of how an application interacts with the system (e.g., which system calls lead to HTTP traffic), syscall tracing can be highly valuable. In many advanced observability setups, both tools may be used simultaneously to provide a comprehensive view of system and network behavior.
Capturing HTTP Traffic with eBPF Socket Filter
eBPF code consists of user-space and kernel-space components, and here we primarily focus on the kernel-space code. Below is the main logic for capturing HTTP traffic in the kernel using eBPF socket filter technology; the complete source is available in the tutorial repository.
This is the entry point of the eBPF program, defining a function named socket_handler that the kernel uses to handle incoming network packets. This function is located in an eBPF section named socket, indicating that it is intended for socket handling.
In this code block, several variables are defined to store information needed during packet processing. These variables include struct so_event *e for storing event information, verlen, proto, nhoff, ip_proto, tcp_hdr_len, tlen, payload_offset, payload_length, and hdr_len for storing packet information.
struct so_event *e;: This is a pointer to the so_event structure for storing captured event information. The specific definition of this structure is located elsewhere in the program.
__u8 verlen;, __u16 proto;, __u32 nhoff = ETH_HLEN;: These variables are used to store various pieces of information, such as protocol types, packet offsets, etc. nhoff is initialized to the length of the Ethernet frame header, typically 14 bytes, as Ethernet frame headers include destination MAC address, source MAC address, and frame type fields.
__u32 ip_proto = 0;: This variable is used to store the type of the IP protocol and is initialized to 0.
__u32 tcp_hdr_len = 0;: This variable is used to store the length of the TCP header and is initialized to 0.
__u16 tlen;: This variable is used to store the total length of the IP packet.
__u32 payload_offset = 0;, __u32 payload_length = 0;: These two variables are used to store the offset and length of the HTTP request payload.
__u8 hdr_len;: This variable is used to store the length of the IP header.
```c
bpf_skb_load_bytes(skb, 12, &proto, 2);
proto = __bpf_ntohs(proto);
if (proto != ETH_P_IP)
	return 0;
```
Here, the code loads the Ethernet frame type field from the packet, which tells us the network layer protocol being used in the packet. It then uses the __bpf_ntohs function to convert the network byte order type field into host byte order. Next, the code checks if the type field is not equal to the Ethernet frame type for IPv4 (0x0800). If it’s not equal, it means the packet is not an IPv4 packet, and the function returns 0, indicating that the packet should not be processed.
Key concepts to understand here:
Ethernet Frame: The Ethernet frame is a data link layer (Layer 2) protocol used for transmitting data frames within a local area network (LAN). Ethernet frames typically include destination MAC address, source MAC address, and frame type fields.
Network Byte Order: Network protocols often use big-endian byte order to represent data. Therefore, data received from the network needs to be converted into host byte order for proper interpretation on the host. Here, the type field from the network is converted to host byte order for further processing.
IPv4 Frame Type (ETH_P_IP): This represents the frame type field in the Ethernet frame, where 0x0800 indicates IPv4.
```c
if (ip_is_fragment(skb, nhoff))
	return 0;
```
This part of the code checks if IP fragmentation is being handled. IP fragmentation is a mechanism for splitting larger IP packets into multiple smaller fragments for transmission. Here, if the packet is an IP fragment, the function returns 0, indicating that only complete packets will be processed.
The above code is a helper function used to check if the incoming IPv4 packet is an IP fragment. IP fragmentation is a mechanism where, if the size of an IP packet exceeds the Maximum Transmission Unit (MTU) of the network, routers split it into smaller fragments for transmission across the network. The purpose of this function is to examine the fragment flags and fragment offset fields within the packet to determine if it is a fragment.
Here’s an explanation of the code line by line:
__u16 frag_off;: Defines a 16-bit unsigned integer variable frag_off to store the fragment offset field.
bpf_skb_load_bytes(skb, nhoff + offsetof(struct iphdr, frag_off), &frag_off, 2);: This line of code uses the bpf_skb_load_bytes function to load the fragment offset field from the packet. nhoff is the offset of the IP header within the packet, and offsetof(struct iphdr, frag_off) calculates the offset of the fragment offset field within the IPv4 header.
frag_off = __bpf_ntohs(frag_off);: Converts the loaded fragment offset field from network byte order (big-endian) to host byte order. Network protocols typically use big-endian to represent data, and the conversion to host byte order is done for further processing.
return frag_off & (IP_MF | IP_OFFSET);: This line of code checks the value of the fragment offset field using a bitwise AND operation with two flag values:
IP_MF: Represents the “More Fragments” flag. If this flag is set to 1, it indicates that the packet is part of a fragmented sequence and more fragments are expected.
IP_OFFSET: Represents the fragment offset field. If the fragment offset field is non-zero, it indicates that the packet is part of a fragmented sequence and has a fragment offset value. If either of these flags is set to 1, the result is non-zero, indicating that the packet is an IP fragment. If both flags are 0, it means the packet is not fragmented.
It’s important to note that the fragment offset field in the IP header is specified in units of 8 bytes, so the actual byte offset is obtained by left-shifting the value by 3 bits. Additionally, the “More Fragments” flag (IP_MF) in the IP header indicates whether there are more fragments in the sequence and is typically used in conjunction with the fragment offset field to indicate the status of fragmented packets.
In this part of the code, the length of the IP header is loaded from the packet. The IP header length field contains information about the length of the IP header in units of 4 bytes, and it needs to be converted to bytes. Here, it is converted by performing a bitwise AND operation with 0x0f and then multiplying it by 4.
Key concept:
IP Header: The IP header contains fundamental information about a packet, such as the source IP address, destination IP address, protocol type, total length, identification, flags, fragment offset, time to live (TTL), and header checksum.
```c
if (hdr_len < sizeof(struct iphdr)) {
	return 0;
}
```
This code segment checks if the length of the IP header meets the minimum length requirement, typically 20 bytes. If the length of the IP header is less than 20 bytes, it indicates an incomplete or corrupted packet, and the function returns 0, indicating that the packet should not be processed.
Key concept:
struct iphdr: This is a structure defined in the Linux kernel, representing the format of an IPv4 header. It includes fields such as version, header length, service type, total length, identification, flags, fragment offset, time to live, protocol, header checksum, source IP address, and destination IP address, among others.
Here, the code loads the protocol field from the IP header to determine the transport layer protocol used in the packet. Then, it checks if the protocol field is not equal to the value for TCP (IPPROTO_TCP). If it’s not TCP, it means the packet is not an HTTP request or response, and the function returns 0.
Key concept:
Transport Layer Protocol: The protocol field in the IP header indicates the transport layer protocol used in the packet, such as TCP, UDP, or ICMP.
```c
tcp_hdr_len = nhoff + hdr_len;
```
This line of code calculates the offset of the TCP header. It adds the length of the Ethernet frame header (nhoff) to the length of the IP header (hdr_len) to obtain the starting position of the TCP header.
```c
bpf_skb_load_bytes(skb, nhoff + 0, &verlen, 1);
```
This line of code loads the byte at offset nhoff — the first byte of the IP header, which packs the version and header-length (IHL) fields into a single byte. The length nibble is specified in units of 4 bytes and requires further conversion.
This line of code loads the total length field of the IP header from the packet. The IP header’s total length field represents the overall length of the IP packet, including both the IP header and the data portion.
This piece of code is used to calculate the length of the TCP header. It loads the byte containing the Data Offset field (also known as the Header Length field) from the TCP header, which represents the length of the TCP header in units of 4 bytes and occupies the high four bits of that byte. The code masks off everything but those high four bits, shifts the value right by 4 bits, and finally multiplies it by 4 to obtain the actual length of the TCP header in bytes.
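A minimal sketch of that conversion, using a hypothetical helper name (`tcp_header_len`) for illustration — the argument is the TCP header byte that carries the Data Offset field in its high nibble:

```c
#include <assert.h>
#include <stdint.h>

/* The TCP Data Offset field counts 32-bit words and sits in the high
 * nibble of its byte; convert it to a byte count. */
static uint8_t tcp_header_len(uint8_t doff_byte)
{
    doff_byte &= 0xf0;    /* keep the high four bits (Data Offset) */
    doff_byte >>= 4;      /* move the word count down */
    return doff_byte * 4; /* words -> bytes */
}
```

A value of `0x50` (Data Offset 5, no reserved bits set) gives the minimum 20-byte TCP header.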
Key points to understand:
TCP Header: The TCP header contains information related to the TCP protocol, such as source port, destination port, sequence number, acknowledgment number, flags (e.g., SYN, ACK, FIN), window size, and checksum.
These two lines of code calculate the offset and length of the HTTP request payload. They add the lengths of the Ethernet frame header, IP header, and TCP header together to obtain the offset to the data portion of the HTTP request. Then, by subtracting the IP header length and TCP header length from the IP total length field, they calculate the length of the HTTP request data.
Key point:
HTTP Request Payload: The actual data portion included in an HTTP request, typically consisting of the HTTP request headers and request body.
This portion of the code loads the first 7 bytes of the HTTP request line and stores them in a character array named line_buffer. It then checks if the length of the HTTP request data is less than 7 bytes or if the offset is negative. If these conditions are met, it indicates an incomplete HTTP request, and the function returns 0. Finally, it uses the bpf_printk function to print the content of the HTTP request line to the kernel log for debugging and analysis.
This piece of code uses the bpf_strncmp function to compare the data in line_buffer against the HTTP method prefixes (GET, POST, PUT, DELETE) and the HTTP response prefix (HTTP). If none of them match, the packet is not HTTP traffic and the function returns 0, indicating that it should not be processed.
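A user-space analogue of this check can be written with the standard strncmp, which bpf_strncmp mirrors; the helper name `looks_like_http` is hypothetical:

```c
#include <assert.h>
#include <string.h>

/* Returns non-zero if the buffer starts with an HTTP method or the
 * "HTTP" response prefix — the same prefix test the eBPF program does. */
static int looks_like_http(const char *line_buffer)
{
    return strncmp(line_buffer, "GET", 3) == 0 ||
           strncmp(line_buffer, "POST", 4) == 0 ||
           strncmp(line_buffer, "PUT", 3) == 0 ||
           strncmp(line_buffer, "DELETE", 6) == 0 ||
           strncmp(line_buffer, "HTTP", 4) == 0;
}
```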
```c
e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);
if (!e)
    return 0;
```
This section of the code attempts to reserve a block of memory from the BPF ring buffer to store event information. If it cannot reserve the memory block, it returns 0. The BPF ring buffer is used to pass event data between the eBPF program and user space.
Key point:
BPF Ring Buffer: The BPF ring buffer is a mechanism for passing data between eBPF programs and user space. It can be used to store event information for further processing or analysis by user space applications.
This code segment stores the captured event information in the e structure and submits it to the BPF ring buffer. It includes information such as the captured IP protocol, source and destination ports, packet type, interface index, payload length, source IP address, and destination IP address. Finally, it returns the length of the packet, indicating that the packet was successfully processed.
This code is primarily used to store captured event information for further processing. The BPF ring buffer is used to pass this information to user space for additional handling or logging.
In summary, this eBPF program’s main task is to capture HTTP requests. It accomplishes this by parsing the Ethernet frame, IP header, and TCP header of incoming packets to determine if they contain HTTP requests. Information about the requests is then stored in the so_event structure and submitted to the BPF ring buffer. This is an efficient method for capturing HTTP traffic at the kernel level and is suitable for applications such as network monitoring and security analysis.
Potential Limitations
The above code has some potential limitations, and one of the main limitations is that it cannot handle URLs that span multiple packets.
Cross-Packet URLs: The code checks the URL in an HTTP request by parsing a single data packet. If the URL of an HTTP request spans multiple packets, it will only examine the URL in the first packet. This can lead to missing or partially capturing long URLs that span multiple data packets.
To address this issue, a solution often involves reassembling multiple packets to reconstruct the complete HTTP request. This may require implementing packet caching and assembly logic within the eBPF program and waiting to collect all relevant packets until the HTTP request is detected. This adds complexity and may require additional memory to handle cases where URLs span multiple packets.
User-Space Code
The user-space code’s main purpose is to create a raw socket and then attach the previously defined eBPF program in the kernel to that socket, allowing the eBPF program to capture and process network packets received on that socket. Here’s an example of the user-space code:
```c
/* Create raw socket for localhost interface */
sock = open_raw_sock(interface);
if (sock < 0) {
    err = -2;
    fprintf(stderr, "Failed to open raw socket\n");
    goto cleanup;
}

/* Attach BPF program to raw socket */
prog_fd = bpf_program__fd(skel->progs.socket_handler);
if (setsockopt(sock, SOL_SOCKET, SO_ATTACH_BPF, &prog_fd,
               sizeof(prog_fd))) {
    err = -3;
    fprintf(stderr, "Failed to attach to raw socket\n");
    goto cleanup;
}
```
sock = open_raw_sock(interface);: This line of code calls a custom function open_raw_sock, which is used to create a raw socket. Raw sockets allow a user-space application to handle network packets directly without going through the protocol stack. The interface parameter specifies the network interface from which to receive packets, determining where to capture packets from. If creating the socket fails, the function returns a negative value; otherwise, it returns the socket's file descriptor in sock.
If the value of sock is less than 0, indicating a failure to open the raw socket, it sets err to -2 and prints an error message on the standard error stream.
prog_fd = bpf_program__fd(skel->progs.socket_handler);: This line of code retrieves the file descriptor of the socket filter program (socket_handler) previously defined in the eBPF program. It is necessary to attach this program to the socket. skel is a pointer to an eBPF program object, and it provides access to the program collection.
setsockopt(sock, SOL_SOCKET, SO_ATTACH_BPF, &prog_fd, sizeof(prog_fd)): This line of code uses the setsockopt system call to attach the eBPF program to the raw socket. It sets the SO_ATTACH_BPF option and passes the file descriptor of the eBPF program to the option, letting the kernel know which eBPF program to apply to this socket. If the attachment is successful, the socket starts capturing and processing network packets received on it.
If setsockopt fails, it sets err to -3 and prints an error message on the standard error stream.
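The open_raw_sock helper itself is not shown above. A plausible implementation, modeled on the AF_PACKET pattern used by the libbpf-bootstrap examples (the exact helper in the original source may differ), creates a packet socket and binds it to the named interface:

```c
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch of open_raw_sock: an AF_PACKET raw socket bound to one
 * interface, receiving all protocols. Returns the fd, or -1 on error. */
static int open_raw_sock(const char *iface)
{
    struct sockaddr_ll sll;
    int sock = socket(PF_PACKET, SOCK_RAW | SOCK_NONBLOCK | SOCK_CLOEXEC,
                      htons(ETH_P_ALL));
    if (sock < 0)
        return -1;

    memset(&sll, 0, sizeof(sll));
    sll.sll_family = AF_PACKET;
    sll.sll_ifindex = if_nametoindex(iface);
    sll.sll_protocol = htons(ETH_P_ALL);
    if (sll.sll_ifindex == 0 ||
        bind(sock, (struct sockaddr *)&sll, sizeof(sll)) < 0) {
        close(sock);
        return -1;
    }
    return sock;
}
```

Note that creating an AF_PACKET socket requires the CAP_NET_RAW capability, which is why these examples are typically run as root.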
Capturing HTTP Traffic Using eBPF Syscall Tracepoints
eBPF provides a powerful mechanism for tracing system calls at the kernel level. In this example, we’ll use eBPF to trace the accept and read system calls to capture HTTP traffic. Due to space limitations, we’ll provide a brief overview of the code framework.
```c
// Define a tracepoint at the entry of the accept system call
SEC("tracepoint/syscalls/sys_enter_accept")
int sys_enter_accept(struct trace_event_raw_sys_enter *ctx)
{
    u64 id = bpf_get_current_pid_tgid();
    // ... Get and store the arguments of the accept call
    bpf_map_update_elem(&active_accept_args_map, &id, &accept_args, BPF_ANY);
    return 0;
}

// Define a tracepoint at the exit of the accept system call
SEC("tracepoint/syscalls/sys_exit_accept")
int sys_exit_accept(struct trace_event_raw_sys_exit *ctx)
{
    // ... Process the result of the accept call
    struct accept_args_t *args =
        bpf_map_lookup_elem(&active_accept_args_map, &id);
    // ... Get and store the socket file descriptor obtained from the accept call
    __u64 pid_fd = ((__u64)pid << 32) | (u32)ret_fd;
    bpf_map_update_elem(&conn_info_map, &pid_fd, &conn_info, BPF_ANY);
    // ...
}

// Define a tracepoint at the entry of the read system call
SEC("tracepoint/syscalls/sys_enter_read")
int sys_enter_read(struct trace_event_raw_sys_enter *ctx)
{
    // ... Get and store the arguments of the read call
    bpf_map_update_elem(&active_read_args_map, &id, &read_args, BPF_ANY);
    return 0;
}

// Helper function to check if it's an HTTP connection
static inline bool is_http_connection(const char *line_buffer, u64 bytes_count)
{
    // ... Check if the data is an HTTP request or response
}

// Helper function to process the read data
static inline void process_data(struct trace_event_raw_sys_exit *ctx, u64 id,
                                const struct data_args_t *args, u64 bytes_count)
{
    // ... Process the read data, check if it's HTTP traffic, and send events
    if (is_http_connection(line_buffer, bytes_count)) {
        // ...
        bpf_probe_read_kernel(&event.msg, read_size, args->buf);
        // ...
        bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, &event,
                              sizeof(struct socket_data_event_t));
    }
}

// Define a tracepoint at the exit of the read system call
SEC("tracepoint/syscalls/sys_exit_read")
int sys_exit_read(struct trace_event_raw_sys_exit *ctx)
{
    // ... Process the result of the read call
    struct data_args_t *read_args =
        bpf_map_lookup_elem(&active_read_args_map, &id);
    if (read_args != NULL) {
        process_data(ctx, id, read_args, bytes_count);
    }
    // ...
    return 0;
}

char _license[] SEC("license") = "GPL";
```
This code briefly demonstrates how to use eBPF to trace system calls in the Linux kernel to capture HTTP traffic. Here’s a detailed explanation of the hook locations and the flow, as well as the complete set of system calls that need to be hooked for comprehensive request tracing:
Hook Locations and Flow
The code uses eBPF Tracepoint functionality. Specifically, it defines a series of eBPF programs and binds them to specific system call Tracepoints to capture entry and exit events of these system calls.
First, it defines two eBPF hash maps (active_accept_args_map and active_read_args_map) to store system call parameters. These maps are used to track accept and read system calls.
Next, it defines multiple Tracepoint tracing programs, including:
sys_enter_accept: Defined at the entry of the accept system call, used to capture the arguments of the accept system call and store them in the hash map.
sys_exit_accept: Defined at the exit of the accept system call, used to process the result of the accept system call, including obtaining and storing the new socket file descriptor and related connection information.
sys_enter_read: Defined at the entry of the read system call, used to capture the arguments of the read system call and store them in the hash map.
sys_exit_read: Defined at the exit of the read system call, used to process the result of the read system call, including checking if the read data is HTTP traffic and sending events.
In sys_exit_accept and sys_exit_read, there is also some data processing and event sending logic, such as checking if the data is an HTTP connection, assembling event data, and using bpf_perf_event_output to send events to user space for further processing.
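The key trick that ties the two tracepoints together is the combined map key built in sys_exit_accept: the PID goes into the upper 32 bits and the file descriptor into the lower 32 bits, so a later read on the same (pid, fd) pair can look up the connection. A small user-space sketch of that packing (the helper name `make_pid_fd` is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Pack a PID and a file descriptor into one 64-bit map key,
 * mirroring ((__u64)pid << 32) | (u32)ret_fd in the eBPF code. */
static uint64_t make_pid_fd(uint32_t pid, int fd)
{
    return ((uint64_t)pid << 32) | (uint32_t)fd;
}
```

Either half can be recovered by shifting or truncating the key, which is how the exit handlers correlate events back to a connection.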
Complete Set of System Calls to Hook
To fully implement HTTP request tracing, the system calls that typically need to be hooked include:
socket: Used to capture socket creation for tracking new connections.
bind: Used to obtain port information where the socket is bound.
listen: Used to start listening for connection requests.
accept: Used to accept connection requests and obtain new socket file descriptors.
read: Used to capture received data and check if it contains HTTP requests.
write: Used to capture sent data and check if it contains HTTP responses.
The provided code already covers the tracing of accept and read system calls. To complete HTTP request tracing, additional system calls need to be hooked, and corresponding logic needs to be implemented to handle the parameters and results of these system calls.
In today’s complex technological landscape, system observability has become crucial, especially in the context of microservices and cloud-native applications. This article explores how to leverage eBPF technology for tracing the seven-layer protocols, along with the challenges and solutions that may arise in this process. Here’s a summary of the content covered in this article:
Introduction:
Modern applications often consist of multiple microservices and distributed components, making it essential to observe the behavior of the entire system.
Seven-layer protocols (such as HTTP, gRPC, MQTT, etc.) provide detailed insights into application interactions, but monitoring these protocols can be challenging.
Role of eBPF Technology:
eBPF allows developers to dive deep into the kernel layer for real-time observation and analysis of system behavior without modifying or inserting application code.
eBPF technology offers a powerful tool for monitoring seven-layer protocols, especially in a microservices environment.
Tracing Seven-Layer Protocols:
The article discusses the challenges of tracing seven-layer protocols, including their complexity and dynamism.
Traditional network monitoring tools struggle with the complexity of seven-layer protocols.
Applications of eBPF:
eBPF provides two primary methods for tracing seven-layer protocols: socket filters and syscall tracing.
Both of these methods help capture network request data for protocols like HTTP and analyze them.
eBPF Practical Tutorial:
The article provides a practical eBPF tutorial demonstrating how to capture and analyze HTTP traffic using eBPF socket filters or syscall tracing.
The tutorial covers the development of eBPF programs, the use of the eBPF toolchain, and the implementation of HTTP request tracing.
Through this article, readers can gain a deep understanding of how to use eBPF technology for tracing seven-layer protocols, particularly HTTP traffic. This knowledge will help enhance the monitoring and analysis of network traffic, thereby improving application performance and security. If you’re interested in learning more about eBPF and its practical applications, you can visit our tutorial code repository at https://github.com/eunomia-bpf/bpf-developer-tutorial or our website at https://eunomia.dev/tutorials/ for more examples and complete tutorials.
```
SYSTEM: You will be given a document delimited by triple quotes and a question. Your task is to answer the question using only the provided document, and to cite the passage(s) of the document used to answer the question. If the document does not contain the information needed to answer this question, simply write: "Insufficient information." If an answer to the question is provided, it must be annotated with citations. Use the following format to cite relevant passages ({"citation": …}).

USER: """<insert document>"""
```
Below is an example of how you can modify your code to perform an inline hook for the my_function. This is a simplistic approach and works specifically for this case. This is just an illustrative example. For real-world scenarios, a more complex method would need to be employed, considering thread-safety, re-entrant code, and more.
```c
void inline_hook(void *orig_func, void *hook_func)
{
    // Store the original bytes of the function.
    unsigned char orig_bytes[5];
    memcpy(orig_bytes, orig_func, 5);

    // Make the memory page writable.
    mprotect(get_page_addr(orig_func), getpagesize(),
             PROT_READ | PROT_WRITE | PROT_EXEC);

    // Write a jump instruction at the start of the original function.
    // JMP rel32 takes a 4-byte signed offset relative to the instruction
    // following the jump, hence the -5.
    *((unsigned char *)orig_func) = 0xE9; // JMP opcode
    int32_t offset = (int32_t)((unsigned char *)hook_func -
                               (unsigned char *)orig_func - 5);
    memcpy((unsigned char *)orig_func + 1, &offset, 4);

    // Make the memory page executable only.
    mprotect(get_page_addr(orig_func), getpagesize(),
             PROT_READ | PROT_EXEC);
}
```
In this example, my_function is the original function that is hooked. my_hook_function is the function that gets called instead of my_function. The inline_hook function performs the actual hook by overwriting the start of my_function with a jump (JMP) instruction to my_hook_function.
When you now call my_function in your main, my_hook_function is called instead.
Please note that this code is simplified and makes a few assumptions:
The offset from my_function to my_hook_function fits in the jump instruction's 4-byte signed displacement (roughly ±2 GB). If the two functions are farther apart in memory than that, the relative jump cannot reach the hook.
The first 5 bytes of my_function can be safely overwritten. If there’s a multi-byte instruction that starts within the first 5 bytes but doesn’t end before the 6th byte, this will crash.
The functions my_function and my_hook_function don’t move in memory. If they do (for example, if they’re in a shared library that gets unloaded and reloaded at a different address), the jump instruction will jump to the wrong place and likely crash.
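The get_page_addr helper used above is not shown; a typical implementation (an assumption here, not the original source) simply rounds an address down to its page boundary so mprotect can operate on the containing page:

```c
#include <assert.h>
#include <stdint.h>
#include <unistd.h>

/* Round an address down to the start of its page, since mprotect()
 * requires a page-aligned address. */
static void *get_page_addr(void *addr)
{
    uintptr_t page_size = (uintptr_t)getpagesize();
    return (void *)((uintptr_t)addr & ~(page_size - 1));
}
```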
```console
$ make
$ ./maps
Hello, world!
Hello from hook!
Hello, world!
```
For ARM32
Note that in ARM32, the Program Counter (PC) is usually 2 instructions ahead, which is why we subtract 8 (2 instructions * 4 bytes/instruction) when calculating the offset. This might differ between different ARM versions or modes (Thumb vs ARM, etc.) so please adjust accordingly to your target’s specifics.
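The PC-ahead rule can be made concrete with a small encoder sketch. In the ARM32 B/BL encoding, the 24-bit immediate holds `(target - (instruction_address + 8)) >> 2`; this helper (`arm32_branch_imm24`, a name invented for illustration) computes just that field:

```c
#include <assert.h>
#include <stdint.h>

/* Compute the 24-bit immediate of an ARM32 B/BL instruction placed at
 * `from`, branching to `to`. The PC reads 8 bytes (two instructions)
 * ahead, and the offset is counted in 4-byte instructions. */
static uint32_t arm32_branch_imm24(uint32_t from, uint32_t to)
{
    int32_t offset = (int32_t)(to - (from + 8)) >> 2;
    return (uint32_t)offset & 0x00FFFFFF;
}
```

Branching to the address exactly two instructions ahead therefore encodes an immediate of zero.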
Also, you need to increase the SIZE_ORIG_BYTES from 16 to 20 because the minimal branch instruction in ARM is 4 bytes and you’re going to replace 5 instructions. This is needed because the branch instruction uses a relative offset and you cannot be sure how far your hook function will be. If your function and hook are within 32MB of each other, you could only replace the first 4 bytes with a branch and wouldn’t need to touch the rest.
Remember that manipulating code at runtime can be error-prone and architecture-specific. The code can behave differently based on where it’s loaded in memory, how the compiler has optimized it, whether it’s running in Thumb or ARM mode, and so on. Always thoroughly test the code in the exact conditions where it will be used.
```console
$ make arm
$ ./maps-arm32
Hello, world!
Hello from hook!
Hello, world!
```
For ARM64
Similar to ARM32, ARM64 uses a fixed-width ARM instruction set. However, there are differences and specifics to consider for ARM64. For example, the encoding of the branch instruction is different, and because of the larger address space, you have to create a trampoline for larger offsets that can't be reached by a single branch instruction. The trampoline must sit close enough to the original function to be reached by a branch instruction; from there, it loads the full 64-bit address of the hook function.
```console
$ make arm64
$ ./maps-arm64
Hello, world!
Hello from hook!
Hello, world!
```
````python
# we use LLaMA here, but any GPT-style model will do
llama = guidance.llms.Transformers("your_path/llama-7b", device=0)

# we can pre-define valid option sets
valid_weapons = ["sword", "axe", "mace", "spear", "bow", "crossbow"]

# define the prompt
character_maker = guidance("""The following is a character profile for an RPG game in JSON format.
```json
{
    "id": "{{id}}",
    "description": "{{description}}",
    "name": "{{gen 'name'}}",
    "age": {{gen 'age' pattern='[0-9]+' stop=','}},
    "armor": "{{#select 'armor'}}leather{{or}}chainmail{{or}}plate{{/select}}",
    "weapon": "{{select 'weapon' options=valid_weapons}}",
    "class": "{{gen 'class'}}",
    "mantra": "{{gen 'mantra' temperature=0.7}}",
    "strength": {{gen 'strength' pattern='[0-9]+' stop=','}},
    "items": [{{#geneach 'items' num_iterations=5 join=', '}}"{{gen 'this' temperature=0.7}}"{{/geneach}}]
}```""")

# generate a character
character_maker(
    id="e1f491f7-7ab8-4dac-8c20-c92b5e7d883d",
    description="A quick and nimble fighter.",
    valid_weapons=valid_weapons,
    llm=llama,
)
````
```python
# set the default language model used to execute guidance programs
guidance.llm = guidance.llms.OpenAI("text-davinci-003")

# define the few shot examples
examples = [
    {'input': 'I wrote about shakespeare',
     'entities': [{'entity': 'I', 'time': 'present'},
                  {'entity': 'Shakespeare', 'time': '16th century'}],
     'reasoning': 'I can write about Shakespeare because he lived in the past with respect to me.',
     'answer': 'No'},
    {'input': 'Shakespeare wrote about me',
     'entities': [{'entity': 'Shakespeare', 'time': '16th century'},
                  {'entity': 'I', 'time': 'present'}],
     'reasoning': 'Shakespeare cannot have written about me, because he died before I was born',
     'answer': 'Yes'},
]

# define the guidance program
structure_program = guidance(
'''Given a sentence tell me whether it contains an anachronism (i.e. whether it could have happened or not based on the time periods associated with the entities).
----

{{~! place the real question at the end }}
Sentence: {{input}}
Entities and dates: {{gen "entities"}}
Reasoning:{{gen "reasoning"}}
Anachronism:{{#select "answer"}} Yes{{or}} No{{/select}}''')

# execute the program
out = structure_program(
    examples=examples,
    input='The T-rex bit my dog',
)
```
Your task is to devise up to 5 highly effective goals and an appropriate role-based name (_GPT) for an autonomous agent, ensuring that the goals are optimally aligned with the successful completion of its assigned task.
The user will provide the task, you will provide only the output in the exact format specified below with no explanation or conversation.
Example input: Help me with marketing my business
Example output:
Name: CMOGPT
Description: a professional digital marketer AI that assists Solopreneurs in growing their businesses by providing world-class expertise in solving marketing problems for SaaS, content products, agencies, and more.
Goals:
- Engage in effective problem-solving, prioritization, planning, and supporting execution to address your marketing needs as your virtual Chief Marketing Officer.
- Provide specific, actionable, and concise advice to help you make informed decisions without the use of platitudes or overly wordy explanations.
- Identify and prioritize quick wins and cost-effective campaigns that maximize results with minimal time and budget investment.
- Proactively take the lead in guiding you and offering suggestions when faced with unclear information or uncertainty to ensure your marketing strategy remains on track.
A more common approach is to introduce human supervision and interaction. A human can check on the AI's progress periodically, or whenever needed, to ensure that AutoGPT's behavior conforms to real-world business practices and legal requirements. If the agent's actions do not match human intent, it can be adjusted through dialogue and asked to do things more aligned with that intent (this is, in fact, very common when multiple people collaborate on a task, as in companies and other organizations). Relatively speaking, though, this approach is often inefficient and slow: if I have to supervise the AI to keep it from making mistakes, why not just do the work myself?
Is there a better way?
In the real world, however, there may be a better way to align an AI agent with human intent. Imagine this scenario: you want someone unfamiliar with the workflow of a complex task to complete a specific job — for example, getting started with a code project's development and environment setup, learning a new programming language, writing a full-length novel, or analyzing the feasibility of a business investment. In such cases, we often have a manual or tutorial. It need not be a precise, step-by-step set of instructions, but it contains a rough workflow and task breakdown that lets a person get up to speed quickly. So why can't we, in an equally lightweight way, give the AI some rough directions and task descriptions, and let it complete the corresponding work based on them?
Compared with AutoGPT, what we actually need is:
Stronger controllability, so that the agent is fully aligned with human intent;
Something that goes further than CoT (chain of thought), enabling the AI to complete more complex tasks — not limited to step-by-step execution, but also supporting recursion, loops, conditionals, and so on.
According to Wikipedia, a computer program can be defined as a sequence of instructions that directs each step taken by a computer or any other electronic device capable of processing information. In some sense, this is also a kind of "program", though not a traditional programming language: natural language suits fuzzy, flexible, and efficiently extensible requirements, while a traditional programming language is a precise abstraction and computation. The two are complementary and can be converted into each other, but the conversion does not have to go from a natural-language description to precise machine instructions. In the future, everyone can be a programmer, as long as they can describe the corresponding requirements and steps in natural language, whether clearly or vaguely.
Natural Language Programming
Natural language programming should not be:
```
+++ proc1
-- Return five random emojis
+++

+++ proc2
-- Modify proc1 to return random numbers instead
-- Let $n = [the number of countries in Latin America]
-- Instead of five, use $n
/execute proc1
+++
```
Natural language programming is not — and should not be — programming in a conventional programming language. We do not translate natural language into code; there is no fixed syntax, language, or programming paradigm. The large language model itself serves as our interpreter, CPU, and memory. Natural language suits applications with fuzzy requirements and high information density, while code suits the parts that must be precise and reliable. Put another way, natural language programming is a more advanced form of prompt engineering: the natural-language instructions are no longer confined to a single interaction context with the AI, and we hope to use them to extend the AI's ability to carry out complex reasoning and complex task execution.
```json
{
    "schema_version": "v1",
    "name_for_human": "TODO Plugin",
    "name_for_model": "todo",
    "description_for_human": "Plugin for managing a TODO list. You can add, remove and view your TODOs.",
    "description_for_model": "Plugin for managing a TODO list. You can add, remove and view your TODOs.",
    "auth": {
        "type": "none"
    },
    "api": {
        "type": "openapi",
        "url": "http://localhost:3333/openapi.yaml",
        "is_user_authenticated": false
    },
    "logo_url": "http://localhost:3333/logo.png",
    "contact_email": "[email protected]",
    "legal_info_url": "http://www.example.com/legal"
}
```
| Field | Type | Description | Required |
| --- | --- | --- | --- |
| name_for_model | String | Name the model will use to target the plugin (no spaces allowed, only letters and numbers). 50 character max. | ✅ |
| name_for_human | String | Human-readable name, such as the full company name. 20 character max. | ✅ |
| description_for_model | String | Description better tailored to the model, such as token context length considerations or keyword usage for improved plugin prompting. 8,000 character max. | ✅ |
| description_for_human | String | Human-readable description of the plugin. 100 character max. | ✅ |
| auth | ManifestAuth | Authentication schema | ✅ |
| api | Object | API specification | ✅ |
| logo_url | String | URL used to fetch the logo. Suggested size: 512 x 512. Transparent backgrounds are supported. | ✅ |
| contact_email | String | Email contact for safety/moderation, support, and deactivation | ✅ |
| legal_info_url | String | Redirect URL for users to view plugin information | ✅ |
| HttpAuthorizationType | HttpAuthorizationType | "bearer" or "basic" | ✅ |
| ManifestAuthType | ManifestAuthType | "none", "user_http", "service_http", or "oauth" | |
| interface BaseManifestAuth | BaseManifestAuth | type: ManifestAuthType; instructions: string; | |
| ManifestNoAuth | ManifestNoAuth | No authentication required: BaseManifestAuth & { type: 'none', } | |
```
type ManifestOAuthAuth = BaseManifestAuth & {
  type: 'oauth';

  # OAuth URL where a user is directed to for the OAuth authentication flow to begin.
  client_url: string;

  # OAuth scopes required to accomplish operations on the user's behalf.
  scope: string;

  # Endpoint used to exchange OAuth code with access token.
  authorization_url: string;

  # When exchanging OAuth code with access token, the expected header 'content-type'.
  # For example: 'content-type: application/json'
  authorization_content_type: string;

  # When registering the OAuth client ID and secrets, the plugin service
  # will surface a unique token.
  verification_tokens: {
    [service: string]?: string;
  };
}
```
Some fields in the manifest above have length limits, and these limits are subject to change. We also enforce a 100,000-character maximum on API response bodies; this value may also change over time.
```yaml
openapi: 3.0.1
info:
  title: TODO Plugin
  description: A plugin that allows the user to create and manage a TODO list using ChatGPT.
  version: 'v1'
servers:
  - url: http://localhost:3333
paths:
  /todos:
    get:
      operationId: getTodos
      summary: Get the list of todos
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/getTodosResponse'
components:
  schemas:
    getTodosResponse:
      type: object
      properties:
        todos:
          type: array
          items:
            type: string
          description: The list of todos.
```