Troubleshooting HTTP Request Timeout Issues Caused by HTTP Headers

In actual business, we encountered a very strange problem: when service A accessed Go language service B through HTTP requests, a small portion of the requests would time out. Further analysis revealed that if a request timed out, its retry would definitely time out as well, indicating that for specific request content, the timeout was an inevitable problem. By checking the processing logs of service B, it was found that for timed-out requests, the processing time of the business logic was normal.

Initially, we used the process of elimination to analyze, gradually replacing suspected problematic modules, but we couldn’t locate the problem. Later, through packet capture, we analyzed the differences between normal packets and timeout packets, made reasonable guesses about the problematic parts, and verified them. We finally located that it was the Expect: 100-continue request HTTP header that caused the timeout here. The entire troubleshooting and fixing process encountered many pitfalls, which are recorded here for everyone’s reference.

WireShark packet capture of HTTP expect: 100-continue

Read More

Dive into the ChatGPT Data Leak caused by Redis Bug

On March 20, 2023, OpenAI’s ChatGPT service was interrupted for a period of time. Subsequently, OpenAI published an announcement March 20 ChatGPT outage: Here’s what happened explaining the ins and outs of this incident. In the announcement, OpenAI detailed the scope of impact, remediation strategies, some technical details, and improvement measures for this incident, which is quite worth learning from.

The specific timeline of this incident handling is also publicly available on ChatGPT Web Interface Incident, as shown in the following image:

Overall timeline of ChatGPT fault repair

This fault was caused by a bug in Redis’s Python client, and there are discussions about this bug on Github. The fixing process of this bug was not smooth, with many discussions and fix attempts, such as Issue 2624, PR 2641, Issue 2665, PR 2695. After reading these, I still couldn’t fully understand the fix here, so I had to delve into the code to see what exactly happened with the bug’s cause and fixing process, and organize it into this article.

Read More

How to Bypass ChatGPT's Security Checks

Large language models (LLMs) like ChatGPT have made significant breakthroughs this year and are now playing important roles in many fields. As prompts serve as the medium for interaction between humans and large language models, they are also frequently mentioned. I’ve written several articles discussing best practices for prompts in ChatGPT, such as the first one: GPT4 Prompting Technique 1: Writing Clear Instructions.

However, as our understanding and use of these large language models deepen, new issues are beginning to surface. Today, we’ll explore one important issue: prompt attacks. Prompt attacks are a new type of attack method, including prompt injection, prompt leaking, and prompt jailbreaking. These attack methods may lead to models generating inappropriate content, leaking sensitive information, and more. In this blog post, I will introduce these attack methods in detail to help everyone gain a better understanding of the security of large language models.

ChatGPT Prompt Attacks

Read More