
China’s radar-killer drone outperforms AI and humans with new ChatGPT-like brain

Updated: 01-11-2024, 06:19 PM

China’s Chengdu Aircraft Design Institute, the organization that designed the J-20 fighter jet, has allegedly developed a large language model (LLM) for electronic warfare (EW) drones.

The LLM, similar to ChatGPT, can interfere with enemy radar and radio communications at lightning-fast speed.

According to tests, LLM-powered decision-making surpasses traditional artificial intelligence (AI), such as reinforcement learning. It allegedly also proved to be vastly superior to experienced human EW experts.

According to the South China Morning Post (SCMP), the new LLM was jointly developed by Chengdu, the Aviation Industry Corporation of China, and Northwestern Polytechnical University in Xi'an, Shaanxi province.

Information about the model and testing was published on October 24 in the peer-reviewed Journal of Detection & Control, a Chinese-language publication. According to the paper, the work is still experimental but shows promise.

LLM-powered electronic warfare

The LLM dramatically boosts the speed at which the drone can perform EW thrusts and parries when engaging enemy targets. This includes attempting to suppress enemy radar installations using specific electromagnetic signals.

The defender will try to evade the attack by constantly changing signals, forcing the opponent to adapt, usually in real time. You can liken it to how the Borg adapt to energy weapon frequencies in Star Trek.

The new LLM has been designed to tip the balance in favor of the attacker by reducing reaction times. The choice of an LLM is interesting: until now, LLMs were believed to be incapable of such tasks, especially interpreting sensor-collected data.

LLMs have also been shown to react slowly, with contemplation times far longer than the millisecond responses typically needed. If the team's claims are true, however, they appear to have overcome this. But how?

According to the paper, the first step was to train the LLM using a “wealth” of books on EW. This includes a “book series on radar, electronic warfare, and related literature collections.”

They also fed the model more sensitive information, like air combat records, weapons inventory set-up records, and electronic warfare operation manuals. According to the researchers, most of the material used was in Chinese.

Lightning-fast responses

To speed up decisions, the team coupled the LLM with a raw data processor that translates incoming sensor data into inputs the model can reason over. A translator then converts the LLM's outputs into instructions for the EW jamming gear.
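The reported architecture can be pictured as a three-stage loop: preprocess raw sensor data into text, let the model pick a countermeasure, then translate that choice into jammer commands. The sketch below is purely illustrative; the class names, decision rule, and parameters are assumptions, and a mock function stands in for the actual LLM, whose details are not public.

```python
# Hypothetical sketch of the reported pipeline: sensor data -> text prompt
# -> LLM decision -> jammer command. All names and logic are illustrative.
from dataclasses import dataclass


@dataclass
class RadarObservation:
    frequency_ghz: float   # detected emitter frequency
    pulse_width_us: float  # pulse width in microseconds


def preprocess(obs: RadarObservation) -> str:
    """Translate raw sensor readings into a textual prompt for the model."""
    return (f"Emitter at {obs.frequency_ghz:.2f} GHz, "
            f"pulse width {obs.pulse_width_us:.1f} us. Choose countermeasure.")


def mock_llm_decide(prompt: str) -> str:
    """Stand-in for the LLM: a real system would query a fine-tuned model."""
    freq = float(prompt.split(" GHz")[0].split()[-1])
    if freq > 8.0:
        return "false_targets"   # spoof decoy returns against high-band radar
    return "noise_jamming"       # fall back to barrage noise


def translate(decision: str, obs: RadarObservation) -> dict:
    """Convert the model's textual decision into jammer parameters."""
    return {"mode": decision, "center_freq_ghz": obs.frequency_ghz}


# One pass through the loop
obs = RadarObservation(frequency_ghz=9.4, pulse_width_us=1.2)
command = translate(mock_llm_decide(preprocess(obs)), obs)
print(command)  # {'mode': 'false_targets', 'center_freq_ghz': 9.4}
```

The translator layers on both sides are what let a slow, text-native model sit inside a fast control loop: the model never touches raw waveforms, only compact textual summaries.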

The research team claims test results confirm the feasibility of this technology. They also claim that combining reinforcement learning algorithms with generative AI allows the system to adjust attack strategies up to ten times per second.

The team also found that LLMs are more effective at generating numerous false targets on enemy radar screens compared to traditional AI and human expertise. This strategy is more advantageous in EW than simply suppressing radar signals with noise or redirecting radar waves away from real targets.

“They still have many practical issues to address, including chips, the size of the model, and security risks,” an unnamed Beijing-based AI scientist told SCMP. “But there’s little doubt that ‘words can kill’ is evolving from a philosophical concept to reality,” he added.
