Logic is an abstract expression that gives rise to living things and to systems. Living things, and most functioning things including machines, embody logics and operate according to them. In the universe, different logics can conflict with one another, so logics must be updated. Evolution is itself a logic, and it mediates how living things update theirs. In the computer world, logic is updated through software updates. A logic that is not updated is eventually overcome by other logics: the organism dies, the software is hacked.

Computing often borrows examples from biology: computers develop in ways similar to biological systems and often share characteristics with living beings. Humans are primarily responsible for developing computers, so the detection of bugs in software depends on human perception and on the quality of the tools people use.

Errors found in software are, at bottom, logic errors, and the more logic a program contains, the greater the margin for error. Some logic errors are technically obvious at a glance, such as a variable set to the wrong value. Others hide in the sequence of steps a function executes. It is easier to detect an error in a single state than one spread across several steps: finding a bug that emerges over a succession of events requires a person to hold that entire succession in mind, and while also thinking about the other technical details of the code, humans find it difficult to keep those events in memory.

Recently emerged artificial intelligence tools are well suited to analyzing such successions of events. Where humans struggle to debug a long chain of steps, an AI model has a larger working memory, its context window, and can analyze more code and state at a time.

Recently, a bug in the Linux kernel's ksmbd SMB server (CVE-2025-37899) was found with OpenAI's o3 model. It is a use-after-free bug, a class of error that arises from successive steps: an object is freed at one point in execution and then used at a later one. The discovery is an example of artificial intelligence succeeding at bug detection thanks to its large working memory.

The use of artificial intelligence in cybersecurity is still an active research topic, and its full potential has yet to be realized. It is not yet known what impact this cutting-edge technology will have or what vulnerabilities it will expose, and this uncertainty has attracted the attention of major defense agencies. DARPA, the research agency of the United States Department of Defense, organized a competition called the DARPA AI Cyber Challenge (AIxCC). Announced in 2023, it provided researchers with the support needed to pursue this work. The final will be held in 2025 at DEF CON 33, where the results of two years of research will be examined, and a total prize pool of 20 million dollars will be distributed among the successful teams. These prizes aim to encourage innovative work in the field of AI-based cybersecurity.

To summarize, the use of artificial intelligence in cybersecurity is still a research topic, and what it can achieve is still being explored. What sets it apart is its large working memory, which lets it track long successions of events in a way humans cannot.