A systematic review published in Frontiers of Engineering Management (2025) reveals that large language models (LLMs) present both transformative potential and serious security and ethical risks that require immediate attention from developers, regulators, and users. The study, conducted by researchers from Shanghai Jiao Tong University and East China Normal University, analyzed 73 key papers from over 10,000 documents to map the hidden threats behind these increasingly ubiquitous AI systems.
The research published https://doi.org/10.1007/s42524-025-4082-6 categorizes LLM-related security threats into two major domains: misuse-based risks and malicious attacks targeting the models themselves. Misuse includes phishing emails crafted with near-native fluency, automated malware scripting, identity spoofing, and large-scale false information production. Malicious attacks occur at both data/model levels—such as model inversion, poisoning, and extraction—and user interaction levels including prompt injection and jailbreak techniques that can bypass safety filters or induce harmful content output.
These findings matter because LLMs like GPT, BERT, and T5 have become central tools in writing, coding, and problem-solving across sectors ranging from education and healthcare to digital governance. Their ability to generate fluent, human-like text enables automation and accelerates information workflows, but this same capability increases exposure to cyber-attacks, model manipulation, misinformation, and biased outputs that can mislead users or amplify social inequalities. Academic researchers warn that without systematic regulation and defense mechanisms, LLM misuse may threaten data security, public trust, and social stability.
The study evaluates existing defense strategies including adversarial training, input preprocessing, and watermark-based detection. Detection technologies like semantic watermarking and CheckGPT can identify model-generated text with up to 98–99% accuracy. However, the review finds these defenses often lag behind evolving attack techniques, indicating an urgent need for scalable, low-cost, multilingual-adaptive solutions. The researchers emphasize that technical safeguards must coexist with ethical governance, arguing that hallucination, bias, privacy leakage, and misinformation are social-level risks, not merely engineering problems.
The implications of this research extend across industries and society. Secure and ethical development of LLMs will shape how societies adopt AI, with robust defense systems potentially protecting financial systems from sophisticated phishing attacks, reducing medical misinformation, and maintaining scientific integrity. The researchers suggest that watermark-based traceability and red-teaming may become industry standards for model deployment. They encourage future work toward AI responsible governance, unified regulation frameworks, safer training datasets, and model transparency reporting to ensure LLMs evolve into reliable tools supporting education, digital healthcare, and innovation ecosystems while minimizing risks linked to cybercrime and social misinformation.



