
At this year’s Mobile World Congress (MWC), Artificial Intelligence (AI) was a hot topic across the conference halls in Barcelona. Alongside the excitement about the potential benefits of generative AI, its evil twin, the deepfake, received equal attention for the threats it poses to the telecom industry, and to network security in particular.

AI and deepfake technology present unprecedented challenges for network security as they can be used to manipulate, deceive, and compromise digital assets, posing serious threats to both individuals and organisations. Navigating these threats requires a multifaceted approach that combines technological advancements, robust cybersecurity protocols, and heightened awareness among users, says Vadim Eretsky, Enghouse Product Manager.
Firstly, AI itself can be both an advantage and a disadvantage for network security. While AI-powered systems can enhance threat detection and response capabilities, they can also be exploited by malicious actors to automate attacks and evade traditional security measures. Organisations must therefore deploy AI-driven defence mechanisms that continuously adapt to evolving threats. Deepfake technology, on the other hand, introduces a new dimension of deception into the cybersecurity landscape. By using AI algorithms to generate highly realistic but entirely fabricated audio or video content, deepfakes can be used to manipulate public opinion, impersonate individuals, and undermine trust in digital communications.
Because the very essence of deepfakes is to deceive, education and awareness play a critical role in navigating the threats of AI and deepfakes in network security. By empowering users with the knowledge and skills to recognise and respond to suspicious activity, organisations can significantly reduce the likelihood of successful cyber-attacks. But the war against deepfakes cannot be won in isolation. Greater collaboration and information sharing within the cybersecurity community are needed to respond quickly and defend proactively against emerging threats. Cross-sector partnerships between industry, government agencies, and research institutions are essential for developing effective countermeasures and regulatory frameworks to mitigate the risks posed by AI-driven cyber threats.

Protecting your network against deepfakes requires organisations to fortify their defences with advanced detection and protection techniques, such as those provided by Session Border Controller (SBC) technology. SBCs are network devices that regulate and secure real-time communication sessions over IP networks, such as VoIP calls and video. In the context of AI and deepfake threats, SBCs provide several key functionalities that contribute to network security:
- Traffic Encryption and Privacy: SBCs incorporate encryption capabilities to secure the transmission of sensitive data across the network. This protects against eavesdropping and data interception, reducing the risk of unauthorised access to communications that could be manipulated or exploited by AI-driven attacks or deepfake content distribution (first sketch below).
- Deep Packet Inspection (DPI): SBCs can perform DPI to analyse and inspect the content of communication packets in real time. By examining the characteristics of data flows, SBCs can detect anomalies, patterns indicative of AI-generated content, or signs of deepfake manipulation, enabling proactive threat detection and response (second sketch below).
- Quality of Service (QoS) Management: SBCs help optimise the performance and reliability of real-time communication sessions by prioritising traffic, ensuring adequate bandwidth allocation, and managing network congestion. By maintaining quality of service, SBCs limit the impact of AI-driven attacks or deepfake content distribution on critical communications, such as emergency calls or business-critical meetings (third sketch below).
- Policy Enforcement and Compliance: SBCs enforce security policies and compliance regulations governing communication sessions, such as data protection laws or industry-specific rules. By enforcing policies on data privacy, integrity, and confidentiality, SBCs reduce the risks associated with AI-driven cyber threats and deepfake content distribution while keeping organisations compliant with regulatory requirements (final sketch below).
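To make the encryption point concrete, here is a minimal sketch, in Python, of what insisting on encrypted signalling looks like at the transport level: a client opens a TLS-protected connection to a SIP-over-TLS listener and refuses legacy protocol versions. The hostname sip.example.com and port 5061 are placeholders, and production SBCs handle TLS and SRTP natively; this only illustrates the principle.

```python
import socket
import ssl

# Placeholder endpoint: a SIP-over-TLS (SIPS) listener, conventionally on port 5061.
SBC_HOST = "sip.example.com"   # hypothetical hostname, for illustration only
SBC_PORT = 5061

def open_encrypted_signalling_channel(host: str, port: int) -> ssl.SSLSocket:
    """Open a TLS-protected TCP connection, rejecting legacy protocol versions."""
    context = ssl.create_default_context()            # verifies the server certificate
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3 / TLS 1.0 / TLS 1.1
    raw_sock = socket.create_connection((host, port), timeout=5)
    return context.wrap_socket(raw_sock, server_hostname=host)

if __name__ == "__main__":
    with open_encrypted_signalling_channel(SBC_HOST, SBC_PORT) as tls_sock:
        print("Negotiated:", tls_sock.version(), tls_sock.cipher()[0])
```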
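The deep packet inspection bullet can be illustrated with a simplified, assumption-laden sketch: the function below parses the fixed 12-byte RTP header (RFC 3550) from a raw media packet and flags basic anomalies, such as an unexpected payload type or an implausible sequence-number jump. The expected payload types and thresholds are invented for the example; a real SBC's DPI engine combines far more signals.

```python
import struct

# Illustrative values only; a real DPI engine uses many more signals.
EXPECTED_PAYLOAD_TYPES = {0, 8, 9, 111}   # e.g. PCMU, PCMA, G.722, Opus (assumed profile)
MAX_SEQ_JUMP = 500                        # flag implausible sequence-number gaps

def inspect_rtp_packet(data: bytes, last_seq: int | None) -> tuple[list[str], int]:
    """Parse the fixed 12-byte RTP header and return (warnings, sequence number)."""
    if len(data) < 12:
        return ["packet too short to be RTP"], last_seq or 0

    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", data[:12])
    version = b0 >> 6
    payload_type = b1 & 0x7F

    warnings = []
    if version != 2:
        warnings.append(f"unexpected RTP version {version}")
    if payload_type not in EXPECTED_PAYLOAD_TYPES:
        warnings.append(f"unexpected payload type {payload_type}")
    if last_seq is not None and abs(seq - last_seq) > MAX_SEQ_JUMP:
        warnings.append(f"sequence jump {last_seq} -> {seq}")
    return warnings, seq
```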
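For the QoS point, one concrete mechanism is DSCP marking: outbound media packets carry a Differentiated Services value (Expedited Forwarding, DSCP 46, is the conventional marking for voice) so that routers along the path can prioritise them. The sketch below sets that mark on a UDP socket using the standard IP_TOS option (available on Linux); the destination address and port are placeholders.

```python
import socket

DSCP_EF = 46               # Expedited Forwarding, conventionally used for voice media
TOS_VALUE = DSCP_EF << 2   # DSCP occupies the top six bits of the IPv4 TOS byte

# Placeholder destination for an outbound RTP media stream (TEST-NET-2 address).
MEDIA_DEST = ("198.51.100.10", 40000)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Mark outgoing datagrams so network elements along the path can prioritise them.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
# Send a minimal 12-byte RTP-like payload purely to demonstrate the marked socket.
sock.sendto(b"\x80\x00" + bytes(10), MEDIA_DEST)
sock.close()
```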
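Finally, policy enforcement at session setup can be pictured as a rule check before a call is admitted. The sketch below is a toy example with invented fields and rules (an allow-listed caller domain, encrypted media, recording consent); real SBC policy engines are configuration-driven and far richer.

```python
from dataclasses import dataclass

@dataclass
class SessionRequest:
    caller_domain: str
    media_encrypted: bool      # e.g. SRTP offered in the session description
    recording_consent: bool    # e.g. required by an assumed compliance policy

# Assumed allow-list of trusted peer domains, purely for illustration.
ALLOWED_DOMAINS = {"example.com", "partner.example.net"}

def admit(session: SessionRequest) -> tuple[bool, str]:
    """Return (allowed, reason) after checking the request against simple policies."""
    if session.caller_domain not in ALLOWED_DOMAINS:
        return False, "caller domain not on the allow-list"
    if not session.media_encrypted:
        return False, "unencrypted media rejected by policy"
    if not session.recording_consent:
        return False, "compliance policy requires recording consent"
    return True, "admitted"

print(admit(SessionRequest("example.com", media_encrypted=True, recording_consent=True)))
```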
The rise of AI and deepfake technologies presents complex challenges for network security, but with proactive measures and collaborative effort, these threats can be mitigated. SBC technology plays a critical role in enhancing network security by providing traffic encryption, deep packet inspection, QoS management, and policy enforcement, alongside core access control and authentication functions.
