
Special issue on privacy and security challenges of generative AI


ITU Journal: Privacy and security challenges of generative AI

Theme

The use of Large Language Model (LLM) and Generative Pre-trained Transformer (GPT) technology based on Generative Artificial Intelligence (GAI) has become ubiquitous across industries and societal domains, owing to its powerful capabilities for extracting, processing and expanding data, information and knowledge. GAI can address the escalating demands of our digital life in terms of cost, power, capacity, coverage, latency, efficiency, flexibility, compatibility, and quality of experience and services.

However, as GAI application systems proliferate, privacy and security concerns have assumed an increasingly pivotal role in their rapid development and massive deployment. Private and secure generative AI technology not only prevents unauthorized use of data and model parameters but also safeguards highly sensitive, proprietary, classified or private information during both the training and inference phases. Adherence to security standards and privacy laws, such as the European GDPR rules or the US HIPAA rules, is crucial.

Fully Homomorphic Encryption (FHE) emerges as the most promising solution to privacy and security concerns in GAI. Unlike conventional GAI, which operates on plaintext, FHE-based GAI carries out all computations and operations on encrypted ciphertexts. However, this comes with a substantial increase in implementation complexity, on the order of 1,000 times compared with plaintext processing. Consequently, it imposes great limitations and challenges on the processing architecture, memory access, computational capability, inference latency, data interfaces and bandwidths of the hardware and silicon for FHE-based GAI. Realizing secure and private GAI is a very challenging task that requires significant efforts from the related industry, research community and regulatory authorities.
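The key property FHE generalizes is that operations on ciphertexts translate into the same operations on the hidden plaintexts. As a minimal illustration of this homomorphic property, the following sketch uses a toy additively homomorphic one-time-pad scheme; it is not FHE, is not secure for key reuse, and all names in it are illustrative, not from any real FHE library:

```python
import random

MODULUS = 2**32  # toy plaintext/ciphertext space for illustration only


def encrypt(m: int, key: int) -> int:
    # Additive one-time-pad style encryption: c = m + k (mod n)
    return (m + key) % MODULUS


def decrypt(c: int, key: int) -> int:
    return (c - key) % MODULUS


# Clients encrypt their private inputs under independent random keys.
k1 = random.randrange(MODULUS)
k2 = random.randrange(MODULUS)
c1 = encrypt(20, k1)
c2 = encrypt(22, k2)

# A server adds the ciphertexts without ever seeing the plaintexts.
c_sum = (c1 + c2) % MODULUS

# Decrypting with the combined key recovers the sum of the plaintexts.
result = decrypt(c_sum, (k1 + k2) % MODULUS)
assert result == 20 + 22
```

Real FHE schemes (e.g. BGV, BFV, CKKS, TFHE) support both addition and multiplication on ciphertexts, which is what enables arbitrary circuits such as neural-network inference, and it is exactly that generality that incurs the roughly thousandfold overhead discussed above.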

This special issue aims to catalyze and steer the advancement of novel and improved systems to enable private and secure Generative AI, by fostering collaboration among scientists, engineers, broadcasters, manufacturers, software developers, and other related professionals.

Keywords

Fully Homomorphic Encryption (FHE), information security, data privacy, machine learning, neural networks, Generative Pre-trained Transformer (GPT), Large Language Model (LLM), Generative Artificial Intelligence (GAI), learning and inference, fine-tuning, transfer learning, attention and query

Suggested topics (but not limited to)

Algorithms, architectures and applications:
  • Encryption and decryption for private and secure GAI
  • Pipelining, parallel and distributed processing with algorithm-hardware co-design
  • Ciphertext-data-driven programming platforms and models
  • New computing architectures, memory access and data interfaces
  • FHE-based training and inference in GAI
Deployment, standardization and development: 
  • Standardization, technical regulations and specifications for GAI
  • Secure multi-party computation, differential privacy and federated learning
  • FHE-based development libraries and open-source software
Information and signal processing:
  • Learning with errors (LWE), bootstrapping and programmable bootstrapping
  • Private LLMs with fine-tuning, transfer learning and low-rank adaptation
  • FHE-based transforms and neural networks

Download the full call for papers here.

Leading Guest Editor

Fa-Long Luo, University of Washington, USA

Guest Editors

Rosario Cammarota, Intel Labs, USA
Paul Master, Cornami, USA
Nir Drucker, IBM-Europe, Israel
Donghoon Yoo, Desilo, Korea
Konstantinos Plataniotis, University of Toronto, Canada










View and download the articles of this special issue freely.

Volume 6, Issue 3, September 2025
