Members of Matsuura Laboratory (graduates from 2008 onward)
Graduated/Left in March 2025
廣澤 佑亮
Research Topics
Publications
内藤 晋作
Research Topics
Publications
林 リウヤ
Research Topics
Publications
abstract
Private information retrieval (PIR) allows a client to obtain records from a database without revealing the retrieved index to the server.
In the single-server model, it has been known that (plain) PIR is vulnerable to selective failure attacks, where a (malicious) server tries to learn information about an index by observing a client's decoded result.
Recently, as one solution for this problem, Ben-David et al. (TCC 2022) proposed verifiable PIR (vPIR) that allows a client to verify that the queried database satisfies certain properties.
However, the existing vPIR scheme is not practically efficient, especially in the multi-query setting, where a client makes multiple queries to a server to retrieve records either in parallel or in sequence.
In this paper, we introduce a new formalization of multi-query vPIR and provide an efficient scheme based on authenticated PIR (APIR) and succinct non-interactive arguments of knowledge (SNARKs).
More precisely, thanks to a useful property of APIR, the communication cost of our multi-query vPIR scheme is O(n|a| + |π|), where n is the number of queries, |a| is the APIR communication size, and |π| is the SNARK proof size.
That is, the communication includes only one SNARK proof. In addition to this result, to show the effectiveness of our multi-query vPIR scheme in a real-world scenario, we present a practical application of vPIR on the online certificate status protocol (OCSP) and provide a comprehensive theoretical evaluation on our scheme in this scenario.
Especially in the setting of our application, we observe that integrating SNARK proofs (for verifiability) does not significantly increase the communication cost.
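To make the cost claim concrete, here is a back-of-the-envelope comparison; the per-query-proof baseline is an illustrative assumption for contrast, not a scheme from the paper:

```latex
% Proposed multi-query vPIR: n APIR answers plus a single SNARK proof in total.
\[
  C_{\mathrm{ours}}(n) = n\,|a| + |\pi|
  \qquad \text{vs.} \qquad
  C_{\mathrm{naive}}(n) = n\,\bigl(|a| + |\pi|\bigr),
\]
% so naively attaching one proof per query would cost an extra
\[
  C_{\mathrm{naive}}(n) - C_{\mathrm{ours}}(n) = (n-1)\,|\pi|.
\]
```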
abstract
While the integrity of digital data can be ensured via digital signatures,
ensuring the integrity of physical data,
i.e., objects, is a more challenging task.
For example, constructing a digital signature on data extracted
from an object does not necessarily guarantee that an adversary
has not tampered with the object or replaced it
with a cleverly constructed counterfeit.
This paper proposes a new concept called signatures for objects
to guarantee the integrity of objects cryptographically.
We first need to consider a mechanism that allows us to mathematically
treat objects which exist in the physical world.
Thus, we define a model called an object setting in which
we define physical actions, such as a way to extract data
from objects and test whether two objects are identical.
Modeling these physical actions via oracle access enables
us to naturally enhance probabilistic polynomial-time algorithms
into algorithms having access to objects; we call
these physically enhanced algorithms (PEAs).
Based on the above formalization, we introduce two security definitions
for adversaries modeled as PEAs.
The first is unforgeability, which is a natural extension
of EUF-CMA security, meaning that no adversary
can forge a signature for objects.
The second is confidentiality, which is a privacy notion,
meaning that signatures do not leak any information about signed objects.
With these definitions in hand,
we show two generic constructions: one satisfies unforgeability
by signing extracted data from objects; the other satisfies unforgeability
and confidentiality by combining a digital signature with obfuscation.
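As a rough Python sketch of the first generic construction above (sign the data extracted from an object, and verify by re-extracting); the extract oracle and the choice of Ed25519 are illustrative assumptions, not the paper's instantiation:

```python
# Minimal sketch: a PEA signer touches the object only via the extract oracle.
# `extract` models the physical data-extraction action; Ed25519 is an
# arbitrary stand-in for the underlying digital signature scheme.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_object(sk, obj, extract):
    return sk.sign(extract(obj))          # sign the extracted data

def verify_object(pk, obj, sig, extract):
    try:
        pk.verify(sig, extract(obj))      # re-extract and check the signature
        return True
    except Exception:
        return False

# Usage with a trivial "object" whose extractable data is a byte string.
sk = Ed25519PrivateKey.generate()
pk = sk.public_key()
extract = lambda obj: obj["surface_data"]      # hypothetical oracle
obj = {"surface_data": b"microscopic fingerprint"}
sig = sign_object(sk, obj, extract)
print(verify_object(pk, obj, sig, extract))    # True
```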
Graduated/Left in July 2024
Graduated/Left in March 2024
石井 龍
Research Topics
Publications
abstract
Fault-tolerant aggregate signature (FT-AS) is a special type of aggregate signature
that is equipped with the functionality for tracing signers who generated invalid signatures
in the case an aggregate signature is detected as invalid.
In existing FT-AS schemes (whose tracing functionality requires multiple rounds),
a verifier needs to send feedback to an aggregator for efficiently tracing the invalid signer(s).
However, in practice, if this feedback does not reach the aggregator
in a sufficiently fast and timely manner, the tracing process will fail.
Therefore, it is important to estimate whether this feedback can be returned and
received in time on a real system.
In this work, we measure the total processing time required for the feedback
by implementing an existing FT-AS scheme, and evaluate whether the scheme works
without problems in real systems. Our experimental results show that the time
required for the feedback is 605.3 ms for a typical parameter setting,
which indicates that if the acceptable feedback time is significantly larger than a few hundred ms,
the existing FT-AS scheme would effectively work in such systems.
However, there are situations where such feedback time is not acceptable,
in which case the existing FT-AS scheme cannot be used. Therefore,
we further propose a novel FT-AS scheme that does not require any feedback.
We also implement our new scheme and show that feedback is completely eliminated
in this scheme, but the size of its aggregate signature (affecting the communication cost
from the aggregator to the verifier) is 144.9 times larger than that of the existing FT-AS scheme
(with feedback) for a typical parameter setting;
there is thus a trade-off between the feedback waiting time of the existing FT-AS scheme
and the communication cost from the aggregator to the verifier.
abstract
A fault-tolerant aggregate signature (FT-AS) scheme is
a variant of an aggregate signature scheme with the additional functionality
to trace signers that create invalid signatures in case an aggregate signature is invalid.
Several FT-AS schemes have been proposed so far,
and some of them trace such rogue signers in multiple rounds,
i.e., the setting where the signers repeatedly send their individual signatures.
However, it has been overlooked that there exists a potential attack on the efficiency
of bandwidth consumption in a multi-round FT-AS scheme.
Since one of the merits of aggregate signature schemes is the efficiency of bandwidth consumption,
such an attack might be critical for multi-round FT-AS schemes.
In this paper, we propose a new multi-round FT-AS scheme that is tolerant of such an attack.
We implement our scheme and experimentally show that it is more efficient
than the existing multi-round FT-AS scheme if rogue signers randomly create invalid signatures
with low probability, which for example captures spontaneous failures of devices in IoT systems.
abstract
Aggregate signature schemes enable us to aggregate multiple signatures
into a single short signature. One of their typical applications is sensor networks,
where a large number of users and devices measure their environments,
create signatures to ensure the integrity of the measurements,
and transmit their signed data. However, if an invalid signature is
mixed into the aggregation, the whole aggregate signature becomes invalid;
thus, when an aggregate signature is invalid, it is necessary to identify
the invalid individual signatures. Furthermore, we need to deal with a situation
where an invalid sensor generates invalid signatures probabilistically.
In this paper, we introduce a model of aggregate signature schemes with
interactive tracing functionality that captures such a situation,
define its functional and security requirements, and propose aggregate
signature schemes that can identify all rogue sensors. More concretely,
based on the idea of Dynamic Traitor Tracing, we can trace rogue sensors
dynamically and incrementally, and eventually identify all rogue sensors
generating invalid signatures even if the rogue sensors adaptively collude.
In addition, our proposed method is sufficiently efficient for practical use.
abstract
Fault-tolerant aggregate signature (FT-AS) is a special type of
aggregate signature that is equipped with the functionality for
tracing signers who generated invalid signatures in the case an
aggregate signature is detected as invalid. In existing FT-AS
schemes (whose tracing functionality requires multiple rounds), a
verifier needs to send feedback to an aggregator for efficiently
tracing the invalid signer(s). However, in practice, if this feedback
does not reach the aggregator in a sufficiently fast and timely
manner, the tracing process will fail. Therefore, it is important to
estimate whether this feedback can be returned and received in time on a real system.
In this work, we measure the total processing time required for the
feedback by implementing an existing FT-AS scheme, and evaluate
whether the scheme works without problems in real systems.
Our experimental results show that the time required for the feedback
is 605.3 ms for a typical parameter setting, which indicates that
if the acceptable feedback time is significantly larger than a few hundred ms,
the existing FT-AS scheme would effectively work in such systems. However,
there are situations where such feedback time is not acceptable,
in which case the existing FT-AS scheme cannot be used. Therefore,
we further propose a novel FT-AS scheme that does not require any feedback.
We also implement our new scheme and show that feedback in this scheme
is completely eliminated, but the size of its aggregate signature
(affecting the communication cost from the aggregator to the verifier)
is 144.9 times larger than that of the existing FT-AS scheme (with feedback)
for a typical parameter setting; there is thus a trade-off between the feedback
waiting time of the existing FT-AS scheme and the communication cost
from the aggregator to the verifier.
abstract
Aggregate signature schemes enable us to aggregate multiple signatures into a single short signature.
One of their typical applications is sensor networks, where a large number of users and devices measure their environments,
create signatures to ensure the integrity of the measurements, and transmit their signed data.
However, if an invalid signature is mixed into the aggregation, the whole aggregate signature becomes invalid;
thus, when an aggregate signature is invalid, it is necessary to identify the invalid individual signatures.
Furthermore, we need to deal with a situation where an invalid sensor generates invalid signatures probabilistically.
In this paper, we introduce a model of aggregate signature schemes with interactive tracing functionality that captures such a situation,
define its functional and security requirements, and propose aggregate signature schemes that can identify all rogue sensors.
More concretely, based on the idea of Dynamic Traitor Tracing, we can trace rogue sensors dynamically and incrementally,
and eventually identify all rogue sensors generating invalid signatures even if the rogue sensors adaptively collude.
In addition, our proposed method is sufficiently efficient for practical use.
五十嵐 太一
Research Topics
Publications
abstract
In recent years, the number of crimes using smart contracts
has increased. In particular, fraud using tokens, such as rug pulls,
has become a non-negligible issue in the field of decentralized finance because many
users have been scammed. Therefore, constructing a detection system
for scam tokens is an urgent need. Existing methods are based on machine
learning, and they use transaction and liquidity data as features.
However, they cannot completely remove the risk of being scammed because
these features can be extracted only after scam tokens are deployed
to the blockchain. In this paper, we propose a scam token detection system
based on static analysis. In order to detect scam tokens before deployment,
we utilize code-based data, such as bytecodes and opcodes, because they can
be obtained before contract deployment. Since N-grams
include information regarding the order of code sequences, and scam tokens
exhibit characteristic orderings of code-based data, we adopt their N-grams
as features. Furthermore, for the purpose of achieving high detection
performance, each feature is categorized into a scam-oriented feature or
a benign-oriented one to make differences in the values of feature vectors
between scam and benign tokens. Our results show the effectiveness of
code-based data for detection by achieving a higher F1-score compared
to code-based methods from a related field of fraud detection in Ethereum.
In addition, we also confirmed in our experiments that the code most effective
for detection is located near the start of the runtime code.
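The feature extraction described above can be illustrated with a small sketch; the opcode values, n = 2, and the reduction of the scam-/benign-oriented categorization to signed counts are illustrative assumptions, not the paper's exact procedure:

```python
# Sketch of opcode N-gram features for scam token detection.
from collections import Counter

def opcode_ngrams(opcodes, n=2):
    """Count n-grams over an opcode sequence (order-preserving feature)."""
    return Counter(tuple(opcodes[i:i + n]) for i in range(len(opcodes) - n + 1))

def feature_vector(opcodes, scam_grams, benign_grams, n=2):
    """Signed counts: scam-oriented n-grams add, benign-oriented ones subtract."""
    vec = {}
    for gram, count in opcode_ngrams(opcodes, n).items():
        if gram in scam_grams:
            vec[gram] = +count
        elif gram in benign_grams:
            vec[gram] = -count
    return vec

# Example opcodes as disassembled from runtime bytecode (hypothetical values).
ops = ["PUSH1", "MSTORE", "CALLER", "SSTORE", "CALLER", "SSTORE"]
print(opcode_ngrams(ops))
```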
abstract
With the rapid growth of Internet of Things (IoT) devices,
a lot of IoT malware has been created,
and the security against IoT malware,
especially the family classification, has become a more important issue.
There exist three requirements that classification systems
must satisfy: detection of new families,
precise classification for sequential inputs,
and being independent of computer architectures.
However, existing methods do not satisfy them simultaneously.
In this paper, we propose a real-time IoT malware classification system
based on pending samples.
In order to detect new families and to classify sequential inputs precisely,
we introduce the concept of "pending samples".
This concept is useful when heterogeneous inputs that are difficult
to classify immediately come into the system.
This is because the system can postpone classifying them until
similar samples come.
Once similar samples are gathered, we regard these samples
as a new cluster, meaning that detecting new families is achieved.
Moreover, we use printable strings to satisfy the requirement
of being independent of architectures
because strings are common among different architectures.
Our results demonstrate the ability to detect new families:
new clusters are found after applying our algorithm to
the initial clusters.
Furthermore, our new clustering algorithm achieves a 0.130 higher
V-measure compared to the k-means algorithm,
a representative clustering algorithm.
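For reference, a minimal sketch of how such a comparison against k-means can be scored with V-measure; the data and labels are synthetic, and the paper's pending-sample algorithm itself is not reproduced:

```python
# V-measure compares predicted clusters against ground-truth family labels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import v_measure_score

rng = np.random.default_rng(0)
# Three synthetic "families" of samples in an 8-dimensional feature space.
X = np.vstack([rng.normal(loc=c, size=(50, 8)) for c in (0.0, 3.0, 6.0)])
y_true = np.repeat([0, 1, 2], 50)            # malware family labels

y_pred = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("V-measure:", v_measure_score(y_true, y_pred))
```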
abstract
With the rapid growth of blockchain, smart contracts,
which are computer programs executed on blockchain systems,
have played an important role, especially in the trading of cryptocurrency.
However, smart contracts are exploited to commit crimes or attacks
because they often hold a large amount of cryptocurrency.
Thus, enhancing the security of smart contracts is an urgent need.
There exist three types of crimes regarding smart contracts, namely,
attacks using vulnerabilities,
trades between criminals, and fraud.
Some researchers reported that many smart contracts have vulnerabilities,
and attackers exploit them to steal cryptocurrency or attack the system itself,
e.g., via DoS attacks.
Another type of crime is to trade criminal information for rewards
between criminals using smart contracts.
Especially in recent years, fraud acts including phishing have become
a big problem on blockchain, and smart contracts are utilized to support them.
These crimes have occurred due to the presence of Malicious Smart Contracts
(MSCs).
Thus, systems that detect MSCs are needed to prevent these crimes.
Although an MSC is a smart contract that exhibits malicious activity,
no clear definition exists.
As a result, the word "malicious" is used in different ways among researchers.
In this situation, it is difficult to detect all MSCs.
This is because different types of MSCs have different malicious activities,
meaning that detection systems corresponding to each type of MSC are needed.
Therefore, a classification of MSCs is required.
Some researchers classify MSCs into
two types: Vulnerable Smart Contracts (VSCs) related to vulnerabilities
and Criminal Smart Contracts (CSCs) related to trades between criminals.
In this classification model, however,
there does not exist a type of MSC corresponding to fraud activities.
To overcome this problem,
we propose a new standpoint that MSCs should be classified into VSCs,
CSCs, and Fraudulent Smart Contracts (FSCs),
which support frauds. By introducing this standpoint,
detecting all MSCs can be realized by constructing detection systems
for each type of MSC simultaneously.
While there exist many works on detecting VSCs, and a few works on CSCs
have also been proposed, the field of FSC detection has not yet developed.
Some researchers have proposed detection systems for malicious accounts.
In this field, "malicious" means fraud activities.
Thus, we consider that these kinds of works are similar to the detection
of FSCs and can be applied to it.
These studies mainly use machine learning to detect malicious accounts.
However, their models only consider graph-based features constructed
from external transaction and address data,
and do not focus on other features.
Therefore, as future work,
considering internal transactions and smart contract code-based features
such as opcodes is worth pursuing.
2023年3月卒業/退職
宮前 剛
Research Topics
Publications
abstract
Unlinkability is a crucial property of cryptocurrencies that protects users from deanonymization attacks.
However, currently, even anonymous cryptocurrencies do not necessarily attain unlinkability under specific
conditions. For example, Mimblewimble, which is considered to attain coin unlinkability using its
transaction kernel offset technique, is vulnerable under the assumption that privacy adversaries can
send their coins to or receive coins from the challengers. This paper first illustrates the privacy
issue in Mimblewimble that could allow two colluding adversaries to merge a person's two independent
chunks of personally identifiable information (PII) into a single PII. To analyze the privacy issue,
we formulate unlinkability between two sets of objects and a privacy adversary model in cryptocurrencies
called the counterparty adversary model. On these theoretical bases, we define an abstract model of
blockchain-based cryptocurrency transaction protocols called the coin transfer system, and unlinkability
over it called coin transfer unlinkability (CT-unlinkability). Furthermore, we introduce zero-knowledgeness
for the coin transfer systems to propose a method to easily prove the CT-unlinkability of cryptocurrency
transaction protocols. Finally, we prove that Zerocash is CT-unlinkable by using our proving method to demonstrate its effectiveness.
2022年9月卒業/退職
Kittiphop Phalakarn
Research Topics
Publications
abstract
In oblivious finite automata evaluation, one party holds a private automaton,
and the other party holds a private string of characters. The objective is
to let the parties know whether the string is accepted by the automaton or not,
while keeping their inputs secret. The applications include DNA searching,
pattern matching, and more. Most of the previous works are based on asymmetric
cryptographic primitives, such as homomorphic encryption and oblivious transfer.
These primitives are significantly slower than symmetric ones. Moreover,
some protocols also require several rounds of interaction. As our main contribution,
we propose an oblivious finite automata evaluation protocol via conditional disclosure
of secrets (CDS), using one (potentially malicious) outsourcing server.
This results in a constant-round protocol, and no heavy asymmetric-key primitives
are needed. Our protocol is based on a building block called "an oblivious CDS scheme
for deterministic finite automata" which we also propose in this paper. In addition,
we propose a standard CDS scheme for deterministic finite automata, which is of independent interest.
abstract
Secret sharing is a cryptographic primitive that divides a
secret into several shares, and allows only some combinations of shares
to recover the secret. As it can also be used in secure multi-party computation
protocol with outsourcing servers, several variations of secret sharing
are devised for this purpose. Most of the existing protocols require
the number of computing servers to be determined in advance. However,
in some situations we may want the system to be "evolving". We may
want to increase the number of servers and strengthen the security guarantee
later in order to improve availability and security of the system.
Although evolving secret sharing schemes are available, they do not support
computing on shares. On the other hand, "homomorphic" secret
sharing allows computing on shares with small communication, but they
are not evolving. As the contribution of our work, we give the definition of
"evolving homomorphic" secret sharing supporting both properties. We
propose two schemes, one with hierarchical access structure supporting
multiplication, and the other with partially hierarchical access structure
supporting computation of low degree polynomials. Compared to the
work with similar functionality by Choudhuri et al. (IACR ePrint 2020),
our schemes have smaller communication costs.
abstract
This paper proposes t-secure homomorphic secret sharing
schemes for low degree polynomials. Homomorphic secret sharing is a
cryptographic technique to outsource the computation to a set of servers
while restricting some subsets of servers from learning the secret inputs.
Prior to our work, at Asiacrypt 2018, Lai, Malavolta, and Schroder proposed
a 1-secure scheme for computing polynomial functions. They also
alluded to t-secure schemes without giving explicit constructions; constructing
such schemes would require solving set cover problems, which
are generally NP-hard. Moreover, the resulting implicit schemes would
require a large number of servers. In this paper, we provide a constructive
solution for threshold-t structures by combining homomorphic encryption
with the classic secret sharing scheme for general access structure
by Ito, Saito, and Nishizeki. Our scheme also quantitatively improves the
number of required servers from O(t^2) to O(t), compared to the implicit
scheme of Lai et al. We also suggest several ideas for future research
directions.
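As background for computing on shares, here is a toy sketch of the textbook fact underlying such schemes: share-wise multiplication of degree-t Shamir sharings yields a degree-2t sharing of the product, so 2t+1 servers suffice for one multiplication. This is a generic illustration, not the proposed scheme:

```python
# Shamir sharing over a prime field, with share-wise multiplication.
import random

P = 2**61 - 1  # prime modulus

def share(secret, t, n):
    """Split `secret` into n shares with threshold t (degree-t polynomial)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

t, n = 1, 3                      # degree-2t product needs 2t+1 = 3 shares
a = share(6, t, n)
b = share(7, t, n)
prod = [(x, ya * yb % P) for (x, ya), (_, yb) in zip(a, b)]
print(reconstruct(prod))         # 42
```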
Graduated/Left in March 2022
林田 淳一郎
Research Topics
Publications
abstract
Private information retrieval (PIR) allows a client to retrieve data from a database
without the database server learning what data are being retrieved.
Although many PIR schemes have been proposed in the literature,
almost all of these focus on retrieval of a single database element,
and do not consider more flexible retrieval queries such as basic range queries.
Furthermore, while practically-oriented database schemes aiming at providing flexible
and privacy-preserving queries have been proposed, to the best of our knowledge,
no formal treatment of range queries has been considered for these.
In this paper, we firstly highlight that a simple extension of the standard PIR security notion
to range queries is insufficient in many usage scenarios,
and propose a stronger security notion aimed at addressing this.
We then show a simple generic construction of a PIR scheme meeting our stronger security notion,
and propose a more efficient direct construction based on function secret sharing; while the former
has a round complexity logarithmic in the size of the database,
the round complexity of the latter is constant.
After that, we report on the practical performance of our direct construction.
Finally, we extend the results to the case of multi-dimensional databases
and show the construction of PIR scheme supporting multi-dimensional range queries.
The communication round complexity of our scheme is O(k log n) in the worst case,
where n is the size of the database and k is the number of elements retrieved by the query.
abstract
Private information retrieval (PIR) allows a client to retrieve data from
a database without the database server learning what data is being retrieved.
Most of the existing PIR schemes consider searching simple one-dimensional
databases and the supported query types are often limited to index queries
only, which retrieve a single element from the databases. However, most
real-world applications require more complex databases and query types.
In this paper, we build upon the notion of query indistinguishability by
Hayata et al. (ESORICS 2020), and formalize query indistinguishability for
multi-dimensional range queries. We then give a construction of a secure
multi-server scheme based on function secret sharing. This is the first
instantiation of a PIR scheme supporting multi-dimensional range queries
while being capable of hiding the type of query being made and, in the case
of multi-dimensional range queries, the number of elements retrieved in each
query, when considering a stream of queries.
abstract
Replayable chosen ciphertext (RCCA) security was introduced by Canetti,
Krawczyk, and Nielsen (CRYPTO'03) in order to handle an encryption scheme
that is "non-malleable except tampering which preserves the plaintext."
RCCA security is a relaxation of CCA security and a useful security notion
for many practical applications such as authentication and key exchange.
Canetti et al. defined non-malleability against RCCA (NM-RCCA),
indistinguishability against RCCA (IND-RCCA), and universal composability
against RCCA (UC-RCCA). Moreover, they proved that these three security
notions are equivalent when considering a PKE scheme whose plaintext space
is super-polynomially large. Among these three security notions, NM-RCCA
seems to play the central role since RCCA security was introduced in order
to capture "non-malleability except tampering which preserves the plaintext."
However, their definition of NM-RCCA is not a natural extension of that of
original non-malleability, and it is not clear whether their NM-RCCA captures
the requirement of original non-malleability. In this paper, we propose
definitions of indistinguishability-based and simulation-based non-malleability
against RCCA by extending definitions of original non-malleability. We then
prove that these two notions of non-malleability and IND-RCCA are equivalent
regardless of the size of the plaintext space of PKE schemes.
abstract
Private information retrieval (PIR) allows a client to retrieve data from a
database without the database server learning what data is being retrieved.
Although many PIR schemes have been proposed in the literature, almost all of
these focus on retrieval of a single database element, and do not consider more
flexible retrieval queries such as basic range queries. Furthermore, while
practically-oriented database schemes aiming at providing flexible and
privacy-preserving queries have been proposed, to the best of our knowledge,
no formal treatment of range queries has been considered for these.
In this paper, we firstly highlight that a simple extension of the standard PIR
security notion to range queries is insufficient in many usage scenarios, and
propose a stronger security notion aimed at addressing this.
We then show a simple generic construction of a PIR scheme meeting our stronger
security notion, and propose a more efficient direct construction based on
function secret sharing - while the former has a round complexity logarithmic
in the size of the database, the round complexity of the latter is constant.
Finally, we report on the practical performance of our direct construction.
abstract
Private information retrieval (PIR) allows a client to retrieve data from a database
without the database server learning what data is being retrieved.
Although many PIR schemes have been proposed in the literature, almost all of
these focus on retrieval of a single database element, and do not consider more flexible
retrieval queries such as basic range queries.
In addition to this, to the best of our knowledge, all PIR schemes that do support
range queries are not formally proven secure.
In this paper, we formalize a security model for PIR schemes that support range
queries and construct a secure multi-server scheme based on function secret sharing.
abstract
Public-key encryption with keyword search (PEKS) is a cryptographic primitive that allows us to search for particular keywords over ciphertexts without recovering plaintexts.
By using PEKS in cloud services, users can outsource their data in encrypted form without sacrificing search functionality.
Concerning PEKS that can specify logical disjunctions and logical conjunctions as a search condition, it is known that such PEKS can be (generically) constructed from anonymous attribute-based encryption (ABE).
However, it is not clear whether it is possible to construct this type of PEKS without using ABE, which may require large computational/communication costs and strong mathematical assumptions.
In this paper, we show that ABE is crucial for constructing PEKS with the above functionality.
More specifically, we give a generic construction of anonymous key-policy ABE from PEKS whose search condition is specified by logical disjunctions and logical conjunctions.
Our result implies that such PEKS always requires large computational/communication costs and strong mathematical assumptions corresponding to those of ABE.
abstract
Replayable chosen ciphertext (RCCA) security was introduced by Canetti, Krawczyk,
and Nielsen (CRYPTO 03) in order to handle an encryption scheme that is "non-malleable
except tampering which preserves the plaintext". RCCA security is a relaxation of
CCA security and a useful security notion for many practical applications such as
authentication and key exchange. Canetti et al. defined non-malleability against
RCCA (NM-RCCA), indistinguishability against RCCA (IND-RCCA), and universal
composability against RCCA (UC-RCCA). Moreover, they proved that these three security
notions are equivalent when considering a PKE scheme whose plaintext space is
super-polynomially large. Among these three security notions, NM-RCCA seems to play
the central role since RCCA security was introduced in order to capture
"non-malleability except tampering which preserves the plaintext." However, their
definition of NM-RCCA is not a natural extension of that of classical
non-malleability, and it is not clear whether their NM-RCCA captures the requirement
of classical non-malleability. In this paper, we propose definitions of
indistinguishability-based and simulation-based non-malleability against RCCA by
extending definitions of classical non-malleability. We then prove that these two
notions of non-malleability and IND-RCCA are equivalent regardless of the size of
the plaintext space of PKE schemes.
abstract
Public-key encryption with keyword search (PEKS) is a cryptographic primitive
that allows us to search encrypted data for entries including particular keywords
without decrypting them. PEKS is expected to be used for enhancing the security of
cloud storage. It is known that PEKS can be constructed from anonymous
identity-based encryption (IBE), anonymous attribute-based encryption (ABE), and so on.
It is believed that it is difficult to construct PEKS schemes that can specify a
flexible search condition, such as logical disjunctions and logical conjunctions,
from weaker cryptographic tools than ABE. However, this intuition has not been
rigorously justified. In this paper, we formally justify it by constructing key-policy
ABE from PEKS for monotone boolean formulas.
久野 朔
Research Topics
Publications
abstract
Penetration testing (PT), which assesses vulnerabilities by considering and executing all possible attacks,
is important in security engineering but very expensive due to the need for experienced professionals.
As a countermeasure, there are attempts to partially automate PT and improve its efficiency.
Their common feature is the use of existing PT tools (e.g., Metasploit) and machine learning (ML).
However, such approaches do not embed ML in the PT tools, and would not improve the tools themselves.
In this work, we use deep reinforcement learning to automate search and exploit executions for various vulnerabilities
existing in Web applications so that a wide variety of PT tools can be integrated in an effective manner with such embedded ML.
This poster will show two preliminary experiments in this direction.
Graduated/Left in September 2021
碓井 利宣
Research Topics
Publications
abstract
Data flow analysis is an essential technique for understanding the complicated behavior of malicious scripts.
For tracking the data flow in scripts, dynamic taint analysis has been widely adopted by existing studies.
However, existing taint analysis techniques have the problem that a taint analysis tool needs to be separately
designed and implemented for each script engine. Given the diversity of script languages that attackers can choose for their malicious
scripts, it is unrealistic to prepare taint analysis tools for the various script languages and engines.
In this paper, we propose an approach that automatically builds a taint analysis framework for scripts on top
of the framework designed for native binaries. We first conducted experiments to reveal that the semantic gaps
in data types between binaries and scripts disturb our approach by causing under-tainting. To address this problem,
our approach detects such gaps and bridges them by generating force propagation rules, which can eliminate the
under-tainting. We implemented a prototype system with our approach called STAGER T. We built taint analysis
frameworks for Python and VBScript with STAGER T and found that they could effectively analyze the data flow
of real-world malicious scripts.
abstract
Script languages are designed to be easy-to-use and require low learning costs.
These features provide attackers options to choose a script language for developing
their malicious scripts. This diversity of choice on the attacker side unexpectedly
imposes a significant cost on the preparation of analysis tools on the defense side.
That is, we have to prepare for multiple script languages to analyze malicious scripts
written in them. We call this unbalanced cost for script languages the asymmetry problem.
To solve this problem, we propose a method for automatically detecting the hook and
tap points in a script engine binary that are essential for building a script Application
Programming Interface (API) tracer. Our method allows us to reduce the cost of reverse
engineering of a script engine binary, which is the largest portion of the development
of a script API tracer, and build a script API tracer for a script language with minimum
manual intervention. This advantage results in solving the asymmetry problem.
The experimental results showed that our method generated the script API tracers for
the three script languages popular among attackers (Visual Basic for Applications (VBA),
Microsoft Visual Basic Scripting Edition (VBScript), and PowerShell). The results also
demonstrated that these script API tracers successfully analyzed real-world malicious scripts.
abstract
Return-oriented programming (ROP) has been crucial for attackers to evade
the security mechanisms of recent operating systems. Although existing ROP
detection approaches mainly focus on host-based intrusion detection systems
(HIDSes), network-based intrusion detection systems (NIDSes) are also
desired to protect various hosts including IoT devices on the network.
However, existing approaches are not enough for network-level protection
due to two problems: (1) dynamic approaches take seconds or minutes
on average for inspection, while applying to NIDSes
requires millisecond-order inspection to achieve near real-time detection;
(2) static approaches generate false positives because they use heuristic
patterns, while applying to NIDSes requires false positives to be minimized to
suppress false alarms. In this paper, we propose a method for statically
detecting ROP chains in malicious data by learning the target libraries
(i.e., the libraries that are used for ROP gadgets). Our method accelerates
its inspection by exhaustively collecting feasible ROP gadgets in the target
libraries and learning them in a step separated from inspection. In addition,
we reduce the false positives inevitable in existing static inspection by
statically verifying whether a suspicious byte sequence can link properly
when executed as a ROP chain. Experimental results showed that our
method achieves millisecond-order ROP chain detection with high precision.
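A highly simplified sketch of the static inspection idea follows; the gadget addresses and hit threshold are made up, and the chain-linkability verification from the paper is omitted:

```python
# Count how many aligned machine words in a suspicious byte sequence fall
# into a set of gadget addresses precollected from the target libraries.
import struct

GADGETS = {0x7F3A10C4B1D2, 0x7F3A10C4B2E8, 0x7F3A10C4C010}  # precollected

def looks_like_rop(data: bytes, threshold: int = 2) -> bool:
    hits = 0
    for off in range(0, len(data) - 7, 8):          # 64-bit words
        (word,) = struct.unpack_from("<Q", data, off)
        if word in GADGETS:
            hits += 1
    return hits >= threshold

payload = struct.pack("<QQ", 0x7F3A10C4B1D2, 0x7F3A10C4C010)
print(looks_like_rop(payload))   # True
```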
abstract
Malicious scripts have been crucial attack vectors in recent attacks such as
malware spam (malspam) and fileless malware. Since malicious scripts are generally
obfuscated, statically analyzing them is difficult due to reflections. Therefore,
dynamic analysis, which is not affected by obfuscation, is used for malicious
script analysis. However, despite its wide adoption, some problems remain unsolved.
Current designs of script analysis tools do not fulfill the following three
requirements important for malicious script analysis. (1) Universally applicable
to various script languages, (2) capable of outputting analysis logs that can
precisely recover the behavior of malicious scripts, and (3) applicable to
proprietary script engines. In this paper, we propose a method for automatically
generating a script API tracer by analyzing the target script engine binaries. The
method mines the knowledge of script engine internals that is required to add
behavior analysis capability. This enables the addition of analysis functionalities
to arbitrary script engines and the generation of script API tracers that fulfill
the above requirements. Experimental results showed that we can apply this method
to building malicious script analysis tools.
Graduated/Left in March 2021
田村 研輔
Research Topics
Publications
abstract
Since cyber attacks such as cyberterrorism against Industrial
Control Systems (ICSs) and cyber espionage against companies managing
them have increased, the techniques to detect anomalies in early
stages are required. To achieve this purpose, several studies have developed
anomaly detection methods for ICSs. In particular, some techniques
using packet flow regularity in industrial control networks have achieved
high-accuracy detection of attacks disrupting the regularity, i.e. normal
behavior, of ICSs. However, these methods cannot identify scanning attacks
employed in cyber espionage because the probing packets assimilate
into the large number of normal ones. For example, the malware called Havex is
customized to clandestinely acquire information from targeted ICSs using
general request packets. The techniques to detect such scanning attacks
using widespread packets await further investigation. Therefore, the goal of
this study was to examine high-performance methods to identify anomalies
even if elaborate packets designed to avoid alert systems were employed for attacks
against industrial control networks. In this paper, a novel detection model
for anomalous packets concealed behind normal traffic in industrial control
networks is proposed. To design this sophisticated detection
method, we took particular note of packet flow regularity and employed the
Markov-chain model to detect anomalies. Moreover, we regarded not only
original packets but also packets similar to them as normal ones to reduce false
alerts, because it was indicated that an anomaly detection model using the
Markov chain suffers from ample false positives caused by a number
of normal, irregular packets, namely noise. To calculate the similarity between
packets based on the packet flow regularity, a vector representation
tool called word2vec was employed. Whilst word2vec is utilized for the
calculation of word similarity in natural language processing tasks, we applied
the technique to packets in ICSs to calculate packet similarity. As a
result, the Markov-chain with word2vec model identified scanning packets
assimilating into normal packets with higher performance than the conventional
Markov-chain model. In conclusion, employing both packet flow
regularity and packet similarity in industrial control networks contributes
to improving the performance of anomaly detection in ICSs.
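A condensed sketch of the idea follows; the packet tokens, corpus, and similarity threshold are toy assumptions, not the paper's dataset, features, or parameters:

```python
# Learn packet similarity with word2vec over packet-token sequences, then
# flag a transition as anomalous only if neither the observed packet nor any
# sufficiently similar packet has been seen after the previous one.
from collections import defaultdict
from gensim.models import Word2Vec

sequences = [["read_coils", "read_regs", "write_reg"],
             ["read_coils", "read_regs", "read_regs"]] * 50

model = Word2Vec(sequences, vector_size=16, window=2, min_count=1, seed=0)

transitions = defaultdict(set)            # first-order Markov structure
for seq in sequences:
    for prev, cur in zip(seq, seq[1:]):
        transitions[prev].add(cur)

def is_anomalous(prev, cur, sim_threshold=0.9):
    if cur in transitions[prev]:          # regular transition
        return False
    # Tolerate packets similar to ones already seen after `prev`.
    return all(model.wv.similarity(cur, known) < sim_threshold
               for known in transitions[prev])

print(is_anomalous("read_regs", "write_reg"))   # False: seen after read_regs
```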
角田 大輔
Research Topics
Publications
宮里 俊太郎
Research Topics
Publications
Graduated/Left in March 2020
長嶺 隆寛
Research Topics
Publications
abstract
A technique called transaction replacement using timelocks is used in payment
channels for Bitcoin. When closing a payment channel uncooperatively, the latest
time-locked transaction is broadcast to the Bitcoin network. However, if the
Bitcoin network is congested, the latest transaction with a lower fee might not be added
to a block preferentially, and hence the transaction replacement might fail.
This problem can be solved by adding a fee to the latest one (e.g. using
SIGHASH_ANYONECANPAY). However, it is difficult to divide the additional fee
cooperatively because this scenario happens in the uncooperative case.
We propose a protocol that allows the transaction fee added by a single party
to be divided equally between two parties. In this protocol, each party deposits
funds for the additional fee to the payment channel in advance. A party can add
the transaction fee alone by creating a child transaction referring to the funds
(Child Pays for Parent). Then, the remainder of the funds is returned to each
party via two outputs of the child transaction. Regarding these two outputs, one
party decides the values of the outputs, and the other has the right to choose either
output. As a result, the party who decides the values is motivated to make the two values equal.
黄 珂
Research Topics
Publications
abstract
The top-k algorithm searches for the k smallest (largest) numbers in a given dataset.
In some situations, the dataset is distributed among two or more parties to keep the data private.
In previous research, privacy-preserving algorithms were considered in low-latency networks, where the computation cost of the algorithms is more important than the communication cost of data transmission between different parties.
In high-latency networks, both time complexity and round complexity should be taken into consideration.
In this paper, we focus on privacy-preserving algorithms in high-latency networks such as wireless networks.
We propose an approximate method for a privacy-preserving top-k algorithm based on secure multi-party computation.
This method requires fewer communication rounds than previous methods and performs better in high-latency networks.
abstract
Secure multi-party computation (MPC) allows a set of parties to jointly
compute a function, while keeping their inputs private. MPC has many
applications, and we focus on privacy-preserving nearest neighbor search
(NNS) in this paper. The purpose of the NNS is to find the closest vector
to a query from a given database, and NNS arises in many fields of
applications such as computer vision. Recently, some approximation methods
of NNS have been proposed for speeding up the search. In this paper, we
consider the combination of approximate NNS based on "short codes"
(searching with quantization) and MPC. We implement a short code-based
privacy-preserving approximate NNS on secret sharing-based secure two-party
computation and report some experimental results. These results help us to
explore more efficient privacy-preserving approximate NNS in the future.
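For orientation, here is a plain (non-private) sketch of short-code approximate NNS with Hamming distance; in the paper this search is evaluated under secret sharing-based two-party computation, and the sign-based binarization below is an illustrative assumption:

```python
# Vectors are quantized to binary short codes; search uses Hamming distance.
import numpy as np

def to_code(v):
    """Sign-based binary short code packed into an int."""
    bits = (v > 0).astype(int)
    return int("".join(map(str, bits)), 2)

def nearest(query, db):
    q = to_code(query)
    # Hamming distance = popcount of the XOR between codes.
    return min(range(len(db)),
               key=lambda i: bin(q ^ to_code(db[i])).count("1"))

db = [np.array([1.0, -2.0, 0.5, 3.0]), np.array([-1.0, 2.0, -0.5, -3.0])]
print(nearest(np.array([0.9, -1.5, 0.2, 2.5]), db))   # 0
```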
Graduated/Left in March 2019
石坂 理人
Research Topics
Publications
abstract
Leakage-resilience guarantees that even if some information about the secret
key is partially leaked, the security is maintained. Several security models
considering leakage-resilience have been proposed. Among them, the auxiliary
leakage model proposed by Dodis et al. in STOC'09 is especially important,
since it can deal with leakage caused by a function that information-theoretically
reveals the secret key, e.g., a one-way permutation.
Contribution of this work is two-fold. Firstly, we propose an identity based
encryption (IBE) scheme and prove that it is fully secure and resilient to the
auxiliary leakage under the decisional linear assumption in the standard model.
Secondly, although the IBE scheme proposed by Yuen et al. in Eurocrypt'12 has
been considered to be the only IBE scheme resilient to auxiliary leakage, we
show that the security proof for that IBE scheme is defective. Consequently,
our IBE scheme is the only known IBE scheme resilient to auxiliary leakage.
abstract
A signature scheme is said to be weakly unforgeable, if it is hard to forge a
signature on a message not signed before. A signature scheme is said to be
strongly unforgeable, if it is hard to forge a signature on any message. In
some applications, the weak unforgeability is not enough and the strong
unforgeability is required, e.g., the Canetti, Halevi and Katz transformation.
Leakage-resilience is a property which guarantees that even if secret information
such as the secret-key is partially leaked, the security is maintained. Some
security models with leakage-resilience have been proposed. The auxiliary (input)
leakage model, or hard-to-invert leakage model, proposed by Dodis et al. in
STOC'09 is an especially meaningful one, since it considers the leakage caused by a function
which information-theoretically reveals the secret-key, e.g., a one-way permutation.
In this work, we propose a generic construction of a signature
scheme strongly unforgeable and resilient to polynomially hard-to-invert leakage
which can be instantiated under standard assumptions such as the decisional
linear assumption. We emphasize that our signature scheme is not only the first
one resilient to polynomially hard-to-invert leakage under standard assumptions,
but also the first one which is strongly unforgeable and has hard-to-invert leakage-resilience.
abstract
Ciphertext-policy attribute-based signcryption (CP-ABSC) is a cryptographic
primitive which performs simultaneously both the functionalities of
ciphertext-policy attribute-based encryption and signature-policy attribute-based
signature. CP-ABSC guarantees both message confidentiality and authenticity and
is considered to be a useful tool for fine-grained data access control in
attribute-based environments such as a cloud service. In this paper, we provide
a generic construction of CP-ABSC which achieves ciphertext indistinguishability
under adaptively chosen ciphertext attacks in the adaptive predicate model
(AP-IND-CCA), strong existential unforgeability of signcryptext under adaptively
chosen message attacks in the adaptive predicate model (AP-sEUF-CMA), and perfect
privacy. Our generic construction uses, as building blocks, a ciphertext-policy
attribute-based key encapsulation mechanism, a signature-policy attribute-based
signature, and a data encapsulation mechanism.
Graduated/Left in August 2018
Graduated/Left in March 2018
先崎 佑弥
Research Topics
Publications
abstract
Research on adversarial examples for machine learning has received much
attention in recent years. Most previous approaches are white-box attacks;
this means the attacker needs to obtain beforehand the internal parameters of a
target classifier to generate adversarial examples for it. This condition is
hard to satisfy in practice. There is also research on black-box attacks, in
which the attacker can only obtain partial information about target classifiers;
however, it seems we can prevent these attacks, since they need to issue many
suspicious queries to the target classifier. In this paper, we show that a naive
defense strategy based on surveillance of the number of queries will not suffice. More
concretely, we propose to generate not pixel-wise but block-wise adversarial
perturbations to reduce the number of queries. Our experiments show that such
rough perturbations can confuse the target classifier. We succeed in reducing
the number of queries needed to generate adversarial examples in most cases. Our simple
method is an untargeted attack and may have low success rates compared to previous
results of other black-box attacks, but needs on average fewer queries.
Surprisingly, the minimum number of queries (one and three in MNIST and CIFAR-10
dataset, respectively) is enough to generate adversarial examples in some cases.
Moreover, based on these results, we propose a detailed classification for
black-box attackers and discuss countermeasures against the above attacks.
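A minimal sketch of the block-wise, query-efficient idea follows; the classify function is a stand-in for the black-box target, and the block size and perturbation magnitude are illustrative:

```python
# Instead of optimizing per-pixel noise, perturb whole blocks and spend one
# query on the target model per candidate image.
import numpy as np

def blockwise_attack(image, classify, true_label, block=8, eps=0.3,
                     max_queries=100):
    """`image` is assumed to be a float array with values in [0, 1]."""
    rng = np.random.default_rng(0)
    h, w = image.shape[:2]
    for q in range(max_queries):
        adv = image.copy()
        y0 = rng.integers(0, h - block + 1)
        x0 = rng.integers(0, w - block + 1)
        sign = rng.choice([-eps, eps])
        adv[y0:y0 + block, x0:x0 + block] += sign   # one block-wise change
        adv = np.clip(adv, 0.0, 1.0)
        if classify(adv) != true_label:             # one query per candidate
            return adv, q + 1                       # adversarial example found
    return None, max_queries
```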
今田 丈雅
Research Topics
Publications
Graduated/Left in March 2017
竹之内 玲
Research Topics
Publications
林 昌吾
Research Topics
Publications
Graduated/Left in March 2016
大畑 幸矢
Research Topics
Publications
abstract
Many online services adopt a password-based user authentication system because
of its usability. However, several problems have been pointed out with it, and
one of the well-known problems is that a user forgets his/her password and
cannot log in to the services. To solve this problem, most online services support
a mechanism with which a user can recover a password. In this poster, we discuss
rigorous security for a password recovery protocol.
abstract
The concept of threshold public key encryption (TPKE) with the special
property called key re-splittability (re-splittable TPKE, for short) was
introduced by Hanaoka et al. (CT-RSA 2012), and used as one of the building
blocks for constructing their proxy re-encryption scheme.
In a re-splittable TPKE scheme, a secret key can be split into a set of
secret key shares not only once, but also multiple times, and the security
of the TPKE scheme is guaranteed as long as the number of corrupted secret
key shares under the same splitting is smaller than the threshold. In this
paper, we show several new constructions of re-splittable TPKE scheme by
extending the previous (ordinary) TPKE schemes. Our results suggest that
key re-splittability is a very natural property for TPKE.
abstract
The concept of threshold public key encryption (TPKE) with the special property
called key re-splittability (re-splittable TPKE, for short) was introduced by
Hanaoka et al. (CT-RSA 2012), and used as one of the building blocks for
constructing their proxy re-encryption scheme. In a re-splittable TPKE scheme,
a secret key can be split into a set of secret key shares not only once, but
also multiple times, and the security of the TPKE scheme is guaranteed as long
as the number of corrupted secret key shares under the same splitting is smaller
than the threshold. In this paper, we show several new constructions of
re-splittable TPKE scheme by extending the previous (ordinary) TPKE schemes.
中田 謙二郎
Research Topics
Publications
abstract
A model of an encryption approach is analyzed from an information-theoretic
point of view. In the model, an attacker faces the problem of observing messages
through a concatenation of a binary symmetric channel and a channel with randomly
inserted bits. The paper points out a number of security-related implications
resulting from employing an insertion channel. It is shown that deliberate and
secret-key-controlled insertions of random bits into the basic ciphertext provide
a security enhancement of the resulting encryption scheme.
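A toy model of the channel concatenation described above; the PRNG seeding stands in for the secret-key control of insertion positions, and all parameters are illustrative:

```python
# Flip each ciphertext bit with probability p (binary symmetric channel),
# then insert random bits at positions derived from a secret key.
import random

def bsc(bits, p, rng):
    """Binary symmetric channel: flip each bit with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def insert_random_bits(bits, key, rate=0.25):
    prng = random.Random(key)              # secret-key-controlled positions
    out = []
    for b in bits:
        if prng.random() < rate:
            out.append(prng.randrange(2))  # inserted random bit
        out.append(b)
    return out

rng = random.Random(1)
ct = [1, 0, 1, 1, 0, 0, 1, 0]              # basic ciphertext bits
print(insert_random_bits(bsc(ct, 0.1, rng), key=42))
```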
篠田 詩織
Research Topics
Publications
abstract
A loyalty program (LP) is a popular marketing activity of enterprises.
As a result of firms' efforts to increase customers' loyalty, point exchange
and redemption services are now available worldwide. These services attract not
only customers but also attackers. In pioneering research, which first focused
on this LP security problem, an empirical analysis based on Japanese data was
conducted to examine the effects of LP-point liquidity on damages caused by security
incidents. We revisit the empirical models in which the choice of variables is
inspired by the Gordon-Loeb formulation of security investment: damage,
investment, vulnerability, and threat. The liquidity of LP points corresponds
to the threat in the formulation and plays an important role in the empirical
study because it particularly captures the feature of LP networks. However,
the actual proxy used in the former study is artificial. In this paper, we
reconsider the liquidity definition based on a further observation of LP security
incidents. By using newly defined proxies corresponding to the threat as well as
other refined proxies, we test hypotheses to derive more implications that help
LP operators manage partnerships; the implications are consistent with recent
changes in the LP network. Thus the impacts of security investment
models extend to a wider range of empirical studies.
Graduated/Left in March 2015
Bongkot Jenjarrussakul
Research Topics
Publications
abstract
Virtual currency is an important medium of exchange in cyber space,
and loyalty program (LP) can be considered as a type of virtual currency.
In the U.S., according to a report in COLLOQUY talk [5], the total number
of LP memberships is more than 2.6 billion in 2012 after 26.7% growth from 2010.
In addition, the number of LPs is also reported to show a clear increasing trend.
LPs are very popular in Japan, too; there are more than 200 LPs in Japan and
the use of them is widespread among Japanese people. People collect their LP
points and redeem them to obtain goods and enjoy services. In addition, many LP
points can be converted into points of different LPs. LPs in Japan are thus
increasing redemption options, and getting more and more popular and liquid
virtual currencies. This situation can motivate malicious people to abuse such
increasingly useful LPs for crimes, and in fact, there are some reports of such
crimes. However, the security issues of LPs have not been well studied.
In this paper, we investigate Japanese LPs with focuses on their liquidity,
their operating firms' security efforts, and the LP systems' actual security levels.
abstract
Although there are some studies on inter-sectoral information security
interdependency, the lack of regional interdependency analysis is one of their
limitations. In this empirical study, we used an inter-regional input-output table in
order to analyze both sectoral and regional interdependencies under the influence of
information technology and the information security of Japanese firms. Our analysis
showed that the economic scale of a region has a great influence on the characteristics
of the interdependency. Furthermore, we found that the demand-side sectors can
be classified into five classes based on the characteristics. Among them, the groups
with high self-dependency get more benefits from simultaneous understanding of
regional characteristics; for the sectors in these classes, investment advice obtained
from sectoral characteristics only is very limited, whereas they can obtain much
more from regional characteristics. Since these classes include a majority of the
sectors, we can recognize the importance of regional interdependency analysis. In
the above basic study, what we see is the situation before the Great East Japan
Earthquake on March 11, 2011.
As an extended study, we estimated the impact of the earthquake on the
interdependency. Our main finding from the regional perspective is that the interdependency
characteristics of the most damaged region (Tohoku) and of the
economically largest region (Kanto) are impacted most significantly. This feature
is not changed by the limitation of damage through prior security investment.
Both in the basic study and in the extended study, we can see that considering
not only sectoral but also regional characteristics is an effective approach to the task of empirically deriving implications related to the interdependency. There are many
possibilities of more extended studies based on our methodology.
abstract
One of the concerns in the economics of information security is optimal investment
in information security. Gordon and Loeb introduced a general economic model that determines
the optimal amount to invest in order to protect a given set of information.
They focus on optimal investment regarding the reduction of vulnerability.
An extension by Matsuura shows the productivity space of information security by considering
productivity regarding threat and vulnerability reduction according to the class-II
security breach probability function in the Gordon-Loeb model. Here we address a remaining gap by considering
productivity regarding both threat and vulnerability reduction, with a focus on the class-I
security breach probability function stated in the Gordon-Loeb model. As a result, we found that
when the security breach probability function and the security threat probability function
take the class-I form of the Gordon-Loeb model, the optimal level of information security
investment equals zero up to a specific value of the vulnerability v, and then
increases at a decreasing rate.
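In the Gordon-Loeb notation, the class-I function and the first-order condition behind this conclusion read as follows (standard textbook forms; the paper's specific threat-probability coupling is not reproduced here):

```latex
% Class-I security breach probability function:
\[
  S^{I}(z, v) = \frac{v}{(\alpha z + 1)^{\beta}}, \qquad \alpha > 0,\ \beta \geq 1,
\]
% where z is the investment and v the vulnerability.  The optimal z*
% maximizes the expected net benefit with expected monetary loss \lambda:
\[
  \mathrm{ENBIS}(z) = \bigl(v - S^{I}(z, v)\bigr)\,\lambda - z,
  \qquad
  -\frac{\partial S^{I}}{\partial z}\Big|_{z = z^{*}} \cdot \lambda = 1.
\]
```

At z = 0 the marginal benefit is αβvλ, so z* = 0 whenever αβvλ ≤ 1, i.e., up to a threshold value of v; beyond that threshold, z* grows at a decreasing rate, matching the conclusion stated in the abstract.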
keywords
Information security investment, optimal investment model, threat reduction, vulnerability reduction
abstract
The Great East Japan Earthquake on March 11, 2011 had a vast impact on supply
chains in the Tohoku region. Although there are some reports regarding the impact from an economic
viewpoint as well as an information, communication, and telecommunication (ICT) viewpoint,
none of them shows a relation to information security (IS). Our methodology is
applied to simulate the possible outcome of the impact from the earthquake. We observe
impacts that relate to information technology (IT) and IS. In addition, the assumption that
investment in IS helps reduce the impact from the earthquake is also applied. With this concept,
Japanese official statistical economic data and official data regarding IT and IS are used.
The results show that the limited loss in IT-related capital stock due to the
earthquake likely affects some regions and industrial sectors, such as the sectors of other
manufacturing and services.
keywords
The Great East Japan Earthquake, Information Security, Regional and Sectoral Impact
abstract
The Great East Japan Earthquake on March 11, 2011 had a vast impact on the supply chain
in the Tohoku region. Although there are some reports on the impact from the economic viewpoint as well as from the
information, communication, and telecommunication (ICT) viewpoint, none of them considers the possible
impact on information security. Here we apply our methodology to simulate and predict the possible effects of the
earthquake's impact. Within this framework, Japanese official statistical
economic data as well as official data regarding information technology and information security are used.
The results show that even the limited loss in IT-related capital stock due to the Great East Japan
Earthquake is likely to affect some regions and industrial sectors, such as other manufacturing (6) and services (12).
abstract
Information security plays a significant role in information systems owing to their high adoption
rate in basic infrastructures. This widespread use of information technology brings a higher probability of
risks and attacks on information systems. Moreover, a growing number of firms and organizations are concerned
about expenditure on information security. Consequently, understanding the technologies alone is insufficient for
the appropriate adoption of information security; understanding other aspects, such as economics, is also
required. This paper introduces existing studies on the economics of information security and discusses some future
directions: existing analyses based on economic theories have successfully explained a number of problems
related to information security, and future steps would need more synthesis-oriented approaches as well as
empirical studies.
abstract
This paper broadens a measurement methodology for information security interdependency
from the industrial-sector perspective to the regional perspective, analyzing inter-regional and
inter-sectoral interdependency in specific industrial sectors and regions from the demand side.
A previous study of cross-sectoral information security interdependency showed that the industrial
sector is one of the factors that affect interdependency in
information security; nevertheless, regional interdependency analysis was one of its limitations. In our
implementation, we apply the methodology to statistical economic data on Japanese industrial sectors
separated into regions in order to show their information security interdependency as influenced by information
technology and the level of information security measures.
abstract
This paper broadens a measurement methodology for information security interdependency
from the industrial-sector perspective to the regional perspective, analyzing inter-regional and
inter-sectoral interdependency between specific industrial sectors and regions from the demand side.
A previous study of cross-sectoral information security interdependency demonstrated that the industrial
sector is one of the factors that affect interdependency in information security; nevertheless, regional
interdependency analysis was one of its limitations. In our implementation, we apply the methodology to
statistical economic data on Japanese industrial sectors separated into regions in order to show their
information security interdependency as influenced by information technology and the level of information
security measures.
馮 菲
Research Theme
Publications
abstract
Tor is the most popular anonymous communication tool in the world. Its anonymity,
however, has not been thoroughly evaluated. For example, it is possible for an
adversary to restrict access to the Tor network by blocking all the publicly listed
relays. In response, Tor utilizes bridges, which are unlisted relays, as alternative
entry points. However, the vulnerabilities of the current bridge mechanism have not
been thoroughly investigated yet. We first investigate the vulnerabilities of the
current bridge mechanism under different adversarial models. Then we compare the
current bridge mechanism with our two proposals and discuss their effects on the
security and performance of Tor.
碓井 利宣
Research Theme
Publications
abstract
Return-oriented programming (ROP) has been crucial for attackers to evade
the security mechanisms of recent operating systems. Although existing ROP
detection approaches mainly focus on host-based intrusion detection systems
(HIDSes), network-based intrusion detection systems (NIDSes) are also
desired to protect various hosts including IoT devices on the network.
However, existing approaches are insufficient for network-level protection
due to two problems: (1) dynamic approaches require seconds to minutes on
average for inspection, whereas millisecond-order inspection is required for
NIDSes to achieve near-real-time detection; (2) static approaches generate
false positives because they use heuristic patterns, and for NIDSes false
positives must be minimized to suppress false alarms. In this paper, we
propose a method for statically detecting ROP chains in malicious data by
learning the target libraries (i.e., the libraries that are used for ROP
gadgets). Our method accelerates its inspection by exhaustively collecting
feasible ROP gadgets in the target libraries and learning them in a step
separate from inspection. In addition, we reduce the false positives that are
inevitable in existing static inspection by statically verifying whether a
suspicious byte sequence can link properly when executed as a ROP chain.
Experimental results show that our method achieves millisecond-order ROP
chain detection with high precision.
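To make the two-phase idea above concrete, here is a toy sketch in Python (my own simplification, not the authors' implementation): offline, feasible gadget addresses are collected exhaustively from a target library; online, suspicious bytes are scanned for runs of words that all point at known gadgets, the signature of a ROP chain. The 16-byte gadget window, 64-bit little-endian words, and run threshold are illustrative assumptions.

```python
import struct

def build_gadget_set(library_bytes: bytes, base_addr: int) -> set:
    """Offline phase: record addresses of short snippets ending in 'ret' (0xC3)."""
    gadgets = set()
    for off, byte in enumerate(library_bytes):
        if byte == 0xC3:  # x86 'ret'
            # every short suffix ending at this 'ret' is a candidate gadget
            for start in range(max(0, off - 16), off + 1):
                gadgets.add(base_addr + start)
    return gadgets

def scan_for_chain(data: bytes, gadgets: set, min_links: int = 4) -> bool:
    """Online phase: flag >= min_links consecutive words that hit gadget addresses."""
    run = 0
    for i in range(0, len(data) - 7, 8):
        (value,) = struct.unpack_from("<Q", data, i)
        run = run + 1 if value in gadgets else 0
        if run >= min_links:
            return True
    return False
```

Because the gadget set is built once per library and lookups are constant-time, the online scan touches each byte only once, which is what makes millisecond-order inspection plausible.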
Graduated/left in March 2014
村上 隆夫
Research Theme
Publications
abstract
Biometric identification has recently attracted attention because of its convenience:
it requires neither a user ID nor a smart card. However, both the identification error
rate and the response time increase as the number of enrollees increases. In this paper,
we combine a score-level fusion scheme and a metric space indexing scheme to improve
the accuracy and response time in biometric identification, using only scores as
information sources. We first propose a score-level indexing and fusion framework
which can be constructed from the following three schemes: (I) a pseudo-score based
indexing scheme, (II) a multi-biometric search scheme, and (III) a score-level fusion
scheme which handles missing scores. A multi-biometric search scheme can be newly
obtained by applying a pseudo-score based indexing scheme to multi-biometric identification.
We then propose the NBS (Naive Bayes search) scheme as a multi-biometric search
scheme and discuss its optimality with respect to the retrieval error rate.
We evaluated our proposal using datasets of multiple fingerprints and of face scores
from multiple matchers. The results showed that our proposal significantly improved the
accuracy of the unimodal biometrics while reducing the average number of score
computations on both datasets.
abstract
It is known that different users have different degrees of accuracy in biometric authentication,
and claimants and enrollees who cause false accepts against many others are referred to as
wolves and lambs, respectively. The aim of this paper is to develop a fusion algorithm which
is secure against both kinds of users while minimizing the number of query samples a
genuine claimant has to input. To achieve our aim, we first introduce a taxonomy of wolves
and lambs, and propose a minimum log-likelihood-ratio-based sequential fusion scheme (MLR scheme).
We prove that this scheme keeps the wolf attack probability and the lamb accept probability,
i.e., the maxima of the claimant-specific and the enrollee-specific false accept
probabilities (FAPs), below a desired value if the log-likelihood ratios are perfectly estimated,
except in the case of adaptive spoofing wolves. We also prove that this scheme is optimal with
respect to the false reject probability (FRP), and asymptotically optimal with respect to the
average number of inputs (ANI) under some conditions. We further propose an input-order decision
scheme based on the Kullback-Leibler (KL) divergence, which maximizes the expectation of a genuine
log-likelihood ratio, to further reduce the ANI of the MLR scheme when the KL
divergence differs from one modality to another. The results of an experimental evaluation
using a virtual multimodal (one face and eight fingerprints) dataset showed the effectiveness
of our schemes.
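A minimal sketch of the sequential decision logic (my own simplification, not the paper's full MLR scheme with its wolf/lamb taxonomy): query samples are consumed one modality at a time, and the claim is decided as soon as the accumulated log-likelihood ratio crosses a threshold, which is what keeps the average number of inputs small. The densities and thresholds here are assumptions.

```python
import math

def sequential_fusion(scores, genuine_pdf, impostor_pdf,
                      accept_thr=4.6, reject_thr=-4.6):
    """scores: per-modality match scores, in the chosen input order."""
    llr = 0.0
    for s in scores:
        llr += math.log(genuine_pdf(s) / impostor_pdf(s))  # evidence update
        if llr >= accept_thr:    # strong evidence the claimant is genuine
            return "accept"
        if llr <= reject_thr:    # strong evidence of an impostor
            return "reject"
    return "reject"              # conservative decision if evidence stays weak
```

The KL-divergence-based input ordering described above fits naturally here: presenting the most discriminative modality first makes the threshold crossing, and hence the stopping, happen sooner on average.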
abstract
Some approximate indexing schemes have recently been proposed for metric spaces which sort the objects in the database according to pseudo-scores. It is known that (1) some of them provide a very good trade-off between response time and accuracy, and (2) probability-based pseudo-scores can provide an optimal trade-off in range queries if the probabilities are correctly estimated. Based on these facts, we propose a probabilistic enhancement scheme which can be applied to any pseudo-score based scheme. Our scheme computes probability-based pseudo-scores using the pseudo-scores obtained from a pseudo-score based scheme. In order to estimate the probability-based pseudo-scores, we use object-specific parameters in logistic regression and learn the parameters using MAP (maximum a posteriori) estimation and the empirical Bayes method. We also propose a technique which speeds up learning the parameters using pseudo-scores. We applied our scheme to two state-of-the-art schemes, the standard pivot-based scheme and the permutation-based scheme, and evaluated them using various kinds of datasets from the Metric Space Library. The results showed that our scheme outperformed the conventional schemes, with regard to both the number of distance computations and the CPU time, on all the datasets.
abstract
We aim at reducing the number of distance computations as much as possible in inexact
indexing schemes which sort the objects according to promise values. To achieve this aim, we propose
a new probability-based indexing scheme which can be applied to any inexact indexing scheme that uses
promise values. Our scheme (1) uses the promise values obtained from any inexact scheme to compute
new probability-based promise values. In order to estimate the new promise values, we (2) use
object-specific parameters in logistic regression and learn the parameters using MAP (maximum a posteriori)
estimation. We also propose a technique which (3) speeds up learning the parameters using the promise
values. We applied our scheme to the standard pivot-based scheme and the permutation-based scheme, and
evaluated them using various kinds of datasets from the Metric Space Library. The results showed that our
scheme improved on the conventional schemes in all cases.
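The core mapping in the two abstracts above can be illustrated in a few lines of Python (a toy sketch; the MAP/empirical-Bayes fitting of the per-object coefficients is elided, and all names are my own):

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def probabilistic_promise(raw_promises: dict, params: dict) -> list:
    """raw_promises: {object_id: promise value s}; params: {object_id: (a_o, b_o)}.
    Returns object ids sorted by their probability-based promise values."""
    scores = {}
    for o, s in raw_promises.items():
        a, b = params[o]                 # object-specific logistic coefficients
        scores[o] = sigmoid(a * s + b)   # probability-based promise value
    return sorted(scores, key=scores.get, reverse=True)
```

The point of the object-specific parameters is that the same raw promise value can mean different things for different database objects; the logistic recalibration lets the search visit objects in order of their estimated probability of answering the query.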
横手 健一
Research Theme
Publications
高木 哲平
Research Theme
Publications
Graduated/left in December 2013
Andreas Gutmann
Research Theme
Publications
Graduated/left in March 2012
市川 顕
Research Theme
Publications
Graduated/left in March 2011
松田 隆宏
Research Theme
Publications
abstract
A hierarchical key assignment scheme is a cryptographic mechanism for enforcing access control
in hierarchies. Its role is fundamentally important in some computer security applications,
but its provable security is hard to achieve in the case of dynamic schemes. Therefore, in
order to alleviate problems resulting from purely heuristic approaches, we need systematic
views of design and implementation from both technical and managerial
viewpoints. In this commentary, we aim to provide those views in the following manner.
The first is technical: we describe a progressive construction of
hierarchical key assignment schemes to make the design issues as systematic as possible.
The constructed schemes are basically drawn from the existing literature, with some refinements
for security reasons and/or to make the construction more instructive. The second is
managerial: based on security economics, we point out the importance of deterrents to
attacks in system implementations. Our discussion includes applications in which a large
hierarchy is required, such as secure outsourcing of data to the cloud.
abstract
In [BK05], Boneh and Katz introduced a primitive
called an encapsulation scheme,
which is a special kind of commitment scheme.
Using the encapsulation scheme,
they improved the generic transformation
by Canetti, Halevi, and Katz[CHK04]
which transforms any semantically secure
identity-based encryption (IBE) scheme into
a chosen-ciphertext secure public key encryption (PKE) scheme
(we call this the BK transformation).
The ciphertext size of the transformed PKE scheme directly
depends on the parameter sizes of the underlying encapsulation scheme.
In this paper,
by designing a size-efficient encapsulation scheme,
we further improve the BK transformation.
With our proposed encapsulation scheme,
the ciphertext overhead of a transformed PKE scheme
via the BK transformation can be that
of the underlying IBE scheme plus 384 bits,
while the original BK scheme yields that of the underlying IBE scheme plus at least 704 bits,
for 128-bit security.
Our encapsulation scheme is constructed from
a pseudorandom generator (PRG) that has a special property
called near collision resistance,
which is a fairly weak primitive.
As evidence, we also show how to generically
construct a PRG with such a property
from any one-way permutation.
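The general shape of such an encapsulation scheme can be conveyed with a toy sketch (my own simplification, not the paper's construction): commit by publishing part of the PRG output and derive the encapsulated value from the rest. SHAKE-128 stands in for the PRG here, and all sizes are illustrative assumptions.

```python
import hashlib, os

def G(seed: bytes, outlen: int) -> bytes:
    """Stand-in PRG (SHAKE-128, purely for illustration)."""
    return hashlib.shake_128(seed).digest(outlen)

def setup(k: int = 16):
    r = os.urandom(k)               # randomness to be encapsulated
    out = G(r, 2 * k)
    com, key = out[:k], out[k:]     # commitment part and derived value
    return com, key, r              # (commitment, encapsulated value, opening)

def recover(com: bytes, r: bytes, k: int = 16) -> bytes:
    out = G(r, 2 * k)
    if out[:k] != com:              # the opening must re-derive the commitment
        raise ValueError("invalid opening")
    return out[k:]
```

Intuitively, near collision resistance of the PRG on the committed part is what prevents opening the commitment to a different r, and it is a much weaker requirement than full collision resistance of a hash function.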
abstract
In this paper, we present a simple and generic method for constructing
public key encryption (PKE) secure against chosen ciphertext attacks (CCA)
from identity-based encryption (IBE).
Specifically, we show that a CCA-secure PKE scheme can be generically obtained by encrypting (m||r)
under identity ``f(r)'' with the encryption algorithm of the given IBE scheme,
assuming that the IBE scheme is non-malleable and f is one-way.
In contrast to the previous generic methods (such as Canetti-Halevi-Katz),
our method requires stronger security for the underlying IBE schemes, non-malleability,
and thus cannot be seen as a direct improvement of the previous methods.
However, once we have an IBE scheme which is proved (or can be assumed) to be
non-malleable, we obtain a PKE scheme via our simple method,
and we believe that the simplicity of our proposed transformation itself
is theoretically interesting.
Our proof technique for security of the proposed scheme is also novel.
In the security proof, we show how to deal with certain types of decryption queries
that cannot be handled straightforwardly using conventional techniques.
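The transformation itself is compact enough to sketch in a few lines (my own notation; the IBE algorithms and the one-way function f are abstract callables, and all proof-level details are elided):

```python
import os

def pke_encrypt(ibe_enc, params, f, m: bytes):
    r = os.urandom(32)                    # fresh randomness
    identity = f(r)                       # identity derived via the one-way f
    c = ibe_enc(params, identity, m + r)  # encrypt (m || r) under identity f(r)
    return identity, c

def pke_decrypt(ibe_keygen, ibe_dec, msk, f, identity, c, mlen: int):
    sk_id = ibe_keygen(msk, identity)     # the PKE secret key is the IBE master key
    plaintext = ibe_dec(sk_id, c)
    m, r = plaintext[:mlen], plaintext[mlen:]
    if f(r) != identity:                  # consistency check binding r to the identity
        raise ValueError("reject")
    return m
```

The one-wayness of f hides r even though the identity travels in the clear, while the non-malleability of the IBE scheme prevents an attacker from mauling (m || r) into a related valid ciphertext.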
abstract
Unforgeability of digital signatures is closely related to
the security of hash functions since hashing messages,
as in the hash-and-sign paradigm, is necessary
in order to sign (arbitrarily) long messages.
Recent successful collision finding attacks against practical hash functions
would indicate that constructing practical collision resistant hash functions
is difficult to achieve.
Thus, it is worth considering relaxing the requirement of collision resistance for
the hash functions used to hash messages in signature schemes.
Currently, the most efficient strongly unforgeable signature scheme
in the standard model which is based on the CDH assumption (in bilinear groups)
is the Boneh-Shen-Waters (BSW) signature proposed in 2006.
In their scheme, however, a collision resistant hash function
is necessary to prove its security.
In this paper, we construct a signature scheme
which has the same properties as the BSW scheme
but does not rely on collision resistant hash functions.
Instead, we use a target collision resistant hash function,
which is a strictly weaker primitive than a collision resistant hash function.
Our scheme is, in terms of the signature size and the computational cost,
as efficient as the BSW scheme.
abstract
Unforgeability of digital signatures is closely related to the
security of hash functions since hashing messages, as in the hash-and-sign
paradigm, is necessary in order to sign (arbitrarily) long messages. Recent
successful collision finding attacks against practical hash functions
would indicate that constructing practical collision resistant hash functions
is difficult to achieve. Thus, it is worth considering relaxing the
requirement of collision resistance for the hash functions used to hash
messages in signature schemes. Currently, the most efficient strongly unforgeable
signature scheme in the standard model which is based on the
CDH assumption (in bilinear groups) is the Boneh-Shen-Waters (BSW)
signature proposed in 2006. In their scheme, however, a collision resistant hash
function is necessary to prove its security. In this paper, we
construct a signature scheme which has the same properties as the BSW
scheme but does not rely on collision resistant hash functions. Instead,
we use a target collision resistant hash function, which is a strictly weaker
primitive than a collision resistant hash function. Our scheme is, in terms
of the signature size and the computational cost, as efficient as the BSW
scheme.
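For orientation, the gap between the two hash notions contrasted above can be stated informally (these are the standard definitions, not wording from the paper):

```latex
\textbf{CR:}\ \text{given } H,\ \text{find any pair } x \neq x'
  \text{ with } H(x) = H(x'). \\
\textbf{TCR:}\ \text{commit to } x \text{ first; then, given a random key } s,
  \text{ find } x' \neq x \text{ with } H_s(x') = H_s(x).
```

TCR is strictly weaker because the adversary must fix the target before learning the hashing key, which is why basing signatures on TCR hashes is the more robust design choice.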
abstract
Several content distribution services via the Internet have been developed,
and a number of bidirectional broadcasting services will be provided in the near future.
Since such bidirectional broadcasting handles personal information of the users,
provider authentication is necessary.
Taking the currently existing broadcasting system using CAS cards into account,
Ohtake et al. recently proposed the provider authentication system which utilizes
key-insulated signature (KIS) schemes.
However, the authors did not specify what kind of KIS should be used.
In this paper we supplement their work in terms of the KIS specification.
We carefully identify what kind of KIS should be used and propose concrete KIS schemes
which realize both the reliability and the robustness required for the bidirectional broadcasting service.
千葉 大輝
Research Theme
Publications
Graduated/left in November 2010
北川 隆
Research Theme
Publications
Graduated/left in September 2010
Jacob Schuldt
Research Theme
Publications
abstract
Undeniable signatures, introduced by Chaum and van Antwerpen, and designated confirmer signatures, introduced by Chaum, allow a signer to control the verifiability of his signatures by requiring a verifier to interact with the signer to verify a signature. An important security requirement for these types of signature schemes is non-transferability, which informally guarantees that even though a verifier has confirmed the validity of a signature by interacting with the signer, he cannot prove this knowledge to a third party. Recently, Liskov and Micali pointed out that the commonly used notion of non-transferability only guarantees security against an off-line attacker who cannot influence the verifier while he interacts with the signer, and that almost all previous schemes relying on interactive protocols are vulnerable to on-line attacks. To address this, Liskov and Micali formalized on-line non-transferable signatures, which are resistant to on-line attacks, and proposed a generic construction based on a standard signature scheme and an encryption scheme.
In this paper, we revisit on-line non-transferable signatures.
First, we extend the security model of Liskov and Micali to cover not only the sign protocol, but also the confirm and disavow protocols executed by the confirmer. Our security model furthermore considers the use of multiple (potentially corrupted or malicious) confirmers, and guarantees security against attacks related to the use of signer-specific confirmer keys.
We then present a new approach to the construction of on-line non-transferable signatures, and propose a new concrete construction which is provably secure in the standard model. Unlike the construction by Liskov and Micali, our construction does not require the signer to issue ``fake'' signatures to maintain security, and allows the confirmer to both confirm and disavow signatures.
Lastly, our construction provides noticeably shorter signatures than the construction by Liskov and Micali.
abstract
We provide an enhanced security model for proxy signatures
that captures a more realistic set of attacks than previous models of Boldyreva et al. and of Malkin et al.
Our model is motivated by concrete attacks on existing schemes in scenarios
in which proxy signatures are likely to be used.
We provide a generic construction for proxy signatures secure in our enhanced model
using sequential aggregate signatures;
our construction provides a benchmark by which future specific constructions may be judged.
Finally, we consider the extension of our model and constructions to the identity-based setting.
abstract
We propose new instantiations of chosen-ciphertext secure identity-based encryption schemes with wildcards (WIBE).
Our schemes outperform all existing alternatives in terms of efficiency as well as security.
We achieve these results by extending the hybrid encryption (KEM/DEM) framework to the case of WIBE schemes.
We propose, and prove secure, one generic construction in the random oracle model and one direct construction in the standard model.
施 屹
Research Theme
Publications
abstract
Tor is a state-of-the-art low-latency anonymous communication system that provides TCP services
for applications on the Internet. It involves several techniques to defend against attacks. We previously
presented a paper introducing the fingerprinting attack on the Tor system. In this paper, we present a modified
threat model targeting the leaky-pipe technique used by Tor, in order to achieve a higher success rate. In this
model, the attacker colludes with a malicious onion router. Even if this onion router is not an exit router, the
fingerprinting attack can still achieve a higher success rate. We also discuss methods for defending
against the attack.
abstract
We present a novel way to implement a fingerprinting attack against onion routing anonymity
systems such as Tor. Our attack is a realistic threat in the sense that it can be mounted by nothing more than
the controller of entrance routers; the required resources are very small. However, the conventional fingerprinting
attack based on incoming traffic does not work straightforwardly against Tor due to the multiplexed and quantized
nature of its traffic. By contrast, our novel attack can degrade Tor's anonymity using a metric based on
both incoming and outgoing packets. In addition, our method keeps the fingerprinting attack's advantage
of being realistic in terms of the small resources required. The effectiveness of
our method is evaluated in a comprehensive manner, both experimentally and theoretically. In order to encourage
further studies and show the significance of our idea, we also discuss methods for defending against our attack
and other applications of our idea.
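An illustrative sketch of the bidirectional idea (my own simplification, not the paper's exact metric): a page load is summarized as per-interval counts of incoming and outgoing packets, and an observed trace is matched to the nearest stored fingerprint. The interval size and squared-distance metric are assumptions for illustration.

```python
def fingerprint(trace: list, interval: float = 0.5) -> dict:
    """trace: list of (timestamp, direction), direction +1 incoming / -1 outgoing."""
    bins = {}
    for t, d in trace:
        k = int(t / interval)
        inc, out = bins.get(k, (0, 0))
        bins[k] = (inc + (d > 0), out + (d < 0))
    return bins

def distance(fp1: dict, fp2: dict) -> int:
    keys = set(fp1) | set(fp2)
    return sum((fp1.get(k, (0, 0))[0] - fp2.get(k, (0, 0))[0]) ** 2 +
               (fp1.get(k, (0, 0))[1] - fp2.get(k, (0, 0))[1]) ** 2
               for k in keys)

def classify(observed: dict, labelled: dict) -> str:
    """labelled: {site_name: fingerprint}; return the closest candidate site."""
    return min(labelled, key=lambda s: distance(observed, labelled[s]))
```

Using both directions is the key point: Tor's cell quantization flattens incoming-only profiles, but the joint (incoming, outgoing) shape still leaks enough to rank candidate sites.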
Graduated/left in June 2010
付 紹静
Research Theme
Publications
abstract
We provide two new construction methods for nonlinear resilient S-boxes with
a given degree. The first method is based on the use of linear error-correcting codes together
with highly nonlinear S-boxes. Given a [u, m, t+1] linear code, where u = n-d-1 and d > m,
we show that it is possible to construct (n, m, t, d) resilient S-boxes which achieve the currently
best known nonlinearity. Our second construction provides highly nonlinear (n, m, t, d) resilient
S-boxes which have no linear structure; an improved version of this construction is then
given.
keywords
Cryptography, Linear code, Resiliency, Linear structure, Nonlinearity
abstract
Rotation symmetric Boolean functions (RSBFs) have been used as components of different cryptosystems.
In this paper, we investigate n-variable (n even, n >= 12) RSBFs that achieve maximum algebraic
immunity (AI), and provide a construction of RSBFs with maximum AI and high nonlinearity. These functions have
higher nonlinearity than the previously known nonlinearity of RSBFs with maximum AI. We also prove that
our construction provides high algebraic degree in some cases.
keywords
Boolean function, rotation symmetry, algebraic immunity, nonlinearity
abstract
In this paper, we study the construction of rotation symmetric Boolean functions (RSBFs)
that achieve maximum algebraic immunity (AI). We provide, for the first time, a construction of
balanced 2p-variable (p an odd prime) RSBFs with maximum AI, whose nonlinearity is at least
$2^{2p-1}-\binom{2p-1}{p}+(p-2)(p-3)+2$;
this nonlinearity is significantly higher than the previously best known nonlinearity
of RSBFs with maximum AI.
keywords
Stream cipher, Rotation symmetry, Boolean function, Algebraic immunity
abstract
Constructing degree-optimized resilient Boolean functions with high nonlinearity
is a significant study area for Boolean functions.
In this letter, we provide a construction of degree-optimized n-variable (n odd, n >= 35)
resilient Boolean functions, and show that the resulting functions achieve
the currently best known nonlinearity.
keywords
Stream cipher, Boolean function, resiliency, nonlinearity
Graduated/left in March 2010
楊 鵬
Research Theme
Publications
abstract
We propose an identity-based encryption scheme with forward security. In particular, in our scheme
the top-level secret, called the master key, evolves over time. Our scheme is provably secure in the sense of
FS-IND-ID-CPA under the DBDH assumption in the standard model.
keywords
Forward security, identity-based encryption, master key update.
abstract
We propose an identity-based encryption scheme with forward security.
In particular, in our scheme the top-level secret, called the master key, evolves over time.
Our scheme is provably secure in the sense of FS-IND-ID-CPA under the DBDH assumption in the standard model.
keywords
Forward security, identity-based encryption, master key update.
abstract
Stateful public key encryption schemes were introduced recently, offering substantial efficiency
improvements over traditional stateless schemes. However, previous proposals are either based on strong
assumptions or admit loose security reductions (a barrier to practically meaningful proofs). In this paper, we
present a stateful public key encryption scheme with a tight security reduction to the computational
Diffie-Hellman assumption (cf. gap Diffie-Hellman), as well as a stateful identity-based encryption scheme with
a tighter security reduction to the computational bilinear Diffie-Hellman problem.
keywords
Stateful public key encryption, security reduction.
abstract
Identity-based encryption (IBE) schemes have been flourishing since the very
beginning of this century. In IBE it is widely believed that proving the security
of a scheme in the sense of IND-ID-CCA2 is sufficient to claim that the scheme is also
secure in the senses of both SS-ID-CCA2 and NM-ID-CCA2. The justification for
this belief is the set of relations among indistinguishability (IND), semantic security
(SS), and non-malleability (NM). But these relations were proved only for conventional
public key encryption (PKE) schemes in previous works, and there is a difference
of special importance between IBE and PKE: only in IBE can the
adversary perform a particular attack, namely the chosen-identity attack.
This paper shows that security proved in the sense of IND-ID-CCA2 is indeed
sufficient to imply security in every other sense in IBE. That is, the security
notion IND-ID-CCA2 captures the essence of security for all IBE schemes.
To this end, we first describe formal definitions of the notions of
security for IBE, and then present the relations among IND, SS, and NM in
IBE, along with rigorous proofs. All of these results are established with
the chosen-identity attack taken into consideration.
keywords
Identity-based encryption, security notions.
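For orientation, the corresponding relations long known for conventional PKE, of which this work establishes identity-based analogues, are usually drawn as follows (informal background; ATK ranges over CPA, CCA1, CCA2):

```latex
\mathrm{IND\text{-}ATK} \;\Longleftrightarrow\; \mathrm{SS\text{-}ATK}, \qquad
\mathrm{NM\text{-}ATK} \;\Longrightarrow\; \mathrm{IND\text{-}ATK}, \qquad
\mathrm{IND\text{-}CCA2} \;\Longleftrightarrow\; \mathrm{NM\text{-}CCA2}.
```

The chosen-identity attack is what has to be threaded through each of these proofs before the equivalences can be claimed in the IBE setting.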
abstract
The Fujisaki-Okamoto (FOpkc) conversion [13] and REACT [17] are widely known to be able to generically convert a weak public key encryption scheme into a strong encryption scheme.
In this paper, we discuss applications of the FOpkc conversion and REACT to identity-based encryption (IBE).
It had not been formally verified whether these conversions are generic in the IBE setting.
Our results show that both conversions are effective in the IBE case:
plain REACT already achieves a good security reduction, while the plain FOpkc conversion results in a poor running time for the simulator.
We further propose a simple modification to the plain FOpkc that solves this problem.
Finally, we choose concrete parameters to illustrate (visually) how the modified FOpkc substantially improves the reduction cost relative to the plain conversion.
keywords
Fujisaki-Okamoto, identity-based encryption, security enhancement.
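For background, the plain FO conversion in the PKE setting has the following well-known shape; the IBE variants discussed above thread an identity through the weak scheme. This is a toy sketch: weak_enc/weak_dec are abstract placeholders for the OW-CPA scheme, SHA-256 stands in for the random oracles, and messages are limited to 32 bytes for simplicity.

```python
import hashlib, os

def H(*parts: bytes) -> bytes:   # random oracle deriving encryption coins
    return hashlib.sha256(b"|".join(parts)).digest()

def G(seed: bytes) -> bytes:     # random oracle used as a one-time mask
    return hashlib.sha256(b"G" + seed).digest()

def fo_encrypt(weak_enc, pk, m: bytes):
    sigma = os.urandom(32)                    # randomness committed to
    c1 = weak_enc(pk, sigma, H(sigma, m))     # coins derived from (sigma, m)
    c2 = bytes(a ^ b for a, b in zip(G(sigma), m.ljust(32, b"\0")))
    return c1, c2

def fo_decrypt(weak_enc, weak_dec, pk, sk, c1, c2) -> bytes:
    sigma = weak_dec(sk, c1)
    m = bytes(a ^ b for a, b in zip(G(sigma), c2)).rstrip(b"\0")  # toy padding
    if weak_enc(pk, sigma, H(sigma, m)) != c1:  # re-encryption check
        raise ValueError("reject")
    return m
```

The re-encryption check is what enforces ciphertext well-formedness; the modified conversions discussed in these abstracts keep this generic shape while improving the running time of the simulator in the security reduction.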
abstract
This paper shows that the standard security notion for identity-based
encryption schemes (IBE), that is, IND-ID-CCA2, captures the
essence of security for all IBE schemes. To achieve this goal, we first
describe formal definitions of the notions of security for IBE, and then
present the relations among OW, IND, SS, and NM in IBE, along with
rigorous proofs. With the aim of comprehensiveness, notions of security
for IBE in the context of encryption of multiple messages and/or to
multiple receivers are also presented. All of these results are established
with consideration of the particular attack in IBE, namely the adaptive
chosen-identity attack.
keywords
Identity-based encryption, security notions.
abstract
The Fujisaki-Okamoto (FO) conversion is widely known to be able to generically convert a weak public key encryption scheme,
say one secure in the sense of one-wayness against chosen plaintext attacks (OW-CPA), into a strong one, namely, one indistinguishable against adaptive chosen ciphertext attacks (IND-CCA).
It was not known whether the same holds for identity-based encryption (IBE) schemes,
though many IBE and variant schemes in fact specifically use the FO conversion.
In this paper, we investigate this issue and confirm that the FO conversion is generically effective in the IBE case as well.
However, a straightforward application of the FO conversion only leads to an IBE scheme with a loose (but polynomial) reduction.
We then propose a simple modification to the FO conversion, which results in a considerably more efficient security reduction.
keywords
Fujisaki-Okamoto, identity-based encryption, security enhancement.
abstract
The Fujisaki-Okamoto (FO) conversion is a very powerful security enhancement method for public key encryption (PKE) schemes.
The generality of the plain FO for identity-based encryption (IBE) schemes has been verified, and a slightly different version, the modified FO, has been proposed.
Both the plain FO and the modified FO achieve the goal of
converting a weak IBE scheme, i.e., one that is one-way against adaptive chosen identity and chosen plaintext attacks (OW-ID-CPA),
into the strongest one, namely, one indistinguishable against adaptive chosen identity and adaptive chosen ciphertext attacks (IND-ID-CCA).
This work evaluates the plain FO and the modified FO by substituting appropriate concrete values.
Focusing mainly on the time costs of the security reductions, we show that the modified FO is better than the plain one.
中井 泰雅
Research Theme
Publications
abstract
In 2005, Hwang et al. proposed the concept of timed-release
encryption with pre-open capability (TRE-PC), in which a receiver can decrypt
a ciphertext not only by using a time-release key which is provided
after its release-time, but also by using secret information called a pre-open
key provided by the sender even before the release-time. Though
several concrete constructions of TRE-PC have been proposed so far, no
generic construction has been known. In this paper, we show a generic
construction of TRE-PC. Specifically, we construct a TRE-PC scheme
from a chosen-ciphertext secure public key encryption (PKE) scheme,
a chosen-plaintext secure identity-based encryption (IBE) scheme with
a specific property that we call target collision resistance for randomness,
and a one-time signature scheme.
Interestingly, our proposed construction of TRE-PC is essentially the
same as the generic construction of (normal) TRE based on multiple
encryption with IBE and PKE. As one consequence of our result,
we can build a TRE-PC scheme secure in the standard model based on
weaker assumptions than the ones used by the existing standard-model
TRE-PC scheme.
Graduated/left in March 2009
渡邉 悠
Research Theme
Publications
Graduated/left in March 2008
北田 亘
Research Theme
Publications
abstract
In identity-based encryption (IBE), each entity has one identity that specifies it.
A sender picks the identity of the entity he wants to send to, and a receiver holds the identity that specifies himself.
IBE works if the two identities evaluate as equal.
In this paper, we show an encryption scheme that allows not only the equality relation but also more general relations.
In particular, our scheme allows any bitwise operation that can be expressed as a combinational circuit when evaluating labels, a generalized notion of identities.
abstract
Chaffing-and-winnowing is a cryptographic technique which does not require encryption but instead uses a message authentication code (MAC) to provide the same functionality as encryption.
Hanaoka et al. showed that unconditionally secure chaffing-and-winnowing with one-time security can be constructed from any authentication code (A-code) with one-time security.
In this paper, we show a construction of unconditionally secure chaffing-and-winnowing for multiple use and prove its perfect secrecy and non-malleability.
Additionally, we investigate the relation between encryption and authentication in more detail.
In particular, we show through chaffing-and-winnowing that a fully secure A-code with a specific property can be converted into a non-malleable one-time pad with a short ciphertext size.
Interestingly, when this method is applied to a known A-code, it yields a known construction of a non-malleable one-time pad.
This fact implies that the notions of authentication and encryption can be seamlessly connected by the chaffing-and-winnowing mechanism.
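Rivest's original bit-by-bit form of chaffing-and-winnowing conveys the mechanism this abstract builds on; the toy sketch below is only that baseline (computationally secure MAC, illustrative sizes), not the unconditionally secure A-code-based construction of the paper.

```python
import hashlib, hmac, os, random

def mac(key: bytes, serial: int, bit: int) -> bytes:
    return hmac.new(key, serial.to_bytes(4, "big") + bytes([bit]),
                    hashlib.sha256).digest()

def chaff_and_send(key: bytes, bits: list) -> list:
    stream = []
    for serial, bit in enumerate(bits):
        stream.append((serial, bit, mac(key, serial, bit)))   # wheat: valid MAC
        stream.append((serial, 1 - bit, os.urandom(32)))      # chaff: random tag
    random.shuffle(stream)
    return stream

def winnow(key: bytes, stream: list) -> list:
    wheat = {s: b for (s, b, t) in stream
             if hmac.compare_digest(t, mac(key, s, b))}       # keep valid packets
    return [wheat[s] for s in sorted(wheat)]
```

No packet is ever encrypted, yet an eavesdropper without the MAC key cannot tell wheat from chaff, so the stream provides confidentiality through authentication alone.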
abstract
At Eurocrypt '04, Canetti, Halevi, and Katz (CHK) proposed a generic transformation that converts any selectively secure identity-based encryption (IBE) scheme
into a chosen-ciphertext secure public-key encryption (PKE) scheme.
At PKC '06, Kiltz showed a limitation of this transformation.
He showed that when the CHK conversion (together with some equivalent simplification) is applied to two different IBE schemes, both proposed by Boneh and Boyen,
the resulting schemes are nearly the same in structure, not two completely different PKE schemes as expected.
Nevertheless, the two PKE schemes differ in their underlying assumptions:
one is based on the Bilinear Decision Diffie-Hellman (BDDH) assumption, while the other is based on the Square Bilinear Decision Diffie-Hellman (sBDDH) assumption.
To emphasize the limitation in a stronger sense, it is desirable to show the similarity not only of their structures, but also of their underlying assumptions.
We argue that the BDDH and sBDDH assumptions are related in an essential way by showing the equivalence of their computational versions.
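For reference, the two assumptions can be stated side by side in standard bilinear-group notation (informal background, not the paper's formal treatment; e is the pairing and T the challenge element):

```latex
\textbf{BDDH:}\ \text{given } (g,\, g^{a},\, g^{b},\, g^{c},\, T),
  \text{ decide whether } T = e(g,g)^{abc}. \\
\textbf{sBDDH:}\ \text{given } (g,\, g^{a},\, g^{b},\, T),
  \text{ decide whether } T = e(g,g)^{a^{2}b}.
```

sBDDH replaces the independent exponent c with a second copy of a, which is why relating the two assumptions strengthens Kiltz's observation that the two derived PKE schemes are essentially one.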
Vadim Jefte Zendejas Samano
Research Theme
Publications
abstract
In this article, we propose a new method of language-independent e-mail
classification using Social Network Analysis (SNA)
for spam filtering. Our approach uses a time categorization of
different instances of the e-mail to improve the classification of the filter.
The proposal reduces the complexity of the classification and increases
the accuracy of the filter. Although naive SNA suffers from a high
unclassification rate, our proposal decreases the number of unclassified
e-mails.
Phan Thi Lan Anh
Research Theme
Publications
Matsuura Laboratory, Department of Informatics and Electronics, Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan