A couple of weeks ago we looked at some of the work carried out at the Blockpass Identity Lab (BIL) over the past year or so, as the Coronavirus pandemic has continued. Work at the lab has gone on unabated: the team has grown, and its researchers and members continue to break new ground in the field.
The initial article looking at the lab’s progress examined four papers that members of the team had worked on and can be found here.
This article focuses on another four papers:
An authentication protocol based on chaos and zero knowledge proof
Members of the Blockpass Identity Lab, Professor Bill Buchanan and Jawad Ahmad, worked on a paper titled ‘An authentication protocol based on chaos and zero knowledge proof’, which looks at how Port Knocking can achieve its full potential. Port Knocking is a method of authenticating through firewalls that enhances security by opening ports only when the correct ‘knocking’ sequence is applied – rather like a secret knock at an exclusive club. The solution proposed in the paper, named ‘Crucible’, focuses on usability and stealth when securely authenticating users, and would protect servers from a range of attacks such as port scans and zero-day exploits. Drawing inspiration from Zero-Knowledge Proofs (ZKPs) and chaotic systems, Crucible aims to authenticate users in a minimalist, secure and stateless manner while maintaining strong performance.
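Crucible’s actual construction, drawing on ZKPs and chaotic systems, is considerably more involved, but the basic port-knocking idea – a secret-derived, time-varying knock sequence that a stateless server can recompute and verify – can be sketched roughly as follows (all names, parameters and the derivation scheme here are illustrative, not taken from the paper):

```python
import hmac, hashlib, time

def knock_sequence(secret: bytes, window: int, n_ports: int = 3) -> list:
    """Derive a one-time sequence of ports from a shared secret and the
    current time window, so the knock changes every interval and a
    replayed knock from an old window is useless."""
    digest = hmac.new(secret, str(window).encode(), hashlib.sha256).digest()
    # Map successive 16-bit chunks of the digest into the high port range.
    return [1024 + int.from_bytes(digest[2 * i:2 * i + 2], "big") % 64000
            for i in range(n_ports)]

secret = b"shared-secret"
window = int(time.time()) // 30           # 30-second validity windows
client_knock = knock_sequence(secret, window)
server_expected = knock_sequence(secret, window)
assert client_knock == server_expected    # server would now open the port
```

Because both sides derive the sequence independently, the server keeps no per-client state between knocks, echoing the stateless goal described above.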
Asymmetric Private Set Intersection with Applications to Contact Tracing and Private Vertical Federated Machine Learning
Two research candidates at the lab, Pavlos Papadopoulos and Adam James Hall, contributed to the next paper: ‘Asymmetric Private Set Intersection with Applications to Contact Tracing and Private Vertical Federated Machine Learning’. In this paper, the authors examine privacy-preserving data analysis methods (such as federated learning and homomorphic encryption), focusing on the problem of comparing encrypted data in a privacy-centric manner. Private Set Intersection is a protocol that allows two parties to discover the elements their data sets have in common without revealing anything else about those sets. The solution discussed in the paper builds on Private Set Intersection methods with a library that supports multiple browsers, platforms and languages – to an extent not previously achieved – whilst maintaining high performance. This has potential uses in many situations, including medical data, where it could, for example, enable privacy-centric contact tracing during a pandemic, or the amalgamation of medical records across disparate practices.
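The paper’s asymmetric protocol is more sophisticated than this, but the classic Diffie-Hellman-style Private Set Intersection construction below conveys the core idea: each party blinds the hashes of its items with a private exponent, and because the blinding commutes, double-blinded values match exactly when the underlying items do. The modulus and data here are toy choices for illustration, not secure parameters:

```python
import hashlib, secrets

P = 2**127 - 1  # small Mersenne prime as a demo modulus (NOT secure)

def h(item: str) -> int:
    """Hash an item into the group."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

def blind(items, key):
    """First blinding: raise each hashed item to a private exponent."""
    return {pow(h(x), key, P) for x in items}

def reblind(blinded, key):
    """Second blinding: apply the other party's exponent on top."""
    return {pow(v, key, P) for v in blinded}

a_key = secrets.randbelow(P - 2) + 1
b_key = secrets.randbelow(P - 2) + 1
alice = {"alice@x.com", "bob@x.com", "carol@x.com"}
bob = {"bob@x.com", "dave@x.com"}

# Each side blinds its own set and exchanges it; the other side applies
# its key too. Since h(x)^(ab) == h(x)^(ba), double-blinded values are
# equal exactly when the original items are equal.
a_double = reblind(blind(alice, a_key), b_key)
b_double = reblind(blind(bob, b_key), a_key)
common = a_double & b_double
print(len(common))  # 1 – only "bob@x.com" appears in both sets
```

Neither party ever sees the other’s raw items, only blinded group elements, which is the privacy property the paper’s library builds on.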
Privacy-Preserving Healthcare Framework Using Hyperledger Fabric
The BIL’s Pavlos Papadopoulos, Nikolaos Pitropakis and William J Buchanan all worked on a paper that delves into the potential of distributed ledger technology for electronic health record management. ‘Privacy-Preserving Healthcare Framework Using Hyperledger Fabric’ examines problems such as privacy and scalability in electronic health record management systems, and proposes a solution named PREHEALTH to address them. By combining Hyperledger Fabric’s permissioned blockchain framework with its Identity Mixer credential scheme, PREHEALTH is designed to provide an anonymous and unlinkable store of patient records that performs effectively and efficiently, secured against many potential avenues of attack. This is just one of many examples of how this kind of technology can provide much-needed security for, and improve the functionality of, medical systems in particular.
A Distributed Trust Framework for Privacy-Preserving Machine Learning
All five authors of this last paper are part of the Blockpass Identity Lab, with Will Abramson, Adam James Hall, Pavlos Papadopoulos, Nikolaos Pitropakis and Professor Bill Buchanan working in concert to produce ‘A Distributed Trust Framework for Privacy-Preserving Machine Learning’. The paper discusses the problem of training machine learning models whilst preserving privacy and maintaining confidence in the validity of the results: with traditional methods, there is a risk that researchers copy private information, or that data providers introduce data to skew the model. The solution proposed involves using Decentralised Identifiers (DIDs) and DID Communication to preserve privacy, prevent malicious actors from influencing the system, and ensure interoperability as machine learning takes place. The proof of concept was successful, and future work could see the method applied to wider-ranging privacy-preserving networks.
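The framework itself rests on DIDs and DID Communication; as a loose stand-in, the sketch below uses a simple key registry and HMAC tags to show the gatekeeping idea – an aggregator accepts model updates only from parties whose identity and signature it can verify, so an unknown actor cannot skew the result. The DIDs, keys and update values are all hypothetical:

```python
import hmac, hashlib, json

# Toy registry standing in for resolvable DID documents: DID -> key.
registry = {"did:example:hospital-a": b"key-a",
            "did:example:hospital-b": b"key-b"}

def sign_update(did, key, update):
    """A party authenticates its model update with its key."""
    payload = json.dumps({"did": did, "update": update},
                         sort_keys=True).encode()
    return payload, hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_and_aggregate(messages):
    """Average only the updates whose sender and tag check out."""
    total, n = 0.0, 0
    for payload, tag in messages:
        msg = json.loads(payload)
        key = registry.get(msg["did"])
        expected = (hmac.new(key, payload, hashlib.sha256).hexdigest()
                    if key else "")
        # Drop updates from unregistered parties or with bad tags.
        if key is None or not hmac.compare_digest(tag, expected):
            continue
        total += msg["update"]
        n += 1
    return total / n if n else None

good = sign_update("did:example:hospital-a", b"key-a", 0.4)
forged = sign_update("did:example:mallory", b"wrong-key", 9.9)
print(verify_and_aggregate([good, forged]))  # 0.4 – the forged update is dropped
```

In the paper’s setting, DID-anchored public keys and DID Communication channels play the role this toy registry and HMAC play here, with the added benefit of decentralised, verifiable identity.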