Study on Massive-Scale Slow-Hash Recovery Using Unified Probabilistic Context-Free Grammar and Symmetrical Collaborative Prioritization with Parallel Machines
Slow-hash algorithms are designed to defend against traditional offline password recovery by making the hash function very slow to compute. In this paper, we study the problem of slow-hash recovery on a large scale. We attack the problem by proposing a novel concurrent model that guesses the target...
Main Authors: | Tianjun Wu, Yuexiang Yang, Chi Wang, Rui Wang |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2019-04-01 |
Series: | Symmetry |
Subjects: | data security; distributed computing; probabilistic context-free grammar; slow hash |
Online Access: | https://www.mdpi.com/2073-8994/11/4/450 |
id |
doaj-3c6c1f5f6bcc4bc781828333fb488301 |
---|---|
record_format |
Article |
spelling |
doaj-3c6c1f5f6bcc4bc781828333fb488301 (harvested 2020-11-25T01:05:22Z); eng; Symmetry, MDPI AG, ISSN 2073-8994; published 2019-04-01, vol. 11, iss. 4, art. 450; DOI: 10.3390/sym11040450 |
Title: Study on Massive-Scale Slow-Hash Recovery Using Unified Probabilistic Context-Free Grammar and Symmetrical Collaborative Prioritization with Parallel Machines |
Authors: Tianjun Wu and Yuexiang Yang (College of Computer, National University of Defense Technology, Changsha 410073, China); Chi Wang and Rui Wang (VeriClouds Co., Seattle, WA 98105, USA) |
Abstract: Slow-hash algorithms are designed to defend against traditional offline password recovery by making the hash function very slow to compute. In this paper, we study the problem of slow-hash recovery on a large scale. We attack the problem by proposing a novel concurrent model that guesses the target password hash by leveraging known passwords from the largest-ever password corpus. Previously proposed password-reuse learning models are specifically designed for targeted online guessing against a single hash and thus cannot be efficiently parallelized for the massive-scale offline recovery demanded by modern hash-cracking tasks. In particular, because the size of a probabilistic context-free grammar (PCFG) model is non-trivial and keeping track of the next most probable password to guess across all global accounts is difficult, we choose careful data structures and expand transformations only as needed to make the attack computationally tractable. Our adoption of a max-min heap, which globally ranks weak accounts for both expanding and guessing according to unified PCFGs and allows for concurrent global ranking, significantly increases the number of hashes that can be recovered within a limited time. For example, 59.1% of the accounts in one of our target password lists can be found in our source corpus, allowing our solution to recover 20.1% of the accounts within one week at an average speed of 7200 non-identical passwords cracked per hour. Previous solutions such as oclHashcat (using its default configuration) crack at an average speed of 28 passwords per hour and would need months to recover the same number of accounts with equal computing resources, which is infeasible for a real-world attacker who must weigh the gain against the cracking cost. This implies an underestimated threat to slow-hash-protected password dumps. Our method provides organizations with a better model of offline attackers, helping them decide the hashing costs of slow-hash algorithms and detect potentially vulnerable credentials before hackers do. |
Online access: https://www.mdpi.com/2073-8994/11/4/450; subjects: data security; distributed computing; probabilistic context-free grammar; slow hash |
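To put the quoted throughput gap in perspective, a quick back-of-envelope calculation uses only the figures reported in the abstract (7200 vs. 28 non-identical passwords cracked per hour); the variable names and script below are illustrative, not from the paper.

```python
# Figures taken from the abstract; everything else is illustrative.
rate_pcfg = 7200    # non-identical passwords cracked per hour (proposed model)
rate_hashcat = 28   # passwords per hour (oclHashcat, default configuration)

hours_per_week = 7 * 24
week_yield = rate_pcfg * hours_per_week   # passwords recovered in one week
speedup = rate_pcfg / rate_hashcat        # relative throughput

print(f"one-week yield: {week_yield} passwords")   # 1209600
print(f"speedup over oclHashcat: {speedup:.0f}x")  # ~257x
```

At equal computing resources, the roughly 257x throughput ratio is what makes the one-week recovery window feasible for the proposed model but not for the baseline.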
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Tianjun Wu; Yuexiang Yang; Chi Wang; Rui Wang |
title |
Study on Massive-Scale Slow-Hash Recovery Using Unified Probabilistic Context-Free Grammar and Symmetrical Collaborative Prioritization with Parallel Machines |
publisher |
MDPI AG |
series |
Symmetry |
issn |
2073-8994 |
publishDate |
2019-04-01 |
description |
Slow-hash algorithms are designed to defend against traditional offline password recovery by making the hash function very slow to compute. In this paper, we study the problem of slow-hash recovery on a large scale. We attack the problem by proposing a novel concurrent model that guesses the target password hash by leveraging known passwords from the largest-ever password corpus. Previously proposed password-reuse learning models are specifically designed for targeted online guessing against a single hash and thus cannot be efficiently parallelized for the massive-scale offline recovery demanded by modern hash-cracking tasks. In particular, because the size of a probabilistic context-free grammar (PCFG) model is non-trivial and keeping track of the next most probable password to guess across all global accounts is difficult, we choose careful data structures and expand transformations only as needed to make the attack computationally tractable. Our adoption of a max-min heap, which globally ranks weak accounts for both expanding and guessing according to unified PCFGs and allows for concurrent global ranking, significantly increases the number of hashes that can be recovered within a limited time. For example, 59.1% of the accounts in one of our target password lists can be found in our source corpus, allowing our solution to recover 20.1% of the accounts within one week at an average speed of 7200 non-identical passwords cracked per hour. Previous solutions such as oclHashcat (using its default configuration) crack at an average speed of 28 passwords per hour and would need months to recover the same number of accounts with equal computing resources, which is infeasible for a real-world attacker who must weigh the gain against the cracking cost. This implies an underestimated threat to slow-hash-protected password dumps.
Our method provides organizations with a better model of offline attackers, helping them decide the hashing costs of slow-hash algorithms and detect potentially vulnerable credentials before hackers do. |
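The description above hinges on expanding PCFG transformations lazily while always knowing the next most probable guess. The sketch below illustrates that idea with a toy grammar and Python's `heapq` (a plain max-heap via negated probabilities, rather than the paper's max-min heap over accounts); the grammar, names, and probabilities are invented for illustration and are not the authors' implementation.

```python
import heapq

# Toy unified PCFG. Each base structure maps to (structure probability,
# per-slot terminal lists sorted by descending probability). All values
# here are invented for illustration.
GRAMMAR = {
    "L3 D2": (0.6, [
        [("abc", 0.5), ("pwd", 0.3), ("qwe", 0.2)],  # letter slot
        [("12", 0.7), ("99", 0.3)],                  # digit slot
    ]),
    "D4": (0.4, [
        [("1234", 0.6), ("2019", 0.4)],
    ]),
}

def _prob(struct, idx):
    """Probability of the guess that picks terminal idx[i] in slot i."""
    p, slots = GRAMMAR[struct]
    for i, j in enumerate(idx):
        p *= slots[i][j][1]
    return p

def guesses(limit):
    """Return up to `limit` (password, probability) pairs in descending
    probability, materializing children lazily (a Weir-style "next"
    function on a heap instead of enumerating the whole grammar)."""
    heap, seen, out = [], set(), []
    for struct in GRAMMAR:
        idx = (0,) * len(GRAMMAR[struct][1])  # most probable terminal per slot
        heapq.heappush(heap, (-_prob(struct, idx), struct, idx))
        seen.add((struct, idx))
    while heap and len(out) < limit:
        negp, struct, idx = heapq.heappop(heap)
        slots = GRAMMAR[struct][1]
        out.append(("".join(slots[i][j][0] for i, j in enumerate(idx)), -negp))
        # Children: advance one slot at a time to its next terminal.
        for i in range(len(idx)):
            j = idx[i] + 1
            if j < len(slots[i]):
                child = idx[:i] + (j,) + idx[i + 1:]
                if (struct, child) not in seen:
                    seen.add((struct, child))
                    heapq.heappush(heap, (-_prob(struct, child), struct, child))
    return out
```

Only the frontier of candidate expansions lives on the heap at any moment, which is what keeps the memory footprint tractable when the same scheme is scaled out across many accounts and machines.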
topic |
data security; distributed computing; probabilistic context-free grammar; slow hash |
url |
https://www.mdpi.com/2073-8994/11/4/450 |